US20070041444A1 - Stereoscopic 3D-video image digital decoding system and method - Google Patents

Stereoscopic 3D-video image digital decoding system and method

Info

Publication number
US20070041444A1
US20070041444A1 (application US11/510,262)
Authority
US
United States
Prior art keywords
video
image
tdvision
sequence
stereoscopic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/510,262
Inventor
Manuel Gutierrez Novelo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TD VISION CORP DE C V SA
Original Assignee
TD VISION CORP DE C V SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (see the Darts-ip "Global patent litigation dataset", licensed under a Creative Commons Attribution 4.0 International License): https://patents.darts-ip.com/?family=34910116&patent=US20070041444(A1)
Application filed by TD VISION CORP DE C V SA
Assigned to TD VISION CORPORATION S.A. DE C.V. Assignment of assignors interest (see document for details). Assignors: NOVELO, MANUEL RAFAEL GUTIERREZ
Publication of US20070041444A1
Priority to US12/837,421 (US9503742B2)
Priority to US15/094,808 (US20170070742A1)
Priority to US15/644,307 (US20190058894A1)
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 19/30: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N 19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/593: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N 19/597: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H04N 19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/61: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N 19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Abstract

Described herein is an MPEG-2 compatible stereoscopic 3D-video image digital decoding method and system. In order to obtain 3D-images from a digital video stream, modifications are made to current MPEG2 decoders by means of software and hardware changes in different parts of the decoding process. Namely, the video_sequence structures of the video data stream are modified via software to include the necessary flags identifying, at the bit level, the image type in the TDVision® technology. Modifications are also made in the decoding processes and in decoding the information via software and hardware, wherein a double output buffer is activated, a parallel and difference decoding selector is activated, the decompression process is executed, and the corresponding output buffer is displayed; the decoder is also programmed via software to simultaneously receive and decode two independent program streams, each with a TDVision® stereoscopic identifier.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of PCT Application No. PCT/MX2004/000012, filed on Feb. 27, 2004 in the Spanish language.
  • FIELD OF THE INVENTION
  • The present invention is related to stereoscopic video image display in the 3DVisor® device and, particularly, to a video image decoding method by means of a digital data compression system, which allows the storage of three-dimensional information by using standardized compression techniques.
  • BACKGROUND OF THE INVENTION
  • Presently, data compression techniques are used in order to decrease the bit consumption in the representation of an image or a series of images. The standardization work was carried out by expert groups of the International Organization for Standardization. Presently, the methods are usually known as JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group).
  • A common characteristic of these techniques is that the image blocks are processed by applying a transform adequate for the block, usually the Discrete Cosine Transform (DCT). The resulting blocks are submitted to a quantization process and then coded with a variable-length code.
  • Variable-length coding is a reversible process, which allows exact reconstruction of what was coded with the variable-length code.
  • The display of digital video signals includes a certain number of image frames (30 to 96 fps) displayed or represented successively at a 30 to 75 Hz frequency. Each image frame is a still image formed by an array of pixels at the display resolution of a particular system. For example, the VHS system has a display resolution of 320 columns and 480 rows, the NTSC system has a display resolution of 720 columns and 486 rows, and the high-definition television system (HDTV) has a display resolution of 1360 columns and 1020 rows. In a digitized low-resolution form, the 320-column by 480-row VHS format, a two-hour movie could be equivalent to 100 gigabytes of digital video information. In comparison, a conventional compact optical disk has an approximate capacity of 0.6 gigabytes, a magnetic hard disk has a 1-2 gigabyte capacity, and present compact optical disks have a capacity of 8 or more gigabytes.
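  • As a rough check of the 100-gigabyte figure above (assuming, for illustration, 3 bytes per pixel and 30 frames per second, uncompressed): 320 × 480 pixels × 3 bytes × 30 frames/s × 7,200 s ≈ 99.5 gigabytes for a two-hour movie, consistent with the value cited.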
  • All the images we watch on cinema and TV screens are based on the principle of presenting complete images (static images, like photographs) at great speed. When they are presented in a fast, sequential manner at 30 frames per second (30 fps), we perceive them as an animated image due to the retention characteristic of the human eye.
  • In order to code the images to be presented in a sequential manner and form video signals, each image needs to be divided into rows, where each line is in turn divided into picture elements or pixels; each pixel has two associated values, namely luma and chroma. Luma represents the light intensity at each point, while chroma represents the color as a function of a defined color space (RGB), which can be represented by three bytes.
  • The images are displayed on a screen in a horizontal-vertical raster, top to bottom and left to right, and so on, cyclically. The number of lines and the display frequency can change as a function of the format, such as NTSC, PAL, or SECAM.
  • The video signals can be digitized for storage in digital format and, after being transmitted, received, and decoded, displayed on a display device such as a regular television set or the 3DVisor®; this process is known as analog-to-digital video signal coding-decoding.
  • By definition, MPEG has two different methods for interleaving video and audio in the system streams.
  • The transport stream is used in systems with a higher error probability, such as satellite systems, which are susceptible to interference. Each packet is 188 bytes long, starting with an identification header, which makes recognizing gaps and repairing errors possible. Several audio and video programs can be transmitted simultaneously on a single transport stream; thanks to the header, they can be independently and individually decoded and integrated into many programs.
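  • For illustration only (not part of the original disclosure), a minimal C sketch of how the fixed 4-byte header of such a 188-byte transport packet could be parsed; the field layout follows the MPEG-2 systems syntax, while the structure and function names are hypothetical. Because each program carries its own packet identifier (PID), a decoder can pull the audio and video of one program out of the multiplex by filtering on PID, which is what allows the independent decoding mentioned above.

    #include <stdint.h>

    #define TS_PACKET_SIZE 188
    #define TS_SYNC_BYTE   0x47

    /* Hypothetical holder for the fixed transport-packet header fields. */
    typedef struct {
        uint8_t  sync_byte;                 /* always 0x47 */
        uint8_t  transport_error_indicator; /* set when the packet arrived damaged */
        uint8_t  payload_unit_start;        /* a new PES packet or section starts here */
        uint16_t pid;                       /* 13-bit packet identifier */
        uint8_t  continuity_counter;        /* 4-bit counter, used to detect gaps */
    } ts_header_t;

    /* Returns 0 on success, -1 if the sync byte is missing. */
    static int parse_ts_header(const uint8_t p[TS_PACKET_SIZE], ts_header_t *h)
    {
        if (p[0] != TS_SYNC_BYTE)
            return -1;
        h->sync_byte                 = p[0];
        h->transport_error_indicator = (uint8_t)((p[1] >> 7) & 0x01);
        h->payload_unit_start        = (uint8_t)((p[1] >> 6) & 0x01);
        h->pid                       = (uint16_t)(((p[1] & 0x1F) << 8) | p[2]);
        h->continuity_counter        = (uint8_t)(p[3] & 0x0F);
        return 0;
    }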
  • The program stream is used in systems with a lower error probability, as in DVD playback. In this case, the packets have a variable length and a size substantially greater than the packets used in the transport stream. As a main characteristic, the program stream allows only a single program content.
  • Even though the transport and program streams handle different packets, the video and audio formats are decoded in an identical form.
  • In turn, there are three compression types applied to the packets above, namely time (temporal) prediction, compression, and space (spatial) compression.
  • Decoding is associated with a lengthy mathematical process whose purpose is to decrease the information volume. The complete image of a full frame is divided into units called macroblocks; each macroblock is made up of a 16 pixel × 16 pixel matrix and is ordered and numbered top to bottom and left to right. Even though the macroblocks form a matrix array on screen, the information sent over the information stream follows a strict sequential order, i.e., the macroblocks are sent in ascending order: macroblock 0, macroblock 1, etc.
  • A set of consecutive macroblocks represents a slice; there can be any number of macroblocks in a slice, provided that the macroblocks pertain to a single row. As with the macroblocks, the slices are numbered from left to right and bottom to top. The slices should cover the whole image, although, given the way MPEG2 compresses the video, a coded image does not necessarily need samples for each pixel. Some MPEG profiles require handling a rigid slice structure, by which the whole image should be covered.
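  • As a small illustration of the macroblock ordering just described (hypothetical helpers, assuming a 16 × 16 macroblock grid; not part of the original disclosure):

    #include <stdio.h>

    /* Number of 16x16 macroblocks needed to cover one picture dimension. */
    static int mb_count(int pixels) { return (pixels + 15) / 16; }

    /* Ascending macroblock number for the macroblock at (mb_row, mb_col),
       counting top to bottom and left to right as described above. */
    static int mb_index(int mb_row, int mb_col, int picture_width)
    {
        return mb_row * mb_count(picture_width) + mb_col;
    }

    int main(void)
    {
        /* Illustrative 720x480 picture: 45 x 30 macroblocks. */
        printf("%d x %d macroblocks\n", mb_count(720), mb_count(480));
        printf("macroblock at row 2, col 3 -> number %d\n", mb_index(2, 3, 720)); /* 93 */
        return 0;
    }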
  • U.S. Pat. No. 5,963,257, granted on Oct. 5, 1999 to Katata et al., protects a flat video image decoding device with means to separate the coded data by position areas and image form, bottom-layer code, and predictive-coding top-layer code, thus obtaining a hierarchical structure of the coded data; the decoder has means to separate the data coded in the hierarchical structure in order to obtain a high-quality image.
  • U.S. Pat. No. 6,292,588, granted on Sep. 18, 2001 to Shen et al., protects a device and method for coding predictive flat images reconstructed and decoded from a small region, in such a way that the data of the reconstructed flat image are generated from the sum of the small-region image data and the optimal prediction data for said image. Said predictive decoding device for an image data stream includes a variable-length code for one-dimensional DCT coefficients. U.S. Pat. No. 6,370,276, granted on Apr. 9, 2002 to Boon, uses a decoding method similar to the above.
  • U.S. Pat. No. 6,456,432, granted on Sep. 24, 2002 to Lazzaro et al., protects a stereoscopic 3D-image display system, which takes images from two perspectives, displays them on a CRT, and multiplexes the images in a field-sequential manner with no flickering for either eye of the observer.
  • U.S. Pat. No. 6,658,056, granted on Dec. 2, 2003 to Duruoz et al., protects a digital video decoder comprising a logical display section responding to a "proximal field" command to get a digital video field from designated locations in an output memory. The digital video display system is equipped with an MPEG2 video decoder. Images are decoded into a memory buffer; the memory buffer is optimized by maintaining compensation variable tables and accessing fixed memory pointer tables displayed as data fields.
  • U.S. Pat. No. 6,665,445, granted on Dec. 16, 2003 to Boon, protects a data structure for image transmission, a flat-image coding method, and a flat-image decoding method. The decoding method comprises two parts: the first part codes the image-form information data stream, and the second part is a decoding process for the pixel values of the image data stream; both parts can be switched depending on the flat image signal coding.
  • U.S. Pat. No. 6,678,331, granted on Jan. 13, 2004 to Moutin et al., protects an MPEG decoder that uses a shared memory. The circuit includes a microprocessor, an MPEG decoder that decodes a flat image sequence, and a memory common to the microprocessor and the decoder. It also includes a circuit for evaluating the decoder delay and a control circuit for determining the memory priority for the microprocessor or the decoder.
  • U.S. Pat. No. 6,678,424, granted on Jan. 13, 2004 to Ferguson, protects a behavior model for a real-time human vision system; it processes two image signals in two dimensions, one derived from the other, in different channels.
  • BRIEF DESCRIPTION OF THE INVENTION
  • It is an object of the present invention to provide a stereoscopic 3D-video image digital decoding system and method, comprised of changes in software and changes in hardware.
  • It is an additional object of the present invention to provide a decoding method where the normal video_sequence process is applied to the coded image data, i.e., variable_length_decoding (VLD), inverse_scan, inverse_quantization, inverse_discrete_cosine_transform (IDCT), and motion_compensation.
  • It is also an object of the present invention to make changes in the software information for decoding: identifying the video format, maintaining 2D-image MPEG2 backward compatibility, discriminating a TDVision® type image, storing the last image buffer, applying information decoding, applying error correction, and storing the results in the respective channel buffer.
  • It is still another object of the present invention to provide a decoding method following the normal form of the video_sequence process, in such a way that when a TDVision® type image is found, the buffer of the last complete image is stored in the left or right channel buffer.
  • It is also another object of the present invention to provide a decoding process in which two interdependent (difference) video signals can be sent within the same video_sequence, in which information decoding is applied and is stored as a B type frame.
  • It is still another object of the present invention to provide a decoding process in which error correction is applied to the last obtained image when the movement and color correction vectors are applied.
  • It is also an object of the present invention to program the decoder by software to simultaneously receive and decode two independent program streams.
  • It is still another object of the present invention to provide a decoding system, which decodes the 3D-image information via hardware, in which a double output buffer is activated.
  • It is another object of the present invention to provide a decoding system of 3D-image information, which activates an image-decoding selector in parallel and by differences.
  • It is also another object of the present invention to provide a 3D-image information decoding system, which executes the decompression process and displays the corresponding output buffer.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 represents one embodiment of a technology map
  • FIG. 2 shows a flowchart in which the steps of one embodiment of a process are outlined.
  • FIG. 3 illustrates structures that can be modified and the video_sequence of the data stream in order to identify the TDVision® technology image type at the bit level.
  • FIG. 4 shows one embodiment of the compilation software format for the TDVision® decoding method (40).
  • FIG. 5 is a representation of one embodiment of the decoding compilation format of the hardware.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The combination of hardware and software algorithms makes stereoscopic 3D-image information compression possible. The images are received as two independent video signals with the same time_code, corresponding to the left and right signals coming from a 3Dvision® camera, and are sent as two simultaneous programs with stereoscopic pair identifiers, thus facilitating the coding-decoding process. Alternatively, two interdependent video signals can be handled by obtaining their difference, which is stored as a "B" type frame with the image type identifier. As the coding process was left open in order to promote technological development, it is only necessary to follow this decoding process, namely: apply variable-length decoding to the coded data, where a substantial reduction is obtained but a look-up table should be used to carry out decoding; apply an inverse scan process; apply an inverse quantization process in which each data value is multiplied by a scalar; apply the inverse cosine transform function; apply the error correction or motion compensation stage; and eventually obtain the decoded image.
  • The novel characteristics of this invention in connection with its structure and operation method will be better understood from the description of the accompanying figures, together with the attached specification, where similar numerals refer to similar parts and steps.
  • FIG. 1 represents the technology map to which the subject matter of the present invention pertains. It shows a stereoscopic 3D-image coding and decoding system and corresponding method. The images come from a stereoscopic camera (32), the information is compiled in (31), and the images are displayed in any adequate system (30) or (33). The information is coded in (34) and can then be transmitted to a system having an adequate previous decoding stage such as (35), which may be a cable system (36), a satellite system (37), a high-definition television system (38), or a stereoscopic vision system such as TDVision®'s 3DVisors® (39).
  • FIG. 2 shows a flowchart in which the steps of the process are outlined. The objective is to obtain three-dimensional images from a digital video stream by making modifications to current MPEG2 decoders, with changes to software (3) and hardware (4) in the decoding process (2); the decoder (1) should be compatible with MPEG2-4.
  • FIG. 3 outlines the structures that should be modified and the video_sequence of the data stream in order to identify the TDVision® technology image type at the bit level.
  • Each of the stages of the decoding process is detailed below (20):
  • The coded data (10) are bytes carrying block, macroblock, field, frame, and MPEG2-format video image information.
  • Variable_length_decoding (11) (VLD, variable-length decoding) reverses a compression algorithm in which the most frequent patterns are replaced by shorter codes and those occurring less frequently are replaced by longer codes. The compressed version of this information occupies less space and can be transmitted faster over networks. However, it is not an easily editable format and requires decompression using a look-up table.
  • For example, for the word BEETLE:
    Letter    ASCII code    VLC
    B         0100 0010     0000 0010 10
    E         0110 0101     11
    L         0110 1100     0001 01
    T         0111 0100     0100
  • Therefore, the ASCII code for the word is:
  • 0100 0010 0110 0101 0110 0101 0111 0100 0110 1100 0110 0101
  • and in VLC: 0000 0010 10 11 11 0100 0001 01 11.
  • A substantial decrease is noted; however, in order to go back from the VLC to the word "BEETLE", a search in the look-up table is needed to decode the bit stream, which is done by exact comparison of the read bits.
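  • A minimal C sketch of this table-driven decoding, using the illustrative codes from the BEETLE example above (the table representation and function names are hypothetical and not part of the original disclosure):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative VLC table taken from the BEETLE example above. */
    typedef struct { const char *code; char symbol; } vlc_entry_t;

    static const vlc_entry_t table[] = {
        { "0000001010", 'B' },
        { "11",         'E' },
        { "000101",     'L' },
        { "0100",       'T' },
    };

    /* Decode a string of '0'/'1' characters by exact comparison of the read
       bits against the look-up table, as described for the VLD stage. */
    static void vlc_decode(const char *bits)
    {
        size_t pos = 0, n = strlen(bits);
        while (pos < n) {
            int matched = 0;
            for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
                size_t len = strlen(table[i].code);
                if (pos + len <= n && memcmp(bits + pos, table[i].code, len) == 0) {
                    putchar(table[i].symbol);
                    pos += len;
                    matched = 1;
                    break;
                }
            }
            if (!matched) { printf("\n(decoding error)\n"); return; }
        }
        putchar('\n');
    }

    int main(void)
    {
        /* The VLC stream for "BEETLE" from the example above. */
        vlc_decode("0000001010" "11" "11" "0100" "000101" "11");  /* prints BEETLE */
        return 0;
    }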
  • Inverse scan (12): The information should be grouped in blocks, but coding the information with the VLC yields a linear stream. The blocks are 8 × 8 data matrices, so it is necessary to convert the linear information into a square 8 × 8 matrix. This is done in a descending zigzag manner, top to bottom and left to right, with two scan-order variants depending on whether the image is progressive or interlaced.
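  • A minimal C sketch of this inverse scan step, assuming the classic zigzag order used for progressive pictures (the alternate scan used for interlaced pictures differs); the function names are illustrative:

    /* Build the zigzag scan order for an 8x8 block by walking the
       anti-diagonals and alternating direction, starting at the top-left. */
    static void build_zigzag(int scan[64])
    {
        int i = 0;
        for (int d = 0; d < 15; d++) {
            int lo = d < 8 ? 0 : d - 7;
            int hi = d < 8 ? d : 7;
            if (d & 1)
                for (int r = lo; r <= hi; r++) scan[i++] = r * 8 + (d - r);  /* downward */
            else
                for (int r = hi; r >= lo; r--) scan[i++] = r * 8 + (d - r);  /* upward   */
        }
    }

    /* Inverse scan: place the 64 linearly received coefficients back into
       their positions in the square 8x8 block. */
    static void inverse_scan(const int linear[64], int block[64])
    {
        int scan[64];
        build_zigzag(scan);
        for (int k = 0; k < 64; k++)
            block[scan[k]] = linear[k];
    }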
  • Inverse quantization (13): This consists simply in multiplying each data value by a factor. When coded, most of the data in the blocks are quantized to remove information that the human eye is not able to perceive; quantization allows a greater MPEG2 stream compression, and the inverse process (inverse quantization) is therefore required in the decoding stage.
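  • A minimal C sketch of the idea (the exact MPEG2 reconstruction arithmetic, with its rounding, saturation, and mismatch-control rules, is omitted; the names are illustrative):

    /* Multiply each received coefficient back by a factor derived from the
       quantization matrix entry and the quantizer scale, as described above. */
    static void inverse_quantize(const int qcoeff[64], const unsigned char w[64],
                                 int quantizer_scale, int coeff[64])
    {
        for (int k = 0; k < 64; k++)
            coeff[k] = (qcoeff[k] * (int)w[k] * quantizer_scale) / 16;
    }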
  • Inverse cosine transform (14) (IDCT, inverse_discrete_cosine_transform): The data handled within each block pertain to the frequency domain; the inverse cosine transform allows returning to the samples of the spatial domain. Once the data have been transformed by the IDCT, pixels, colors, and color corrections can be obtained.
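  • For reference, a direct (non-optimized) C implementation of the commonly defined 8 × 8 inverse DCT; production decoders use fast factorized versions, so this is only an illustration of the transform itself:

    #include <math.h>

    static const double PI = 3.14159265358979323846;

    /* Reference 8x8 inverse DCT: converts frequency-domain coefficients F
       back into spatial-domain samples f, as described above. */
    static void idct_8x8(const double F[8][8], double f[8][8])
    {
        for (int x = 0; x < 8; x++) {
            for (int y = 0; y < 8; y++) {
                double sum = 0.0;
                for (int u = 0; u < 8; u++) {
                    for (int v = 0; v < 8; v++) {
                        double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                        double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                        sum += cu * cv * F[u][v]
                             * cos((2 * x + 1) * u * PI / 16.0)
                             * cos((2 * y + 1) * v * PI / 16.0);
                    }
                }
                f[x][y] = 0.25 * sum;
            }
        }
    }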
  • Motion compensation (15) allows correcting some of the errors generated before the decoding stage of the MPEG format: motion compensation takes a previous frame as a reference, calculates a motion vector for the pixels (it can calculate up to four vectors), and uses them to create a new image. This motion compensation is applied to the P and B type images, where the image position is located over a time "t" from the reference images. In addition to the motion compensation, error correction is also applied, as it is not enough to predict the position of a particular pixel; a change in its color can also exist. Thus, the decoded image is obtained (16).
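  • A minimal C sketch of this prediction-plus-error-correction step for one 16 × 16 macroblock (half-pel interpolation, edge clipping, and chroma handling are omitted; the names are illustrative and not part of the original disclosure):

    #include <stdint.h>

    static uint8_t clamp255(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

    /* Fetch the prediction from the reference frame at the position displaced
       by the motion vector and add the decoded residual ("error correction"). */
    static void motion_compensate_mb(const uint8_t *ref, int stride,
                                     int mb_x, int mb_y,   /* top-left of the macroblock */
                                     int mv_x, int mv_y,   /* motion vector, whole pixels */
                                     const int residual[16][16],
                                     uint8_t *dst)
    {
        for (int y = 0; y < 16; y++) {
            for (int x = 0; x < 16; x++) {
                int pred = ref[(mb_y + y + mv_y) * stride + (mb_x + x + mv_x)];
                dst[(mb_y + y) * stride + (mb_x + x)] =
                    clamp255(pred + residual[y][x]);
            }
        }
    }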
  • To decode a P or B type image, the reference image is taken, the motion vectors are algebraically added to calculate the next image, and finally the error correction data are applied, thus generating the decoded image successfully. In the video_sequence, two interdependent video signals exist: R − L = delta, and this delta difference is what is stored as a B type stereoscopic pair frame with the TDVision® identifier, from which the missing view is constructed at decoding time by differences. That is, R − delta = L and L + delta = R: the left image is obtained from the right image and the difference, and the right image in turn is obtained from the left image and the difference.
  • The previous process is outlined in such a way that the left or right signal is taken, both are stored in a temporary buffer, the difference between the left and right signals is calculated, and this difference is coded as a B type image stored in the video_sequence, to be later decoded by differences from said image.
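  • A minimal C sketch of this difference handling on plain luma samples (illustrative only; in the method described, the difference is stored as a coded B type frame inside the video_sequence, not as raw samples):

    #include <stdint.h>

    /* At coding time, the difference delta = R - L is computed... */
    static void stereo_delta(const uint8_t *left, const uint8_t *right,
                             int16_t *delta, int n)
    {
        for (int i = 0; i < n; i++)
            delta[i] = (int16_t)(right[i] - left[i]);
    }

    /* ...and at decoding time the right view is rebuilt as R = L + delta
       (symmetrically, L = R - delta). */
    static void stereo_rebuild_right(const uint8_t *left, const int16_t *delta,
                                     uint8_t *right, int n)
    {
        for (int i = 0; i < n; i++) {
            int v = left[i] + delta[i];
            right[i] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
        }
    }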
  • From the decoding process it can be deduced that the data entering the VLC stage are much smaller in volume than the data that stage outputs.
  • MPEG video sequence structure: This is the highest-level structure used in the MPEG2 format and has the following form:
  • Video sequence (Video_Sequence)
  • Sequence header (Sequence_Header)
  • Sequence extension (Sequence_Extension)
  • User Data (0) and Extension (Extension_and_User_Data (0))
  • Image group header (Group_of_Picture_Header)
  • User Data (1) and Extension (Extension_and_User_Data (1))
  • Image header (Picture_Header)
  • Coded image extension (Picture_Coding_Extension)
  • User Data (2) and Extensions (Extension_and_User_Data (2))
  • Image Data (Picture_Data)
  • Slice(Slice)
  • Macroblock (Macroblock)
  • Motion vectors (Motion_Vectors)
  • Coded Block Pattern (Coded_Block_Pattern)
  • Block (Block)
  • Final Sequence Code (Sequence_end_Code)
  • These structures make up the video sequence. A video sequence applies to both MPEG formats; in order to differentiate each version, it should be verified that the sequence extension is present immediately after the sequence header: should the sequence extension not follow the header, then the stream is in MPEG1 format.
  • At the beginning of a video sequence, the sequence_header and sequence_extension appear in the video_sequence. The repetitions of the sequence_extension should be identical to the first occurrence, whereas the repetitions of the sequence_header vary little compared to the first occurrence; only the portion defining the quantization matrices should change. Having these repetitions allows random access to the video stream, i.e., if the decoder wants to start playing in the middle of the video stream it may do so, as it only needs to find the sequence_header and sequence_extension prior to that moment in order to decode the following images. This also applies to video streams that cannot be started from the beginning, such as a satellite decoder turned on after the transmission has started.
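  • A minimal C sketch of the random-access behaviour just described, scanning a buffered stream for the sequence_header start code 0x000001B3 (the function name is hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    /* Returns the byte offset of the first sequence_header start code found,
       or -1 if none is present; decoding can begin at the first
       sequence_header/sequence_extension pair located this way. */
    static long find_sequence_header(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i + 4 <= len; i++) {
            if (buf[i] == 0x00 && buf[i + 1] == 0x00 &&
                buf[i + 2] == 0x01 && buf[i + 3] == 0xB3)
                return (long)i;
        }
        return -1;
    }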
  • The full video signal coding-decoding process is comprised of the following steps:
  • Digitizing the video signals, which can be done in NTSC, PAL or SECAM format.
  • Storing the video signal in digital form
  • Transmitting the signals
  • Recording the digital video stream in a physical media (DVD, VCD, MiniDV)
  • Receiving the signals
  • Playing the video stream
  • Decoding the signal
  • Displaying the signal
  • It is essential to double the memory handled by the adequate DSP and to have up to 8 output buffers available, which allow the prior and simultaneous representation of a stereoscopic image on a device such as TDVision®'s 3DVisor®.
  • In practice, two channels should be initialized when calling the programming API of the DSP, as in the illustrative case of the Texas Instruments TMS320C62X DSP:
  • MPEG2VDEC_create(const IMPEG2VDEC_fxns *fxns, const MPEG2VDEC_Params *params).
  • Where IMPEG2VDEC_fxns and MPEG2VDEC_Params are pointer structures defining the operation parameters for each video channel, e.g.:
  • 3DLhandle=MPEG2VDEC_create(fxns3DLEFT, Params3DLEFT).
  • 3DRhandle=MPEG2VDEC_create(fxns3DRIGHT, Params3DRIGHT).
  • This enables two video channels to be decoded, obtaining two video handlers, one for each channel of the left-right stereoscopic pair.
  • A double display output buffer is needed, and it will be defined by means of software which of the two buffers should display the output by calling the API function:
  • Namely, MPEG2VDEC_APPLY(3DRhandle, inputR1, inputR2, inputR3, 3doutright_pb, 3doutright_fb).
  • MPEG2VDEC_APPLY(3DLhandle, inputL1, inputL2, inputL3, 3doutleft_pb, 3doutleft_fb).
  • This same procedure can be implemented for any DSP, microprocessor or electronic device with similar functions.
  • Where 3DLhandle is the pointer to the handle returned by the DSP's create function, the input1 parameter is the FUNC_DECODE_FRAME or FUNC_START_PARA address, input2 is the pointer to the external input buffer address, and input3 is the size of the external input buffer.
  • 3doutleft_pb is the address of the parameter buffer and 3doutleft_fb is the beginning of the output buffer where the decoded image will be stored.
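  • Putting the calls above together, a hedged C sketch of the two-channel initialization and per-frame decode. The MPEG2VDEC_* names follow the API exactly as quoted in this description, but the actual header, types, and signatures depend on the codec release, so this is a sketch rather than a drop-in implementation; the handles are named handle3DL/handle3DR only because C identifiers may not begin with a digit.

    #include "impeg2vdec.h"   /* hypothetical vendor header declaring the decoder API */

    extern const IMPEG2VDEC_fxns  *fxns3DLEFT,   *fxns3DRIGHT;    /* per-channel functions  */
    extern const MPEG2VDEC_Params *Params3DLEFT, *Params3DRIGHT;  /* per-channel parameters */

    static MPEG2VDEC_Handle handle3DL, handle3DR;
    static int channels_created = 0;

    /* Create one decoder instance per eye, then decode one frame per channel
       into its own output buffer; time_code/timestamp keep both outputs in sync. */
    void decode_stereo_pair(void *inL1, void *inL2, int inL3,
                            void *inR1, void *inR2, int inR3,
                            void *outleft_pb,  void *outleft_fb,
                            void *outright_pb, void *outright_fb)
    {
        if (!channels_created) {
            handle3DL = MPEG2VDEC_create(fxns3DLEFT,  Params3DLEFT);
            handle3DR = MPEG2VDEC_create(fxns3DRIGHT, Params3DRIGHT);
            channels_created = 1;
        }
        MPEG2VDEC_APPLY(handle3DL, inL1, inL2, inL3, outleft_pb,  outleft_fb);
        MPEG2VDEC_APPLY(handle3DR, inR1, inR2, inR3, outright_pb, outright_fb);
    }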
  • The timecode and timestamp will be used for output to the final device in a sequential, synchronized manner.
  • It is essential to double the memory to be handled by the DSP and have the possibility of disposing of up to 8 output buffers which allow the previous and simultaneous display of a stereoscopic image on a device such as TDVision® Corporation's 3DVisor®.
  • The integration of software and hardware processes is carried out by devices known as DSPs, which execute most of the hardware process. These DSPs are programmed in a hybrid of C and Assembly language provided by the manufacturer. Each DSP has its own API, consisting of a list of functions or procedure calls located in the DSP and called by software.
  • With this reference information, the present application for MPEG2 format-compatible 3D-image decoding is made.
  • At the beginning of a video sequence, the sequence header (sequence_header) and the sequence extension always appear. The repetitions of the sequence extension should be identical to the first. In contrast, the sequence header repetitions vary a little compared to the first occurrence; only the portion defining the quantization matrices should change.
  • FIG. 4 shows the compilation software format for the TDVision® decoding method (40), where the video_sequence (41) of the digital stereoscopic image video stream, which may be dependent or independent (parallel images), is identified in the sequence_header (42). If the image is a TDVision® image, then the double buffer is activated and the changes in the aspect_ratio_information are identified. The information corresponding to the image that can be found here is read in the user_data (43). The sequence_scalable_extension (44) identifies the information contained in it and the base and enhancement layers; the video_sequence can be located here, and it defines the scalable_mode and the layer identifier. extra_bit_picture (45) identifies the picture_structure; picture_header and the picture_coding_extension (46) read the "B" type images, and if it is a TDVision® type image, then the second buffer is decoded. picture_temporal_scalable_extension( ) (47), in case of temporal scalability, is used to decode B type images.
  • Namely, the sequence header (sequence_header) provides a higher information level on the video stream, for clarity purposes the number of bits corresponding to each is also indicated, the most significative bits are located within the sequence extension (Sequence_Extension) structure, it is formed by the following structures:
    Sequence_Header
    Field                               bits    Description
    Sequence_header_code                32      Sequence_Header start code, 0x000001B3
    Horizontal_size_value               12      12 less significant bits for width*
    Vertical_size_value                 12      12 less significant bits for height
    Aspect_ratio_information            4       image aspect:
                                                0000 forbidden
                                                0001 n/a TDVision®
                                                0010 4:3 TDVision®
                                                0011 16:9 TDVision®
                                                0100 2.21:1 TDVision®
                                                0111 will execute a logical "and" in order to
                                                     obtain backward compatibility with 2D systems
                                                0101 ... 1111 reserved
    Frame_rate_code                     4       0000 forbidden
                                                0001 24,000/1001 (23.976) in TDVision® format
                                                0010 24 in TDVision® format
                                                0011 25 in TDVision® format
                                                0100 30,000/1001 (29.97) in TDVision® format
                                                0101 30 in TDVision® format
                                                0110 50 in TDVision® format
                                                0111 60,000/1001 (59.94) in TDVision® format
                                                     (will execute a logical "and" in order to
                                                     obtain backward compatibility with 2D systems)
                                                1000 60
                                                1001 ... 1111 reserved
    Bit_rate_value                      18      The 18 less significant bits of the video_stream
                                                bit rate (bit_rate = 400 × (bit_rate_value +
                                                (bit_rate_extension << 18))); the most significant
                                                bits are located within the sequence_extension
                                                structure.
    Marker_bit                          1       Always 1 (prevents start_code emulation).
    Vbv_buffer_size_value               10      The 10 less significant bits of vbv_buffer_size,
                                                which determines the size of the video buffering
                                                verifier (VBV), a structure used to ensure that a
                                                data stream can be decoded with a limited-size
                                                buffer without overflowing it or leaving too much
                                                free space in the buffer.
    Constrained_parameters_flag         1       Always 0, not used in MPEG2.
    Load_intra_quantizer_matrix         1       Indicates if an intra-coded quantization matrix
                                                is available.
    Intra_quantizer_matrix(64)          8x64    If the previous flag is set
                                                (if (load_intra_quantizer_matrix)), the
                                                quantization matrix is specified here as an
                                                8x64 matrix.
    Load_non_intra_quantizer_matrix     1       Indicates if a non-intra quantization matrix
                                                is available.
    Non_intra_quantizer_matrix(64)      8x64    If the previous flag is set
                                                (if (load_non_intra_quantizer_matrix)), the
                                                8x64 data forming the quantization matrix are
                                                stored here.

    *The most significant bits are located within the sequence_extension structure.
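  • For illustration only, the fields of the table above could be collected in a C structure such as the following; the structure name and byte-aligned layout are assumptions made for clarity, since the actual MPEG2 sequence_header is bit-packed in the stream.

    #include <stdint.h>

    /* Parsed (not bit-packed) view of the sequence_header fields listed above. */
    typedef struct {
        uint16_t horizontal_size_value;          /* 12 bits: LSBs of width          */
        uint16_t vertical_size_value;            /* 12 bits: LSBs of height         */
        uint8_t  aspect_ratio_information;       /*  4 bits: aspect / TDVision code */
        uint8_t  frame_rate_code;                /*  4 bits                         */
        uint32_t bit_rate_value;                 /* 18 bits: LSBs of the bit rate   */
        uint16_t vbv_buffer_size_value;          /* 10 bits: LSBs of the VBV size   */
        uint8_t  constrained_parameters_flag;    /*  1 bit, always 0 in MPEG2       */
        uint8_t  load_intra_quantizer_matrix;    /*  1 bit                          */
        uint8_t  intra_quantizer_matrix[64];     /* present only if flag above set  */
        uint8_t  load_non_intra_quantizer_matrix;
        uint8_t  non_intra_quantizer_matrix[64]; /* present only if flag above set  */
    } sequence_header_t;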
  • Picture_coding_extension
    Field                               bits    Description
    Extension_start_code                32      Always 0x000001B5
    Extension_start_code_identifier     4       Always 1000
    F_code(0)(0)                        4       Used to decode forward motion vectors; when it is
                                                an I type image, this field is filled with 1111.
    F_code(0)(1)                        4
    F_code(1)(0)                        4       Decoding information for backward motion vectors
                                                (B); when it is a P type image it should be set
                                                to 1111, because there is no backward movement.
    F_code(1)(1)                        4       Decoding information for backward motion vectors;
                                                when it is a P type image it should be set to
                                                1111, because there is no backward movement.
    Intra_dc_precision                  2       Precision used in the inverse quantization of the
                                                DC discrete cosine transform coefficients:
                                                00 8-bit precision
                                                01 9-bit precision
                                                10 10-bit precision
                                                11 11-bit precision
    Picture_structure                   2       Specifies whether the image is divided in fields
                                                or is a full frame:
                                                00 reserved (image in TDVision® format)
                                                01 top field
                                                10 bottom field
                                                11 frame picture
    Top_field_first                     1       0 = decode bottom field first
                                                1 = decode top field first
    Frame_pred_frame_dct                1
    Concealment_motion_vectors          1
    Q_scale_type                        1
    Intra_vlc_format                    1
    Alternate_scan                      1
    Repeat_first_field                  1       0 = display one progressive frame
                                                1 = display two identical progressive frames
    Chroma_420_type                     1       If the chroma format is 4:2:0, it should be equal
                                                to progressive_frame; otherwise it should be
                                                equal to zero.
    Progressive_frame                   1       0 = interlaced
                                                1 = progressive
    Composite_display_flag              1       Indicates whether the originally coded composite
                                                display information (the fields below) is present.
    V_axis                              1
    Field_sequence                      3
    Sub_carrier                         1
    Burst_amplitude                     7
    Sub_carrier_phase                   8
    Next_start_code( )
  • Picture_temporal_scalable_extension( )
  • In the case of temporal scalability, two streams with the same spatial resolution exist: the bottom layer provides a lower frame-rate version of the video, while the top layer can be used to derive a higher frame-rate version of the same video. Temporal scalability can be used by low-quality, low-cost or free decoders, while the higher frame rate would be used for a fee.
    Picture_temporal_scalable_extension( )
    Field                               bits    Definition
    Extension_start_code_identifier     4       Always 1010
    Reference_select_code               2       Indicates which reference image will be used to
                                                decode intra_coded images.
                                                For P type images:
                                                00 most recent enhancement-layer images
                                                01 most recent lower-layer frame, in display order
                                                10 next lower-layer frame, in display order
                                                11 forbidden
                                                For B type images:
                                                00 forbidden
                                                01 most recently decoded images in enhancement
                                                   mode
                                                10 most recently decoded images in enhancement
                                                   mode
                                                11 most recent lower-layer image, in display order
    Forward_temporal_reference          10      Temporal reference
    Marker_bit                          1
    Backward_temporal_reference         10      Temporal reference
    Next_start_code( )
  • Picture_spatial_scalable_extension( )
  • In the case of image spatial scalability, the enhancement layer contains data which allow a higher-resolution version of the base layer to be reconstructed. When an enhancement layer uses a base layer as a reference for motion compensation, the bottom layer should be upscaled and offset in order to obtain the greater resolution of the enhancement layer (a rough C sketch follows the field table below).
    Picture_spatial_scalable_extension( )
    Field                                       bits    Definition
    Extension_start_code_identifier             4       Always 1001
    Lower_layer_temporal_reference              10      Reference to the lower layer's temporal
                                                        image
    Marker_bit                                  1       1
    Lower_layer_horizontal_offset               15      Horizontal compensation (offset)
    Marker_bit                                  1       1
    Lower_layer_vertical_offset                 15      Vertical compensation (offset)
    Spatial_temporal_weight_code_table_index    2       Prediction details
    Lower_layer_progressive_frame               1       1 = progressive
                                                        0 = interlaced
    Lower_layer_deinterlaced_field_select       1       0 = the top field is used
                                                        1 = the bottom field is used
    Next_start_code( )
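  • As a rough, assumption-laden illustration of how a base-layer frame might be scaled and shifted by the lower_layer offsets before serving as an enhancement-layer reference, the sketch below uses simple nearest-neighbour resampling; the MPEG2 standard defines its own interpolation, so this is only an approximation, and all names are hypothetical.

    #include <stdint.h>

    /* Nearest-neighbour upscale of a base-layer luma plane into an
     * enhancement-sized plane, shifted by the lower_layer_*_offset fields. */
    static void upscale_lower_layer(const uint8_t *base, int bw, int bh,
                                    uint8_t *enh, int ew, int eh,
                                    int h_offset, int v_offset)
    {
        for (int y = 0; y < eh; y++) {
            for (int x = 0; x < ew; x++) {
                int sx = (x - h_offset) * bw / ew;   /* map back to the base layer */
                int sy = (y - v_offset) * bh / eh;
                if (sx < 0) sx = 0; if (sx >= bw) sx = bw - 1;   /* clamp to edges */
                if (sy < 0) sy = 0; if (sy >= bh) sy = bh - 1;
                enh[y * ew + x] = base[sy * bw + sx];
            }
        }
    }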
  • Copyright_extension( )
    Field                               bits    Definition
    Extension_start_code_identifier     4       Always 0100
    Copyright_flag                      1       If it is equal to 1, copyright information
                                                applies; if it is zero (0), no additional
                                                copyright information is needed.
    Copyright_identifier                8       Identifies the copyright registration authority.
    Original_or_copy                    1       1 = original
                                                0 = copy
    Reserved                            7
    Marker_bit                          1
    Copyright_number_1                  20      Number granted by the copyright authority
    Marker_bit                          1
    Copyright_number_2                  22      Number granted by the copyright authority
    Marker_bit                          1
    Copyright_number_3                  22      Number granted by the copyright authority
    Next_start_code( )
  • Picture_data( )
  • This is a simple structure; it does not have fields of its own.
  • Slice( )
  • Contains information on one or more macroblocks in the same vertical position.
  • Slice_start_code 32
  • Slice_vertical_position_extension 3
  • Priority_breakpoint 7
  • Quantizer_scale_code 5
  • Intra_slice_flag 1
  • Intra_slice 1
  • Reserved_bits 7
  • Extra_bit_slice 1
  • Extra_information_slice 8
  • Extra_bit_slice 1
  • Macroblock( )
  • Macroblock_modes( )
  • Motion_vectors( )
  • Motion vector( )
  • Coded_block_pattern( )
  • Block( )
  • EXTENSION_AND_USER_DATA(2)
  • The image can be displayed in:
  • DVD (Digital Versatile Disks)
  • DTV (Digital Television)
  • HDTV (High Definition Television)
  • CABLE (DVB Digital Video Broadcast)
  • SATELLITE (DSS, Digital Satellite Systems); and this constitutes the integration of the software and hardware processes.
  • The decoding compilation format in the hardware section (50) of FIG. 5 is duplicated in the DSP input memory; at the same time, the simultaneous input of two independent or dependent video signals is allowed, corresponding to the existing left-right stereoscopic signal taken by the stereoscopic TDVision® camera. In this procedure the video_sequence (51) is detected in order to alternate the left and right frames or to send them in parallel; the sequence_header (52) is identified; the image type (53) is identified; the stream passes to the normal video stream (54) and is then submitted to an error correction process (55); the video image information is sent to the output buffer (56), which in turn shares and distributes the information to the left channel (57) and the right channel (58), in which the video stream information is displayed in 3D or 2D.
  • This consists in storing both the L (left) and R (right) video streams simultaneously as two independent video streams, synchronized with the same time_code, so they can later be decoded and played back in parallel using large output buffers. They can also be dependent and decoded by differences.
  • Regarding hardware, most of the process is executed by devices known as DSPs (Digital Signal Processors); as an example, Motorola models or the Texas Instruments TMS320C62X family can be used.
  • These DSPs are programmed in a hybrid of the C and assembly languages provided by the manufacturer in question. Each DSP has its own API, consisting of a list of functions or procedure calls located in the DSP to be called by software. From this reference information, the 3D images are coded, compatible with the MPEG2 format and with their own coding algorithm. When the information is coded, the DSP is in charge of running the prediction, comparison, quantization, and DCT application processes in order to form the MPEG2 compressed video stream.
  • In order to obtain three-dimensional images from a digital video stream, certain modifications have been made to the current MPEG2 decoders, by software and hardware changes in different parts of the decoding process. The structures and the video_sequence of the video data stream should be modified to include the necessary flags to identify at the bit level the TDVision® technology image type.
  • The modifications are made in the following decoding steps; illustrative sketches are given after the software and hardware lists below.
  • Software:
  • Video format identification.
  • Application of a logical “and” for MPEG2 backward compatibility in case of not being a TDVision® video.
  • Image decoding in normal manner (previous technique)
  • scanning the video_sequence.
  • In case of a TDVision® type image:
  • Discriminating if they are dependent or independent video signals
  • Store the last complete image buffer in the left or right channel buffer.
  • Apply the B type frame information decoding.
  • Apply error correction to the last obtained image by applying the motion and color correction vectors.
  • Store the results in their respective channel buffer.
  • Continue the video sequence reading.
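  • The software path above can be summarized, purely as an illustration, in the following C sketch; every helper function and type here (stream_is_tdvision, decode_b_type_difference, etc.) is a hypothetical placeholder for the operations listed, not an actual TDVision® or MPEG2 API.

    /* Hypothetical interfaces standing in for the operations listed above; only
     * the control flow mirrors the described software path, not any real API.  */
    typedef struct bitstream bitstream_t;
    typedef struct frame_buf frame_buf_t;

    int  stream_is_tdvision(bitstream_t *bs);              /* video format identification */
    void apply_2d_backward_compat_and(bitstream_t *bs);    /* logical "and" masking       */
    void decode_2d(bitstream_t *bs, frame_buf_t *out);     /* prior-art 2D decoding       */
    int  scan_video_sequence(bitstream_t *bs);             /* 0 at end of the sequence    */
    int  current_view_is_left(bitstream_t *bs);            /* left or right channel       */
    void store_last_complete_image(frame_buf_t *chan);
    void decode_b_type_difference(bitstream_t *bs, frame_buf_t *chan);
    void apply_motion_and_color_correction(frame_buf_t *chan);

    void decode_stereoscopic_sequence(bitstream_t *bs,
                                      frame_buf_t *left, frame_buf_t *right)
    {
        if (!stream_is_tdvision(bs)) {
            apply_2d_backward_compat_and(bs);      /* MPEG2 backward compatibility */
            decode_2d(bs, left);                   /* normal (prior-art) decoding  */
            return;
        }
        while (scan_video_sequence(bs)) {          /* TDVision type image          */
            frame_buf_t *chan = current_view_is_left(bs) ? left : right;
            store_last_complete_image(chan);           /* keep last complete image */
            decode_b_type_difference(bs, chan);        /* B type frame information */
            apply_motion_and_color_correction(chan);   /* error correction         */
            /* the result stays in its channel buffer; keep reading the sequence */
        }
    }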
  • Hardware:
  • When the information is decoded via hardware (see the sketch after this list):
  • Discriminate whether the image is 2D or 3D.
  • Activate a double output buffer (memory is increased).
  • The difference decoding selector is activated.
  • The parallel decoding selector is activated.
  • The decompression process is executed.
  • The image is displayed in its corresponding output buffer.
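  • As a rough illustration of the hardware steps above, the sketch below collects the selectors into a hypothetical configuration structure; treating the difference and parallel selectors as alternatives chosen per stream is an interpretive assumption, as are all names and sizes.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical decoder configuration mirroring the hardware steps above. */
    typedef struct {
        bool   is_3d;                 /* result of the 2D/3D discrimination */
        bool   double_output_buffer;  /* extra memory for the second view   */
        bool   difference_decoding;   /* difference-decoding selector       */
        bool   parallel_decoding;     /* parallel-decoding selector         */
        size_t output_buffer_bytes;   /* total output buffer size           */
    } hw_decode_config_t;

    /* Sketch: set the selectors according to whether the stream is 2D or 3D and
     * whether the stereo pair arrives as differences or as parallel streams.   */
    static hw_decode_config_t configure_hw_decoder(bool stream_is_3d,
                                                   bool pair_sent_as_difference,
                                                   size_t frame_bytes)
    {
        hw_decode_config_t cfg = {0};
        cfg.is_3d                = stream_is_3d;   /* discriminate 2D / 3D    */
        cfg.double_output_buffer = stream_is_3d;   /* activate double buffer  */
        cfg.difference_decoding  = stream_is_3d && pair_sent_as_difference;
        cfg.parallel_decoding    = stream_is_3d && !pair_sent_as_difference;
        cfg.output_buffer_bytes  = stream_is_3d ? 2 * frame_bytes : frame_bytes;
        return cfg;
    }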
  • The following structures, sub-structures and sequences will be used in specific ways; they belong to the video_sequence structure for the hardware implementation of the MPEG2 backward-compatible TDVision® technology.
  • Namely:
  • Sequence_header
  • Aspect_ratio_information
  • 1001 n/a in TDVision®
  • 1010 4:3 in TDVision®
  • 1011 16:9 in TDVision®
  • 1100 2.21:1 in TDVision®
  • A logical "and" with 0111 will be executed to obtain backward compatibility with 2D systems; when this occurs, an instruction is sent to the DSP stating that the buffer of the stereoscopic pair (left or right) should be equal to the source, so all decoded images are sent to both output buffers to allow image display on any device (see the masking sketch after this list).
  • Frame_rate_code
  • 1001 24,000/101 (23.976) in TDVision® format
  • 1010 24 in TDVision® format.
  • 1011 25 in TDVision® format.
  • 1100 30,000/1001 (29.97) in TDVision® format.
  • 1101 30 in TDVision® format.
  • 1110 50 in TDVision® format.
  • 1111 60,000/1001 (59.94) in TDVision® format.
  • A logical “and” with 0111 will be executed in order to obtain backward compatibility with 2D systems.
  • User_data( )
  • Sequence_scalable_extension
  • Picture_header
  • Extra_bit_picture
  • 0=TDVision®
  • 1=normal
  • Picture_coding_extension
  • Picture-structure
  • 00=image in TDVision® format
  • Picture_temporal_scalable_extension( )
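  • A minimal, self-contained example of the logical "and" masking described above; the sample values are taken from the code tables in this section, and the printed mapping simply demonstrates that masking with 0111 recovers the standard 2D codes.

    #include <stdio.h>

    /* The 4-bit aspect_ratio_information and frame_rate_code values listed above
     * use the high bit to flag TDVision streams; masking with 0111 recovers the
     * standard MPEG2 2D code for backward compatibility. */
    int main(void)
    {
        unsigned tdvision_aspect = 0xA;              /* 1010 = 4:3 in TDVision         */
        unsigned tdvision_rate   = 0xC;              /* 1100 = 29.97 in TDVision       */

        unsigned aspect_2d = tdvision_aspect & 0x7;  /* 1010 & 0111 = 0010 (4:3)       */
        unsigned rate_2d   = tdvision_rate   & 0x7;  /* 1100 & 0111 = 0100 (29.97)     */

        printf("aspect: %u -> %u, frame rate code: %u -> %u\n",
               tdvision_aspect, aspect_2d, tdvision_rate, rate_2d);
        return 0;
    }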
  • At the moment of coding the information, a DSP is used which is in charge of executing the prediction, comparison, and quantization processes, applies the DCT to form the MPEG2 compressed video stream, and discriminates between 2D and 3D images.
  • Two video signals are coded in independent form but with the same time_code, corresponding to the left and right signals coming from a 3DVision® camera, and both programs are sent simultaneously with TDVision® stereoscopic pair identifiers. This type of decoding is known as "by parallel images" and consists in storing both left and right (L and R) video streams simultaneously as two independent but time_code-synchronized video streams. Later, they will be decoded and played back in parallel. Only the decoding software needs to be modified; the coding and the compression algorithm of the transport stream remain identical to the current ones.
  • Software modifications in the decoder.
  • In the decoder, two program streams should be decoded simultaneously, or two interdependent video signals, i.e., signals constructed from the difference between both and stored as a B type frame with an identifier, following the programming API, as in the example case of using the Texas Instruments TMS320C62X family DSP.
  • DSP programming algorithm and method (a sketch follows the steps below):
  • Create two process channels when starting the DSP (primary and secondary buffers or left and right when calling API).
  • Get the RAM memory pointers for each channel (RAM addresses in the memory map)
  • When a TDVision® type video sequence is obtained
  • it is taken as B type
  • the image is decoded in real-time
  • the change or difference is applied to the complementary buffer
  • the results are stored in the secondary buffer.
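  • A minimal C sketch of the last steps of the algorithm above, assuming the difference is stored as a signed per-sample delta; the structure and function names are hypothetical and do not reflect the actual DSP API.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical two-channel state: primary (decoded view) and secondary
     * (complementary view reconstructed from the B-type difference). */
    typedef struct {
        uint8_t *primary;     /* RAM address of the primary channel buffer   */
        uint8_t *secondary;   /* RAM address of the secondary channel buffer */
        size_t   frame_bytes;
    } dsp_channels_t;

    /* The decoded image is already in the primary buffer; the signed difference
     * is applied to obtain the complementary view, and the result is stored in
     * the secondary buffer, as in the steps above. */
    static void apply_difference_to_secondary(dsp_channels_t *ch,
                                              const int16_t *difference)
    {
        for (size_t i = 0; i < ch->frame_bytes; i++) {
            int v = ch->primary[i] + difference[i];   /* complementary = primary + delta */
            if (v < 0)   v = 0;                       /* clamp to the valid sample range */
            if (v > 255) v = 255;
            ch->secondary[i] = (uint8_t)v;
        }
    }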
  • Regarding the software in the video_sequence data stream, two options are implemented:
  • 1. One modifies only the software and uses the user_data( ) section to store the error correction that allows the stereoscopic signal to be regenerated.
  • 2. The other enables, by hardware, the PICTURE_DATA3D( ) function, which is transparent to MPEG2-compatible readers and can be decoded by a TDVision®-compatible DSP.
  • At the moment the MPEG2 decoder detects a user_data( ) code, it will search for the 32-bit 3DVISION_START_IDENTIFIER=0X000ABCD identifier, which is an extremely high code, difficult to reproduce, and which does not represent valid data. Then the 3D block length to be read is taken into account, which is a 32-bit datum "n". When this information is detected within the USER_DATA( ), a call is made to the special decoding function, whose output is compared with the output buffer and applied, from the current read offset of the video_sequence, over the n bytes as a typical correction for B type frames. The output of this correction is sent to another output address, which is directly associated with a video output additional to the one existing in the electronic display device.
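  • A hedged C sketch of the identifier search just described; the big-endian byte order of the identifier and of the 32-bit length "n" inside user_data( ) is an assumption, and the identifier value used is the one given in the text.

    #include <stdint.h>
    #include <stddef.h>

    /* 32-bit identifier searched inside user_data( ), as given in the text. */
    #define TDVISION_START_IDENTIFIER 0x000ABCDu

    /* Scan a user_data( ) payload for the identifier; if found, write the 32-bit
     * block length "n" that follows it and return the offset of the correction
     * data, so the caller can apply the n bytes of B-type correction.          */
    static long find_3d_user_data(const uint8_t *data, size_t len, uint32_t *block_len)
    {
        for (size_t i = 0; i + 8 <= len; i++) {
            uint32_t id = ((uint32_t)data[i] << 24) | ((uint32_t)data[i + 1] << 16) |
                          ((uint32_t)data[i + 2] << 8) | (uint32_t)data[i + 3];
            if (id == TDVISION_START_IDENTIFIER) {
                *block_len = ((uint32_t)data[i + 4] << 24) | ((uint32_t)data[i + 5] << 16) |
                             ((uint32_t)data[i + 6] << 8) | (uint32_t)data[i + 7];
                return (long)(i + 8);   /* correction data starts here */
            }
        }
        return -1;                      /* identifier absent: ordinary 2D stream */
    }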
  • If the PICTURE_DATA3D( ) structure is recognized, the decoder proceeds to read the information directly, but writes it to a second output buffer, which is also connected to a video output additional to the one existing in the electronic display device.
  • In the case of the program stream, two signals (left and right) synchronized by the time_code are decoded in parallel by an MPEG decoder with enough capacity to decode multiple simultaneous video channels; alternatively, two interdependent video signals can be sent within the same video_sequence, e.g., "R-L=delta", where delta is the difference, stored as a "B" type frame with a TDVision® stereoscopic pair identifier, from which the image can be reconstructed at decoding time by differences, i.e., "R−delta=L" or "L+delta=R", as in the case of the aforementioned Texas Instruments DSP, which is considered an illustrative but not limiting example.
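  • A round-trip illustration, under the assumption of per-sample differences, of the "R-L=delta" scheme just described; real B-type difference coding works on motion-compensated macroblocks, so this is only a simplified sketch with hypothetical function names.

    #include <stdint.h>
    #include <stddef.h>

    /* Encoder side: store the signed difference delta = R - L. */
    static void make_delta(const uint8_t *r, const uint8_t *l,
                           int16_t *delta, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            delta[i] = (int16_t)r[i] - (int16_t)l[i];
    }

    /* Decoder side: reconstruct the missing view as L = R - delta
     * (equivalently R = L + delta).                               */
    static void reconstruct_left(const uint8_t *r, const int16_t *delta,
                                 uint8_t *l, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            l[i] = (uint8_t)(r[i] - delta[i]);   /* exact, since delta = R - L */
    }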
  • A video containing a single video sequence is also implemented, but alternating the left and right frames at 60 frames per second (30 frames each); when decoded, each image is placed in the video buffer of the corresponding left or right channel.
  • The decoder will also have the capacity to detect via hardware whether the signal is of TDVision® type; if this is the case, it will identify whether it is a transport stream, a program stream, or left-right multiplexing at 60 frames per second.
  • In the case of the transport stream, the backward compatibility system is available in current decoders, which have the ability to display the same video without 3D characteristics, only in 2D; in this case the DSP is disabled in order to display the image on any TDVision® or prior-art device.
  • In the case of the program stream, unmodified coders are used, such as those currently used in satellite transmission systems; but the receiver and decoder have a TDVision® flag identification system, thus enabling the second video buffer to form a left-right pair.
  • Finally, in the case of multiplexed video, the MPEG decoder with two video buffers (left-right) is enabled, identifying the adequate frame and separating each signal at 30 frames per second, thus providing a flicker-free image, since the video stream is constant and, due to the characteristic persistence of vision of the human eye, the multiplexing effect is not perceived.
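  • A simplified sketch of the multiplexed case above, assuming even-numbered frames carry the left view and odd-numbered frames the right view (the actual assignment is an assumption); the sink callbacks are hypothetical placeholders for the two 30 frames per second channel buffers.

    #include <stddef.h>

    /* A single 60 frames per second sequence alternates left and right frames;
     * the decoder routes them into two 30 frames per second channels.          */
    typedef void (*frame_sink_t)(const unsigned char *frame, size_t bytes);

    static void demux_left_right(const unsigned char *frames, size_t frame_bytes,
                                 size_t frame_count,
                                 frame_sink_t left_out, frame_sink_t right_out)
    {
        for (size_t i = 0; i < frame_count; i++) {
            const unsigned char *f = frames + i * frame_bytes;
            if (i % 2 == 0)
                left_out(f, frame_bytes);    /* even frames -> left channel  (30 fps) */
            else
                right_out(f, frame_bytes);   /* odd frames  -> right channel (30 fps) */
        }
    }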
  • Particular embodiments of the invention have been illustrated and described; it will be obvious to those skilled in the art that several modifications or changes can be made without departing from the scope of the present invention. All such modifications and changes are intended to be covered by the following claims.

Claims (10)

1. A stereoscopic 3D-video image digital decoding method, in which the structures of the video_sequence of the video data stream are modified via software, to include flags at the bit level for the image type, comprising:
modifying the software and by using the user_data( ) section to store the error correction which allows regeneration of the stereoscopic video signal, thereby actually identifying the video format;
applying a logical “and” for MPEG2 backward compatibility in case it is not a TDVision® video;
decoding by scanning the video_sequence; when the image is a TDVision® image:
a) storing the last complete image buffer in the left or right channel buffer.
b) applying the differences or parallel decoding for B type frame information,
c) applying error correction to the last image obtained by applying the motion and color correction vectors,
d) storing the results in their respective channel buffer, and
e) continuing with the video_sequence reading.
2. The stereoscopic 3D-video image digital decoding method and system, in which the video_sequence structures of the video data stream are modified via software to include the necessary flags at the bit level of the image type of claim 1, wherein the decoder compilation format comprises:
a) reading video_sequence,
b) discriminating the sequence_header, if a TDVision® image is identified, then activating the double buffer,
c) reading in the user_data the image as if it was contained in said structure,
d) adding in the sequence_scalable_extension information to the video_sequence MPEG, said information could be contained within said structure,
e) finding in the picture_header the TDVision® image identifier in the extra_bit_picture,
f) reading the “B” type image in the picture_coding_extension, and if it is a TDVision® type image, decoding then the second buffer, and
g) if the image is temporarily scalable, applying “B” to the decoder.
3. The stereoscopic 3D-video images digital decoding method, in which the structures and the video_sequence of the video data stream are modified to include the necessary flags at the bit level of the image type of claim 1, wherein when the decoder detects a user_data( ) code, it searches the 32-bit 3DVision®_start_identifier=0x000ABCD identifier, upon detecting this information a call is made to the special decoding function which compares the output buffer and applies it from the current reading offset of the video_sequence.
4. The stereoscopic 3D-video images digital decoding method, in which the video_sequence structures of the video data stream are modified via software to include the necessary flags at the bit level of the image type of claim 1, wherein the decoder is programmed via software to simultaneously receive and decode two program streams.
5. The stereoscopic 3D-video images digital decoding method, in which the video_sequence structures of the video data stream are modified via software to include the necessary flags at the bit level of the image type of claim 1, wherein two interdependent video signals can be sent within the same video_sequence; said signals depending one from the other, and coming from a 3DVision® camera; in terms of their algebraic addition (R−L=delta), each signal is stored as a B type frame, whose decoding is by differences from one of them.
6. The Stereoscopic 3D-video images digital decoding method, in which the video_sequence structures of the video data stream are modified via software to include the necessary flags at the bit level of the image type of claim 1, wherein two independent video streams L and R are stored in simultaneous form, but being synchronized with the same time_code, and decoded and displayed in parallel.
7. A stereoscopic 3D-video image digital decoding system, in which the video_sequence structures of the video data stream are modified via hardware, wherein the specific use of the structures, substructures and sequences belong to the video_sequence to implement the MPEG2 backward-compatible TDVision® technology via hardware comprise:
discriminating whether it is a 2D or 3D signal;
activating a double output buffer (additional memory);
activating a parallel decoding selector, activating a difference-decoding selector;
executing the image decompression process, displaying the image in the corresponding output buffer; and
enabling the PICTURE_DATA3D( ) function, which is transparent for the compatible MPEG2 readers.
8. The stereoscopic 3D-video image digital decoding system of claim 7, wherein the specific use of the structures, substructures and sequences belonging to the video_sequence in order to implement the MPEG2 backward-compatible TDVision® technology via hardware comprise:
a) sequence_header
aspect_ratio_information
1001 n/a in TDVision®
1010 4:3 in TDVision®
1011 16:9 in TDVision®
1100 2.21:1 in TDVision®
a logical “and” with 0111 is executed to obtain backward compatibility with 2D systems, where an instruction is sent to the Digital Signal Processor (DSP) stating that the stereoscopic pair buffer (left or right) should be equal to the source;
b) frame_rate_code
1001 24,000/1001 (23.976) in TDVision® format
1010 24 in TDVision® format
1011 25 in TDVision® format
1100 30,000/1001 (29.97) in TDVision® format
1101 30 in TDVision® format
1110 50 in TDVision® format
1111 60,000/1001 (59.94) in TDVision® format
a logical “and” with 0111 is executed to obtain backward compatibility with 2D systems, where an instruction is sent to the DSP stating that the stereoscopic pair buffer (left or right) should be equal to the source;
c) user_data( )
sequence_scalable_extension
d) picture_header
extra_bit_picture
0=TDVision®
1=normal
e) picture_coding_extension
picture_structure
00=image in TDVision® format
f) picture_temporal_scalable_extension( ).
9. The stereoscopic 3D-video images digital decoding system of claim 7, wherein when the PICTURE_DATA3D( ) structure is recognized, it proceeds to read the information directly by the decoder, but it writes the information in a second output buffer also connected to a video output additional to that existing in the electronic display device.
10. The stereoscopic 3D-video images digital decoding system of claim 7, wherein, if the signal is of TDVision® type, it is identified if it is a transport stream, program stream or left or right multiplexion at 60 frames per second; when it is a transport stream it has backward compatibility in the current 2D coders; where an instruction is sent to the DSP stating that the stereoscopic pair buffer (left or right) should be equal to the source, having the ability to display the video without 3D characteristics of TDVision®.
US11/510,262 2004-02-27 2006-08-25 Stereoscopic 3D-video image digital decoding system and method Abandoned US20070041444A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/837,421 US9503742B2 (en) 2004-02-27 2010-07-15 System and method for decoding 3D stereoscopic digital video
US15/094,808 US20170070742A1 (en) 2004-02-27 2016-04-08 System and method for decoding 3d stereoscopic digital video
US15/644,307 US20190058894A1 (en) 2004-02-27 2017-07-07 System and method for decoding 3d stereoscopic digital video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/MX2004/000012 WO2005083637A1 (en) 2004-02-27 2004-02-27 Method and system for digital decoding 3d stereoscopic video images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/MX2004/000012 Continuation WO2005083637A1 (en) 2004-02-27 2004-02-27 Method and system for digital decoding 3d stereoscopic video images

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US12/837,421 Continuation US9503742B2 (en) 2004-02-27 2010-07-15 System and method for decoding 3D stereoscopic digital video
US15/094,808 Continuation US20170070742A1 (en) 2004-02-27 2016-04-08 System and method for decoding 3d stereoscopic digital video

Publications (1)

Publication Number Publication Date
US20070041444A1 true US20070041444A1 (en) 2007-02-22

Family

ID=34910116

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/510,262 Abandoned US20070041444A1 (en) 2004-02-27 2006-08-25 Stereoscopic 3D-video image digital decoding system and method
US12/837,421 Expired - Fee Related US9503742B2 (en) 2004-02-27 2010-07-15 System and method for decoding 3D stereoscopic digital video
US15/094,808 Abandoned US20170070742A1 (en) 2004-02-27 2016-04-08 System and method for decoding 3d stereoscopic digital video
US15/644,307 Abandoned US20190058894A1 (en) 2004-02-27 2017-07-07 System and method for decoding 3d stereoscopic digital video

Family Applications After (3)

Application Number Title Priority Date Filing Date
US12/837,421 Expired - Fee Related US9503742B2 (en) 2004-02-27 2010-07-15 System and method for decoding 3D stereoscopic digital video
US15/094,808 Abandoned US20170070742A1 (en) 2004-02-27 2016-04-08 System and method for decoding 3d stereoscopic digital video
US15/644,307 Abandoned US20190058894A1 (en) 2004-02-27 2017-07-07 System and method for decoding 3d stereoscopic digital video

Country Status (7)

Country Link
US (4) US20070041444A1 (en)
EP (2) EP1727090A1 (en)
JP (1) JP2007525907A (en)
KR (1) KR101177663B1 (en)
CN (1) CN1938727A (en)
CA (1) CA2557534A1 (en)
WO (1) WO2005083637A1 (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060114334A1 (en) * 2004-09-21 2006-06-01 Yoshinori Watanabe Image pickup apparatus with function of rate conversion processing and control method therefor
US20060132646A1 (en) * 2004-12-21 2006-06-22 Nec Electronics Corporation Video signal processing apparatus and video signal processing method
US20070041442A1 (en) * 2004-02-27 2007-02-22 Novelo Manuel R G Stereoscopic three dimensional video image digital coding system and method
US20070280543A1 (en) * 2006-04-25 2007-12-06 Seiko Epson Corporation Image processing apparatus and image processing method
WO2009076595A2 (en) * 2007-12-12 2009-06-18 Cisco Technology, Inc. Video processing with tiered interdependencies of pictures
US20090219985A1 (en) * 2008-02-28 2009-09-03 Vasanth Swaminathan Systems and Methods for Processing Multiple Projections of Video Data in a Single Video File
US20100175741A1 (en) * 2009-01-13 2010-07-15 John Danhakl Dual Axis Sun-Tracking Solar Panel Array
JP2010530160A (en) * 2007-06-07 2010-09-02 エンハンスト チップ テクノロジー インコーポレイテッド Encoded stereoscopic video data file format
US20100277568A1 (en) * 2007-12-12 2010-11-04 Electronics And Telecommunications Research Institute Method and apparatus for stereoscopic data processing based on digital multimedia broadcasting
WO2011008917A1 (en) * 2009-07-15 2011-01-20 General Instrument Corporation Simulcast of stereoviews for 3d tv
US20110128353A1 (en) * 2009-11-30 2011-06-02 Canon Kabushiki Kaisha Robust image alignment for distributed multi-view imaging systems
WO2011072016A1 (en) * 2009-12-08 2011-06-16 Broadcom Corporation Method and system for handling multiple 3-d video formats
US20110216163A1 (en) * 2010-03-08 2011-09-08 Dolby Laboratories Licensing Corporation Methods For Carrying And Transmitting 3D Z-Norm Attributes In Digital TV Closed Captioning
US20110241976A1 (en) * 2006-11-02 2011-10-06 Sensics Inc. Systems and methods for personal viewing devices
US20120331106A1 (en) * 2011-06-24 2012-12-27 General Instrument Corporation Intelligent buffering of media streams delivered over internet
US20130141533A1 (en) * 2008-12-18 2013-06-06 Jongyeul Suh Digital broadcasting reception method capable of displaying stereoscopic image, and digital broadcasting reception apparatus using same
US20140146895A1 (en) * 2012-11-28 2014-05-29 Cisco Technology, Inc. Fast Switching Hybrid Video Decoder
US20140341300A1 (en) * 2005-02-16 2014-11-20 Gvbb Holdings S.A.R.L. Agile decoder
US9215436B2 (en) 2009-06-24 2015-12-15 Dolby Laboratories Licensing Corporation Insertion of 3D objects in a stereoscopic image at relative depth
US9215435B2 (en) 2009-06-24 2015-12-15 Dolby Laboratories Licensing Corp. Method for embedding subtitles and/or graphic overlays in a 3D or multi-view video data
US9307002B2 (en) 2011-06-24 2016-04-05 Thomson Licensing Method and device for delivering 3D content
US9519994B2 (en) 2011-04-15 2016-12-13 Dolby Laboratories Licensing Corporation Systems and methods for rendering 3D image independent of display size and viewing distance
US20170164041A1 (en) * 2015-12-07 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for playing videos
US9984314B2 (en) * 2016-05-06 2018-05-29 Microsoft Technology Licensing, Llc Dynamic classifier selection based on class skew
CN110324628A (en) * 2014-03-07 2019-10-11 索尼公司 Sending device, sending method, reception device and method of reseptance
CN112684483A (en) * 2021-01-22 2021-04-20 浙江理工大学 Navigation deviation perception based on satellite and vision fusion and information acquisition method thereof
US11516451B2 (en) * 2012-04-25 2022-11-29 Sony Group Corporation Imaging apparatus, imaging processing method, image processing device and imaging processing system

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100842568B1 (en) * 2007-02-08 2008-07-01 삼성전자주식회사 Apparatus and method for making compressed image data and apparatus and method for output compressed image data
US20080252719A1 (en) * 2007-04-13 2008-10-16 Samsung Electronics Co., Ltd. Apparatus, method, and system for generating stereo-scopic image file based on media standards
JP2009044537A (en) * 2007-08-09 2009-02-26 Osaka Univ Video stream processing device, its control method, program, and recording medium
KR101194480B1 (en) * 2008-06-18 2012-10-24 미쓰비시덴키 가부시키가이샤 Three-dimensional video conversion recording device, three-dimensional video conversion recording method, recording medium, three-dimensional video conversion device, and three-dimensional video transmission device
CN102484730A (en) * 2009-06-04 2012-05-30 寇平公司 3d video processor integrated with head mounted display
CN102197655B (en) * 2009-06-10 2014-03-12 Lg电子株式会社 Stereoscopic image reproduction method in case of pause mode and stereoscopic image reproduction apparatus using same
CN102656620B (en) * 2009-11-13 2017-06-09 寇平公司 Method for driving 3D binocular ophthalmoscopes from standard video stream
US8922625B2 (en) * 2009-11-19 2014-12-30 Lg Electronics Inc. Mobile terminal and controlling method thereof
WO2011084895A1 (en) 2010-01-08 2011-07-14 Kopin Corporation Video eyewear for smart phone games
US8570361B2 (en) 2010-01-11 2013-10-29 Mediatek Inc. Decoding method and decoding apparatus for using parallel processing scheme to decode pictures in different bitstreams after required decoded data derived from decoding preceding picture(s) is ready
CN102123280B (en) * 2010-01-11 2016-03-02 联发科技股份有限公司 Coding/decoding method and decoding device
US20120281075A1 (en) * 2010-01-18 2012-11-08 Lg Electronics Inc. Broadcast signal receiver and method for processing video data
CA2797619C (en) * 2010-04-30 2015-11-24 Lg Electronics Inc. An apparatus of processing an image and a method of processing thereof
CN101888567B (en) * 2010-07-07 2011-10-05 深圳超多维光电子有限公司 Stereoscopic image processing method and stereoscopic display device
TWI406559B (en) * 2010-09-28 2013-08-21 Innolux Corp Display method and computer readable medium performing thereof
GB2487200A (en) 2011-01-12 2012-07-18 Canon Kk Video encoding and decoding with improved error resilience
US9681111B1 (en) * 2015-10-22 2017-06-13 Gopro, Inc. Apparatus and methods for embedding metadata into video stream
US10735826B2 (en) * 2017-12-20 2020-08-04 Intel Corporation Free dimension format and codec

Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5612735A (en) * 1995-05-26 1997-03-18 Luncent Technologies Inc. Digital 3D/stereoscopic video compression technique utilizing two disparity estimates
US5619256A (en) * 1995-05-26 1997-04-08 Lucent Technologies Inc. Digital 3D/stereoscopic video compression technique utilizing disparity and motion compensated predictions
US5652616A (en) * 1996-08-06 1997-07-29 General Instrument Corporation Of Delaware Optimal disparity estimation for stereoscopic video coding
US5886736A (en) * 1996-10-24 1999-03-23 General Instrument Corporation Synchronization of a stereoscopic video sequence
US5963257A (en) * 1995-07-14 1999-10-05 Sharp Kabushiki Kaisha Video coding device and video decoding device
US6043838A (en) * 1997-11-07 2000-03-28 General Instrument Corporation View offset estimation for stereoscopic video coding
US6055012A (en) * 1995-12-29 2000-04-25 Lucent Technologies Inc. Digital multi-view video compression with complexity and compatibility constraints
US6072831A (en) * 1996-07-03 2000-06-06 General Instrument Corporation Rate control for stereoscopic digital video encoding
US6144701A (en) * 1996-10-11 2000-11-07 Sarnoff Corporation Stereoscopic video coding and decoding apparatus and method
US6151362A (en) * 1998-10-30 2000-11-21 Motorola, Inc. Joint rate control for stereoscopic video coding
US6292588B1 (en) * 1996-05-28 2001-09-18 Matsushita Electric Industrial Company, Limited Image predictive decoding apparatus
US20020009137A1 (en) * 2000-02-01 2002-01-24 Nelson John E. Three-dimensional video broadcasting system
US6370276B2 (en) * 1997-04-09 2002-04-09 Matsushita Electric Industrial Co., Ltd. Image predictive decoding method, image predictive decoding apparatus, image predictive coding method, image predictive coding apparatus, and data storage media
US6370193B1 (en) * 1997-02-26 2002-04-09 Samsung Electronics Co., Ltd. MPEG data compression and decompression using adjacent data value differencing
US6377625B1 (en) * 1999-06-05 2002-04-23 Soft4D Co., Ltd. Method and apparatus for generating steroscopic image using MPEG data
US6456432B1 (en) * 1990-06-11 2002-09-24 Reveo, Inc. Stereoscopic 3-d viewing system with portable electro-optical viewing glasses and shutter-state control signal transmitter having multiple modes of operation for stereoscopic viewing of 3-d images displayed in different stereoscopic image formats
US20030048354A1 (en) * 2001-08-29 2003-03-13 Sanyo Electric Co., Ltd. Stereoscopic image processing and display system
US20030095177A1 (en) * 2001-11-21 2003-05-22 Kug-Jin Yun 3D stereoscopic/multiview video processing system and its method
US20030190079A1 (en) * 2000-03-31 2003-10-09 Stephane Penain Encoding of two correlated sequences of data
US20030202592A1 (en) * 2002-04-20 2003-10-30 Sohn Kwang Hoon Apparatus for encoding a multi-view moving picture
US6658056B1 (en) * 1999-03-30 2003-12-02 Sony Corporation Digital video decoding, buffering and frame-rate converting method and apparatus
US6665445B1 (en) * 1997-07-10 2003-12-16 Matsushita Electric Industrial Co., Ltd. Data structure for image transmission, image coding method, and image decoding method
US6678331B1 (en) * 1999-11-03 2004-01-13 Stmicrolectronics S.A. MPEG decoder using a shared memory
US6678424B1 (en) * 1999-11-11 2004-01-13 Tektronix, Inc. Real time human vision system behavioral modeling
US20040008893A1 (en) * 2002-07-10 2004-01-15 Nec Corporation Stereoscopic image encoding and decoding device
US20040027452A1 (en) * 2002-08-07 2004-02-12 Yun Kug Jin Method and apparatus for multiplexing multi-view three-dimensional moving picture
US20040101043A1 (en) * 2002-11-25 2004-05-27 Dynamic Digital Depth Research Pty Ltd Image encoding system
US20040252186A1 (en) * 2003-03-20 2004-12-16 Ken Mashitani Method, program, storage medium, server and image filter for displaying a three-dimensional image
US20060133493A1 (en) * 2002-12-27 2006-06-22 Suk-Hee Cho Method and apparatus for encoding and decoding stereoscopic video
US20070041422A1 (en) * 2005-08-01 2007-02-22 Thermal Wave Imaging, Inc. Automated binary processing of thermographic sequence data
US7336088B2 (en) * 2002-09-20 2008-02-26 Josep Rius Vazquez Method and apparatus for determining IDDQ
US20100039499A1 (en) * 2003-04-17 2010-02-18 Toshio Nomura 3-dimensional image creating apparatus, 3-dimensional image reproducing apparatus, 3-dimensional image processing apparatus, 3-dimensional image processing program and recording medium recorded with the program

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6496183B1 (en) 1998-06-30 2002-12-17 Koninklijke Philips Electronics N.V. Filter for transforming 3D data in a hardware accelerated rendering architecture
JP2586260B2 (en) * 1991-10-22 1997-02-26 三菱電機株式会社 Adaptive blocking image coding device
NO175080B (en) * 1992-03-11 1994-05-16 Teledirektoratets Forskningsav Procedure for encoding image data
EP0639031A3 (en) 1993-07-09 1995-04-05 Rca Thomson Licensing Corp Method and apparatus for encoding stereo video signals.
JPH07240943A (en) * 1994-02-25 1995-09-12 Sanyo Electric Co Ltd Stereoscopic image encoding method
JP3524147B2 (en) 1994-04-28 2004-05-10 キヤノン株式会社 3D image display device
JP3086396B2 (en) * 1995-03-10 2000-09-11 シャープ株式会社 Image encoding device and image decoding device
NZ309818A (en) 1995-06-02 1999-04-29 Philippe Schoulz Process for transforming images into stereoscopic images, images and image series obtained by this process
JPH09139957A (en) 1995-11-14 1997-05-27 Mitsubishi Electric Corp Graphic display device
JP3952319B2 (en) 1995-12-29 2007-08-01 株式会社セガ Stereoscopic image system, method thereof, game device, and recording medium
KR970060973A (en) 1996-01-31 1997-08-12 김광호 Digital stereoscopic image coding / decoding device
CN1136733C (en) * 1996-11-06 2004-01-28 松下电器产业株式会社 Image encoding/decoding method, image encoder/decoder and image encoding/decoding program recording medium
KR20000068660A (en) * 1997-07-29 2000-11-25 요트.게.아. 롤페즈 Method of reconstruction of tridimensional scenes and corresponding reconstruction device and decoding system
JPH1169346A (en) 1997-08-18 1999-03-09 Sony Corp Sender, receiver, transmitter, sending method, reception method and transmission method
JPH11113026A (en) * 1997-09-29 1999-04-23 Victor Co Of Japan Ltd Device and method for encoding and decoding stereoscopic moving image high efficiency
JP3420504B2 (en) 1998-06-30 2003-06-23 キヤノン株式会社 Information processing method
EP1006482A3 (en) * 1998-12-01 2005-08-10 Canon Kabushiki Kaisha Encoding separately image object and its boundary
DE60042475D1 (en) * 1999-05-27 2009-08-13 Ipg Electronics 503 Ltd CODING OF A VIDEO SIGNAL WITH HIGH RESOLUTION CODING FOR INTERESTING REGIONS
JP2001054140A (en) 1999-08-11 2001-02-23 Sukurudo Enterprise Kk Stereo video band compression coding method, decoding method and recording medium
AU2003231510A1 (en) * 2002-04-25 2003-11-10 Sharp Kabushiki Kaisha Image data creation device, image data reproduction device, and image data recording medium
JP3992533B2 (en) * 2002-04-25 2007-10-17 シャープ株式会社 Data decoding apparatus for stereoscopic moving images enabling stereoscopic viewing
CN1204757C (en) 2003-04-22 2005-06-01 上海大学 Stereo video stream coder/decoder and stereo video coding/decoding system
JP2007525906A (en) 2004-02-27 2007-09-06 ティディヴィジョン コーポレイション エス.エー. デ シー.ヴィ. Stereo 3D video image digital coding system and method
US20080017360A1 (en) * 2006-07-20 2008-01-24 International Business Machines Corporation Heat exchanger with angled secondary fins extending from primary fins

Patent Citations (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6456432B1 (en) * 1990-06-11 2002-09-24 Reveo, Inc. Stereoscopic 3-d viewing system with portable electro-optical viewing glasses and shutter-state control signal transmitter having multiple modes of operation for stereoscopic viewing of 3-d images displayed in different stereoscopic image formats
US5619256A (en) * 1995-05-26 1997-04-08 Lucent Technologies Inc. Digital 3D/stereoscopic video compression technique utilizing disparity and motion compensated predictions
US5612735A (en) * 1995-05-26 1997-03-18 Luncent Technologies Inc. Digital 3D/stereoscopic video compression technique utilizing two disparity estimates
US5963257A (en) * 1995-07-14 1999-10-05 Sharp Kabushiki Kaisha Video coding device and video decoding device
US6055012A (en) * 1995-12-29 2000-04-25 Lucent Technologies Inc. Digital multi-view video compression with complexity and compatibility constraints
US6292588B1 (en) * 1996-05-28 2001-09-18 Matsushita Electric Industrial Company, Limited Image predictive decoding apparatus
US6072831A (en) * 1996-07-03 2000-06-06 General Instrument Corporation Rate control for stereoscopic digital video encoding
US5652616A (en) * 1996-08-06 1997-07-29 General Instrument Corporation Of Delaware Optimal disparity estimation for stereoscopic video coding
US6144701A (en) * 1996-10-11 2000-11-07 Sarnoff Corporation Stereoscopic video coding and decoding apparatus and method
US5886736A (en) * 1996-10-24 1999-03-23 General Instrument Corporation Synchronization of a stereoscopic video sequence
US6370193B1 (en) * 1997-02-26 2002-04-09 Samsung Electronics Co., Ltd. MPEG data compression and decompression using adjacent data value differencing
US6370276B2 (en) * 1997-04-09 2002-04-09 Matsushita Electric Industrial Co., Ltd. Image predictive decoding method, image predictive decoding apparatus, image predictive coding method, image predictive coding apparatus, and data storage media
US6665445B1 (en) * 1997-07-10 2003-12-16 Matsushita Electric Industrial Co., Ltd. Data structure for image transmission, image coding method, and image decoding method
US6043838A (en) * 1997-11-07 2000-03-28 General Instrument Corporation View offset estimation for stereoscopic video coding
US6151362A (en) * 1998-10-30 2000-11-21 Motorola, Inc. Joint rate control for stereoscopic video coding
US6658056B1 (en) * 1999-03-30 2003-12-02 Sony Corporation Digital video decoding, buffering and frame-rate converting method and apparatus
US6377625B1 (en) * 1999-06-05 2002-04-23 Soft4D Co., Ltd. Method and apparatus for generating steroscopic image using MPEG data
US6678331B1 (en) * 1999-11-03 2004-01-13 Stmicrolectronics S.A. MPEG decoder using a shared memory
US6678424B1 (en) * 1999-11-11 2004-01-13 Tektronix, Inc. Real time human vision system behavioral modeling
US20020009137A1 (en) * 2000-02-01 2002-01-24 Nelson John E. Three-dimensional video broadcasting system
US20030190079A1 (en) * 2000-03-31 2003-10-09 Stephane Penain Encoding of two correlated sequences of data
US20030048354A1 (en) * 2001-08-29 2003-03-13 Sanyo Electric Co., Ltd. Stereoscopic image processing and display system
US20030095177A1 (en) * 2001-11-21 2003-05-22 Kug-Jin Yun 3D stereoscopic/multiview video processing system and its method
US20040120396A1 (en) * 2001-11-21 2004-06-24 Kug-Jin Yun 3D stereoscopic/multiview video processing system and its method
US20030202592A1 (en) * 2002-04-20 2003-10-30 Sohn Kwang Hoon Apparatus for encoding a multi-view moving picture
US20040008893A1 (en) * 2002-07-10 2004-01-15 Nec Corporation Stereoscopic image encoding and decoding device
US20040027452A1 (en) * 2002-08-07 2004-02-12 Yun Kug Jin Method and apparatus for multiplexing multi-view three-dimensional moving picture
US7136415B2 (en) * 2002-08-07 2006-11-14 Electronics And Telecommunications Research Institute Method and apparatus for multiplexing multi-view three-dimensional moving picture
US7336088B2 (en) * 2002-09-20 2008-02-26 Josep Rius Vazquez Method and apparatus for determining IDDQ
US20040101043A1 (en) * 2002-11-25 2004-05-27 Dynamic Digital Depth Research Pty Ltd Image encoding system
US20060133493A1 (en) * 2002-12-27 2006-06-22 Suk-Hee Cho Method and apparatus for encoding and decoding stereoscopic video
US20040252186A1 (en) * 2003-03-20 2004-12-16 Ken Mashitani Method, program, storage medium, server and image filter for displaying a three-dimensional image
US20100039499A1 (en) * 2003-04-17 2010-02-18 Toshio Nomura 3-dimensional image creating apparatus, 3-dimensional image reproducing apparatus, 3-dimensional image processing apparatus, 3-dimensional image processing program and recording medium recorded with the program
US20070041422A1 (en) * 2005-08-01 2007-02-22 Thermal Wave Imaging, Inc. Automated binary processing of thermographic sequence data

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070041442A1 (en) * 2004-02-27 2007-02-22 Novelo Manuel R G Stereoscopic three dimensional video image digital coding system and method
US20100271463A1 (en) * 2004-02-27 2010-10-28 Td Vision Corporation S.A. De C.V. System and method for encoding 3d stereoscopic digital video
US7860321B2 (en) * 2004-09-21 2010-12-28 Canon Kabushiki Kaisha Image pickup apparatus with function of rate conversion processing and control method therefor
US20060114334A1 (en) * 2004-09-21 2006-06-01 Yoshinori Watanabe Image pickup apparatus with function of rate conversion processing and control method therefor
US20060132646A1 (en) * 2004-12-21 2006-06-22 Nec Electronics Corporation Video signal processing apparatus and video signal processing method
US7697064B2 (en) * 2004-12-21 2010-04-13 Nec Electronics Corporation Video signal processing apparatus and video signal processing method
US20140341300A1 (en) * 2005-02-16 2014-11-20 Gvbb Holdings S.A.R.L. Agile decoder
US20070280543A1 (en) * 2006-04-25 2007-12-06 Seiko Epson Corporation Image processing apparatus and image processing method
US7860325B2 (en) * 2006-04-25 2010-12-28 Seiko Epson Corporation Image processing apparatus and image processing method for parallel decompression of image files
US10908421B2 (en) * 2006-11-02 2021-02-02 Razer (Asia-Pacific) Pte. Ltd. Systems and methods for personal viewing devices
US20110241976A1 (en) * 2006-11-02 2011-10-06 Sensics Inc. Systems and methods for personal viewing devices
JP2010530160A (en) * 2007-06-07 2010-09-02 エンハンスト チップ テクノロジー インコーポレイテッド Encoded stereoscopic video data file format
WO2009076595A3 (en) * 2007-12-12 2013-06-27 Cisco Technology, Inc. Video processing with tiered interdependencies of pictures
US20100277568A1 (en) * 2007-12-12 2010-11-04 Electronics And Telecommunications Research Institute Method and apparatus for stereoscopic data processing based on digital multimedia broadcasting
WO2009076595A2 (en) * 2007-12-12 2009-06-18 Cisco Technology, Inc. Video processing with tiered interdependencies of pictures
US20090219985A1 (en) * 2008-02-28 2009-09-03 Vasanth Swaminathan Systems and Methods for Processing Multiple Projections of Video Data in a Single Video File
US20130141533A1 (en) * 2008-12-18 2013-06-06 Jongyeul Suh Digital broadcasting reception method capable of displaying stereoscopic image, and digital broadcasting reception apparatus using same
US9516294B2 (en) * 2008-12-18 2016-12-06 Lg Electronics Inc. Digital broadcasting reception method capable of displaying stereoscopic image, and digital broadcasting reception apparatus using same
US20100175741A1 (en) * 2009-01-13 2010-07-15 John Danhakl Dual Axis Sun-Tracking Solar Panel Array
US9215435B2 (en) 2009-06-24 2015-12-15 Dolby Laboratories Licensing Corp. Method for embedding subtitles and/or graphic overlays in a 3D or multi-view video data
US9215436B2 (en) 2009-06-24 2015-12-15 Dolby Laboratories Licensing Corporation Insertion of 3D objects in a stereoscopic image at relative depth
US9036700B2 (en) 2009-07-15 2015-05-19 Google Technology Holdings LLC Simulcast of stereoviews for 3D TV
WO2011008917A1 (en) * 2009-07-15 2011-01-20 General Instrument Corporation Simulcast of stereoviews for 3d tv
US20110012992A1 (en) * 2009-07-15 2011-01-20 General Instrument Corporation Simulcast of stereoviews for 3d tv
US8810633B2 (en) * 2009-11-30 2014-08-19 Canon Kabushiki Kaisha Robust image alignment for distributed multi-view imaging systems
US20110128353A1 (en) * 2009-11-30 2011-06-02 Canon Kabushiki Kaisha Robust image alignment for distributed multi-view imaging systems
CN102474632A (en) * 2009-12-08 2012-05-23 美国博通公司 Method and system for handling multiple 3-d video formats
WO2011072016A1 (en) * 2009-12-08 2011-06-16 Broadcom Corporation Method and system for handling multiple 3-d video formats
US9426441B2 (en) 2010-03-08 2016-08-23 Dolby Laboratories Licensing Corporation Methods for carrying and transmitting 3D z-norm attributes in digital TV closed captioning
US20110216163A1 (en) * 2010-03-08 2011-09-08 Dolby Laboratories Licensing Corporation Methods For Carrying And Transmitting 3D Z-Norm Attributes In Digital TV Closed Captioning
US9519994B2 (en) 2011-04-15 2016-12-13 Dolby Laboratories Licensing Corporation Systems and methods for rendering 3D image independent of display size and viewing distance
US9615126B2 (en) * 2011-06-24 2017-04-04 Google Technology Holdings LLC Intelligent buffering of media streams delivered over internet
US9307002B2 (en) 2011-06-24 2016-04-05 Thomson Licensing Method and device for delivering 3D content
US9942585B2 (en) 2011-06-24 2018-04-10 Google Technology Holdings LLC Intelligent buffering of media streams delivered over internet
US20120331106A1 (en) * 2011-06-24 2012-12-27 General Instrument Corporation Intelligent buffering of media streams delivered over internet
US11516451B2 (en) * 2012-04-25 2022-11-29 Sony Group Corporation Imaging apparatus, imaging processing method, image processing device and imaging processing system
US9179144B2 (en) * 2012-11-28 2015-11-03 Cisco Technology, Inc. Fast switching hybrid video decoder
US20140146895A1 (en) * 2012-11-28 2014-05-29 Cisco Technology, Inc. Fast Switching Hybrid Video Decoder
US9967579B2 (en) 2012-11-28 2018-05-08 Cisco Technology, Inc. Fast switching hybrid video decoder
CN110324628A (en) * 2014-03-07 2019-10-11 索尼公司 Sending device, sending method, reception device and method of reseptance
US11758160B2 (en) 2014-03-07 2023-09-12 Sony Group Corporation Transmission device, transmission method, reception device, and reception method
US20170164041A1 (en) * 2015-12-07 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for playing videos
US9984314B2 (en) * 2016-05-06 2018-05-29 Microsoft Technology Licensing, Llc Dynamic classifier selection based on class skew
CN112684483A (en) * 2021-01-22 2021-04-20 浙江理工大学 Navigation deviation perception based on satellite and vision fusion and information acquisition method thereof

Also Published As

Publication number Publication date
EP2544451A2 (en) 2013-01-09
CN1938727A (en) 2007-03-28
EP1727090A1 (en) 2006-11-29
US20190058894A1 (en) 2019-02-21
CA2557534A1 (en) 2005-09-09
US9503742B2 (en) 2016-11-22
US20170070742A1 (en) 2017-03-09
EP2544451A3 (en) 2014-01-08
US20100271462A1 (en) 2010-10-28
KR101177663B1 (en) 2012-09-07
KR20110111545A (en) 2011-10-11
WO2005083637A1 (en) 2005-09-09
JP2007525907A (en) 2007-09-06

Similar Documents

Publication Publication Date Title
US20190058894A1 (en) System and method for decoding 3d stereoscopic digital video
US20190058865A1 (en) System and method for encoding 3d stereoscopic digital video
US5633682A (en) Stereoscopic coding system
RU2573778C2 (en) Image signal decoding device, image signal decoding method, image signal encoding device, image signal encoding method and programme
KR100246168B1 (en) Hierachical motion estimation for interlaced video
JP2001169292A (en) Device and method for processing information, and storage medium
KR100260475B1 (en) Methods and devices for encoding and decoding frame signals and recording medium therefor
US20100014585A1 (en) Method and system for encoding a video signal, encoded video signal, method and system for decoding a video signal
US5999657A (en) Recording and reproducing apparatus for digital image information
JPH09200695A (en) Method and device for decoding video data for high-speed reproduction
US7970056B2 (en) Method and/or apparatus for decoding an intra-only MPEG-2 stream composed of two separate fields encoded as a special frame picture
JP2001169278A (en) Device and method for generating stream, device and method for transmitting stream, device and method for coding and recording medium
JP5228077B2 (en) System and method for stereoscopic 3D video image digital decoding
CN101917616A (en) The method and system that is used for digital coding three-dimensional video image
KR20070011340A (en) Method and system for digital coding 3d stereoscopic video images
JP5227439B2 (en) Stereo 3D video image digital coding system and method
KR100210124B1 (en) Data deformatting circuit of picture encoder
KR20070011341A (en) Method and system for digital decoding 3d stereoscopic video images
MXPA06009734A (en) Method and system for digital decoding 3d stereoscopic video images.
JPH06276482A (en) Picture signal coding method, coder, decoding method and decoder
MXPA06009733A (en) Method and system for digital coding 3d stereoscopic video images.
JPH06197326A (en) Picture signal coding method, decoding, method, picture signal coder and decoder

Legal Events

Date Code Title Description
AS Assignment

Owner name: TD VISION CORPORATION S.A. DE C.V., MEXICO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELO, MANUEL RAFAEL GUTIERREZ;REEL/FRAME:018482/0089

Effective date: 20061017

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION