US20110134217A1 - Method and system for scaling 3d video - Google Patents
- Publication number
- US20110134217A1 (application US12/963,014)
- Authority
- US
- United States
- Prior art keywords
- video data
- format
- module
- input
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/007—Aspects relating to detection of stereoscopic image format, e.g. for adaptation to the display format
Definitions
- Certain embodiments of the invention relate to processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for scaling 3D video.
- FIG. 1 is a block diagram that illustrates a system-on-chip that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention.
- FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention.
- FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention.
- FIGS. 4A and 4B illustrate format-related variables for left-and-right (L/R) format and over-and-under (O/U) format, respectively, in accordance with embodiments of the invention.
- FIGS. 5A and 5B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention.
- FIGS. 6A and 6B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention.
- FIGS. 7A and 7B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention.
- FIGS. 8A and 8B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention.
- FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080p O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention.
- FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in a processing network configured for scaling 3D video data, in accordance with embodiments of the invention.
- FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in a configured processing network, in accordance with an embodiment of the invention.
- FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in a configured processing network, in accordance with an embodiment of the invention.
- Certain embodiments of the invention may be found in a method and system for scaling 3D video.
- Various embodiments of the invention relate to an integrated circuit (IC) comprising multiple devices that may be selectively interconnected to route and process 3D video data.
- the IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory, and selectively interconnect one or more of the devices based on the determination.
- the selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor.
- the input format may be a left-and-right (L/R) format or an over-and-under (O/U) format.
- the output format may be a L/R format or an O/U format.
- the selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.
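As a rough sketch of this determination, the following Python fragment chooses a scaling order from the input format, output format, and scaling factor; the function and return values are hypothetical names, and the downscale-early heuristic is an illustrative assumption rather than the patent's exact rule. Because the determination may be made on a picture-by-picture basis, such a function could be invoked once per picture in the sequence.

```python
def pick_scaling_order(in_fmt, out_fmt, sx, sy):
    """Decide whether to scale 3D video before it is captured to
    memory or after it is retrieved from memory.

    in_fmt/out_fmt are "L/R" or "O/U"; sx and sy are the horizontal
    and vertical scaling factors.  Assumed heuristic: overall
    downscaling favors scaling before capture (less data written to
    DRAM), while upscaling favors scaling after retrieval.
    """
    assert in_fmt in ("L/R", "O/U") and out_fmt in ("L/R", "O/U")
    if sx * sy < 1.0:
        return "scale-before-capture"
    return "scale-after-retrieval"
```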
- FIG. 1 is a block diagram that illustrates a system-on-chip (SoC) that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention.
- the SoC 100 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive and/or process one or more signals that comprise video content, including 3D video content.
- signals comprising video content that may be received and processed by the SoC 100 include, but need not be limited to, composite video, blanking, and sync (CVBS) signals, separate video (S-video) signals, high-definition multimedia interface (HDMI) signals, component signals, personal computer (PC) signals, source input format (SIF) signals, YCrCb signals, and red, green, blue (RGB) signals.
- the SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage.
- output signals from the SoC 100 may be provided to display devices such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), thin film transistor LCDs (TFT-LCDs), light emitting diode (LED) displays, organic LED (OLED) displays, or other flat-screen display technology.
- the characteristics of the output signals, such as pixel rate and/or resolution, for example, may be based on the type of output device to which those signals are to be provided.
- the host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100 .
- parameters and/or other information including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100 .
- the memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100 .
- the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with 3D video data processing.
- the SoC 100 may comprise an interface module 102 , a video processor module 104 , and a core processor module 106 .
- the SoC 100 may be implemented as a single integrated circuit comprising the components listed above.
- the interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100 .
- the video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data associated with one or more signals received by the SoC 100 .
- the video processor module 104 may be operable to support multiple video data formats, including multiple input formats and multiple output formats for 3D video data.
- the video processor module 104 may be operable to perform various types of operations on 3D video data, including but not limited to format conversion and/or scaling.
- when the video content comprises audio data, the video processor module 104 and/or another module in the SoC 100 may be operable to handle the audio data.
- the core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100 .
- the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content, including but not limited to the processing of 3D video data.
- the core processor 106 may be operable to determine and/or calculate parameters associated with the processing of 3D video data that may be utilized to configure and/or operate the video processor module 104 .
- the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100 .
- the core processor module 106 may comprise memory that may be utilized during 3D video data processing by the video processor module 104 .
- the SoC 100 may receive one or more signals comprising 3D video data through the interface module 102 .
- the video processor module 104 and/or the core processor module 106 may be utilized to determine whether to scale 3D video data in the video processor module 104 before the 3D video data is captured to memory through the video processor module 104 or after the captured 3D video data is retrieved from the memory through the video processor module 104 .
- the memory into which the 3D video data is to be stored and from which it is to be subsequently retrieved may be a dynamic random access memory (DRAM) that may be part of the memory module 130 and/or of the core processor module 106 , for example.
- At least a portion of the video processor module 104 may be configured by the host processor module 120 and/or the core processor module 106 according to the determined order in which to scale the 3D video data. Such order may be based on an input format of the 3D video data, an output format of the 3D video data, and on a scaling factor. Moreover, the order in which to scale the 3D video data may be determined on a picture-by-picture basis. That is, the order in which to scale the 3D video data and the corresponding configuration of the video processor module 104 may be carried out for each picture in a video sequence that is received in the SoC. Once processed, the 3D video data may be communicated to one or more output devices by the SoC 100 .
- the SoC 100 may be operable to handle 3D video data in multiple input formats and multiple output formats.
- the complexity of the SoC 100 may increase significantly the larger the number of input and output formats supported.
- An approach that may simplify the SoC 100 and that may enable support for a large number of formats is to convert an input format into one of a subset of formats supported by the SoC for processing and have the SoC 100 perform the processing of the 3D video data in that format. Once the processing is completed, the processed 3D video data may be converted to the appropriate output format if such conversion is necessary.
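This normalize-process-convert flow might be sketched as follows; the format tags, the `convert` placeholder, and the `NATIVE_FORMAT` mapping are illustrative assumptions patterned after the formats discussed below, not the SoC's actual API:

```python
# Assumed mapping of supported input formats to the native format used
# for processing: side-by-side-like formats to L/R, top-and-bottom-like
# formats to O/U.
NATIVE_FORMAT = {
    "L/R": "L/R", "line-interleaved": "L/R", "checkerboard": "L/R",
    "O/U": "O/U", "O/U x2": "O/U", "multi-decode": "O/U",
}

def convert(picture, dst_fmt):
    # Placeholder for the real repacking; only the format tag is tracked.
    return dict(picture, fmt=dst_fmt)

def process_3d(picture, out_fmt, process):
    """Convert to a native format, process, then convert to the
    requested output format, skipping conversions that are unnecessary."""
    native = NATIVE_FORMAT[picture["fmt"]]
    if picture["fmt"] != native:
        picture = convert(picture, native)
    picture = process(picture)
    if picture["fmt"] != out_fmt:
        picture = convert(picture, out_fmt)
    return picture
```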
- FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention.
- a first packing scheme or first format 200 for 3D video data and a second packing scheme or second format 210 for 3D video data.
- Each of the first format 200 and the second format 210 illustrates the arrangement of the left eye content (L) and the right eye content (R) in a 3D video picture.
- a 3D video picture may refer to a 3D video frame or a 3D video field in a video sequence, whichever is appropriate.
- the L and R in the first format 200 are arranged in a side-by-side arrangement, which is typically referred to as a left-and-right (L/R) format.
- the L and R in the second format 210 are arranged in a top-and-bottom arrangement, which is typically referred to as an over-and-under (O/U) format.
- Another arrangement, one not shown in FIG. 2A, may be one in which the L is in a first 3D video picture and the R is in a second 3D video picture. Such an arrangement may be referred to as a sequential format because the 3D video pictures are processed sequentially.
- Both the first format 200 and the second format 210 may be utilized by the SoC 100 described above to process 3D video data and may be referred to as native formats of the SoC 100 .
- the SoC 100 may convert that input format to one of the first format 200 and the second format 210 , if such conversion is necessary.
- the SoC 100 may then process the 3D video data in a native format.
- the SoC 100 may convert the processed 3D video data into one of the multiple output formats supported by the SoC 100 , if such conversion is necessary.
- the SoC 100 may also be operable to process 3D video data in the sequential format, which is typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210 .
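For illustration, the two native packings can be split into their eye views with plain slicing; representing a frame as a list of pixel rows is an assumption made only for this sketch:

```python
def split_lr(frame):
    """Split a side-by-side (L/R) frame: each row holds the left-eye
    pixels followed by the right-eye pixels."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

def split_ou(frame):
    """Split a top-and-bottom (O/U) frame: the upper rows hold the
    left-eye view and the lower rows the right-eye view."""
    half = len(frame) // 2
    return frame[:half], frame[half:]
```

In the sequential format, by contrast, no intra-picture split is needed: the L and R content simply arrives as consecutive pictures.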
- an L/R input format 202 a may be converted to the first format 200 , which is also an L/R format.
- a line interleaved input format 204 a may be converted to the first format 200 .
- a checkerboard input format 206 a may be converted to the first format 200 .
- the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the first format 200 .
- the first format 200 may be converted to an L/R output format 202 b .
- the first format 200 may be converted to a line interleaved output format 204 b .
- the first format 200 may be converted to a checkerboard output format 206 b .
- the SoC 100 may determine the appropriate type of output format to which the first format 200 is to be converted.
- an O/U input format 212 a may be converted to the second format 210 , which is also an O/U format.
- an O/U ×2 input format 214 a may be converted to the second format 210 .
- a multi-decode input format 216 a may be converted to the second format 210 .
- the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the second format 210 .
- the second format 210 may be converted to an O/U output format 212 b .
- the second format 210 may be converted to an O/U ×2 output format 214 b .
- the SoC 100 may determine the appropriate type of output format to which the second format 210 is to be converted.
- the conversion operations supported by the SoC 100 may also comprise converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200 .
- 3D video data may be received in any one of multiple input formats, such as the input formats 202 a , 204 a , 206 a , 212 a , 214 a , and 216 a ( FIGS. 2B and 2D ).
- resulting processed 3D video data may be generated in any one of multiple output formats, such as the output formats 202 b , 204 b , 206 b , 212 b , and 214 b ( FIGS. 2C and 2E ).
- the various input formats and output formats described above with respect to FIGS. 2A-2E are provided by way of illustration and not of limitation.
- the SoC 100 may support additional input formats that may be converted to a native format such as the first format 200 , the second format 210 , and the sequential format.
- the SoC may support additional output formats to which a native format may be converted.
- FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention.
- a processing network 300 that may be part of the video processor module 104 in the SoC 100 , for example.
- the processing network 300 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to route and process video data, including 3D video data.
- the processing network 300 may comprise multiple devices, components, modules, blocks, circuits, or the like, that may be selectively interconnected to enable the routing and processing of video data.
- the various devices, components, modules, blocks, circuits, or the like in the processing network 300 may be dynamically configured and/or dynamically interconnected during the operation of the SoC 100 through one or more signals generated by the core processor module 106 and/or by the host processor module 120 .
- the configuration and/or the selective interconnection of various portions of the processing network 300 may be performed on a picture-by-picture basis when such an approach is appropriate to handle varying characteristics of the video data.
- the processing network 300 may comprise an MPEG feeder (MFD) module 302 , multiple video feeder (VFD) modules 304 , an HDMI module 306 , crossbar modules 310 a and 310 b , multiple scaler (SCL) modules 308 , a motion-adaptive deinterlacer (MAD) module 312 , a digital noise reduction (DNR) module 314 , multiple capture (CAP) modules 320 , and two compositor (CMP) modules 322 .
- DRAM may be utilized by the processing network 300 to handle storage of video data during various operations.
- DRAM may be part of the memory module 130 described above with respect to FIG. 1 .
- the DRAM may be part of memory embedded in the SoC 100 .
- the references to a video encoder (not shown) in FIG. 3A may be associated with hardware and/or software in the SoC 100 that may be utilized after the processing network 300 to further process video data for communication to an output device, such as a display device, for example.
- Each of the crossbar modules 310 a and 310 b may comprise multiple input ports and multiple output ports.
- the crossbar modules 310 a and 310 b may be configured such that any one of the input ports may be connected to one or more of the output ports.
- the crossbar modules 310 a and 310 b may enable pass-through connections 316 between one or more output ports of the crossbar module 310 a and corresponding input ports of the crossbar module 310 b .
- the crossbar modules 310 a and 310 b may enable feedback connections 318 between one or more output ports of the crossbar module 310 b and corresponding input ports of the crossbar module 310 a .
- the configuration of the crossbar modules 310 a and/or 310 b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed.
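A minimal software model of such a crossbar, written to mirror the one-input-to-many-outputs behavior described above, can clarify the routing; the class and method names here are hypothetical:

```python
class Crossbar:
    """Any input port may drive one or more output ports; each output
    port is driven by at most one input port."""

    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.routes = {}  # output port -> input port driving it

    def connect(self, in_port, out_ports):
        if not 0 <= in_port < self.n_in:
            raise ValueError("bad input port")
        for out_port in out_ports:
            if not 0 <= out_port < self.n_out:
                raise ValueError("bad output port")
            self.routes[out_port] = in_port

    def source_of(self, out_port):
        return self.routes.get(out_port)
```

In these terms, a pass-through connection 316 would correspond to feeding an output of the first crossbar to an input of the second, and a feedback connection 318 to the reverse.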
- the MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310 a .
- the video data read by the MFD module 302 may have been stored in memory after being generated by an MPEG encoder (not shown).
- Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310 a .
- the video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300 .
- the HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310 a .
- the HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310 a at another data rate.
- Each SCL module 308 may be operable to scale video data received from the crossbar module 310 a and provide the scaled video data to the crossbar module 310 b .
- the MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310 a , including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310 b .
- the DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310 a , including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310 b .
- the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308 .
- Each CAP module 320 may be operable to capture video data from the crossbar module 310 b and store the captured video data in memory.
- Each CMP module 322 may be operable to blend or combine video data received from the crossbar module 310 b with graphics data.
- FIG. 3A shows one CMP module 322 being provided with a graphics feed Gfxa that is blended by the CMP module 322 with video data received from the crossbar module 310 b before the combination is communicated to a video encoder.
- another CMP module 322 is provided with a graphics feed Gfxb that is blended by the CMP module 322 with video data received from the crossbar module 310 b before the combination is communicated to a video encoder.
- the SCL module 308 may operate in a first configuration when the 3D video data scaling comprises scaling down horizontally.
- the SCL module 308 may comprise a horizontal scaler (HSCL) module 330 , which may be configured to operate first and handle the horizontal scaling (sx) of the video data, and a vertical scaler (VSCL) module 332 , which may be configured to operate after the horizontal scaling and handle the vertical scaling (sy) of the video data.
- the overall scaling of the SCL module 308 in this configuration may be given by the product sx×sy.
- the input pixel rate of the SCL module 308 at node “in” is SCL in
- the output pixel rate of the HSCL module 330 at node “H” is SCL H
- the output pixel rate of the VSCL module 332 at node “V” is SCL V , which is the same as the output pixel rate of the SCL module 308 at node “out”, SCL out .
- the SCL module 308 may operate in a second configuration when the 3D video data scaling comprises scaling up horizontally.
- the VSCL module 332 may be configured to operate first and the HSCL module 330 may be configured to operate after the VSCL module 332 .
- the overall scaling of the SCL module 308 in this configuration may be given by the product sy×sx.
- the input pixel rate of the SCL module 308 at node “in” is SCL in
- the output pixel rate of the VSCL module 332 at node “V” is SCL V
- the output pixel rate of the HSCL module 330 at node “H” is SCL H , which is the same as the output pixel rate of the SCL module 308 at node “out”, SCL out .
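The motivation for the two orderings can be made concrete with a small sketch. Under the simplifying assumption that each scaler stage multiplies the pixel rate by its scale factor, running HSCL first when sx < 1 (and last when sx > 1) keeps the pixel rate at the internal node as low as possible; the function names are illustrative:

```python
def scl_order(sx):
    """Stage ordering inside the SCL module: HSCL first when scaling
    down horizontally, VSCL first when scaling up horizontally."""
    return ("HSCL", "VSCL") if sx < 1.0 else ("VSCL", "HSCL")

def intermediate_pixel_rate(scl_in, sx, sy):
    """Pixel rate at the internal node for the chosen ordering; the
    overall output rate is scl_in * sx * sy either way."""
    first, _ = scl_order(sx)
    return scl_in * (sx if first == "HSCL" else sy)
```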
- the processing network 300 may be utilized to scale and/or process 3D video data received by the SoC 100 in any one of the multiple input formats supported by the SoC 100 , such as those described above with respect to FIGS. 2B and 2D , for example.
- the scaled and/or processed 3D video data generated by the configured processing network 300 and/or one or more SCL modules 308 may be converted, if necessary, to any one of the multiple output formats supported by the SoC 100 , such as those described above with respect to FIGS. 2C and 2E , for example.
- FIGS. 4A and 4B illustrate format-related variables for L/R format and O/U format, respectively, in accordance with embodiments of the invention.
- In FIG. 4A, there is shown a 3D video data picture 400 that illustrates some of the variables associated with a side-by-side or left-and-right arrangement.
- FIG. 4B shows a 3D video data picture 410 that illustrates the same variables when associated with a top-and-bottom or over-and-under arrangement.
- a 3D video picture may be scaled up horizontally when ox>ix, may be scaled down horizontally when ox<ix, may be scaled up vertically when oy>iy, and may be scaled down vertically when oy<iy.
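These comparisons amount to the following small helper; the function name is illustrative only:

```python
def scaling_directions(ix, iy, ox, oy):
    """Classify the scaling implied by the input crop size (ix, iy)
    and the display area size (ox, oy)."""
    horizontal = "up" if ox > ix else ("down" if ox < ix else "none")
    vertical = "up" if oy > iy else ("down" if oy < iy else "none")
    return horizontal, vertical
```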
- the order in which the scaling of the 3D video data occurs with respect to the operations provided by the CAP module 320 and the VFD module 304 may depend on the characteristics of the input format of the 3D video data, the output format of the 3D video data, and the scaling that is to take place. In this regard, there may be bandwidth considerations when determining the appropriate order in which to carry out the scaling of the 3D video data, and consequently, the appropriate configuration of the processing network 300 .
- FIGS. 5A and 5B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention.
- a first configuration 500 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
- the first configuration 500 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310 a .
- the output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310 b .
- the scaled 3D video data may be captured by the CAP module 320 and may be stored in a memory 502 .
- the memory 502 may be a DRAM memory, for example.
- One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
- the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
- the pixel rate at node “A”, p_rate A is the same as the input pixel rate of the SCL module 308 , SCL in .
- the pixel rate at node “C”, p_rate C is associated with the output characteristics of the 3D video data.
- the real time scheduling, cap_rts 1 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
- ox is the width of the area on the display in which the input content is to be displayed, as indicated above with respect to FIGS. 4A and 4B
- N c is the burst size of the CAP module 320 in number of pixels.
- the real time scheduling, vfd_rts 1 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
- N V is the burst size of the VFD module 304 in number of pixels.
- a second configuration 510 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
- the second configuration 510 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
- the 3D video data may be captured by the CAP module 320 and may be stored in the memory 502 .
- One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310 a .
- the output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310 b .
- the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
- the pixel rate at node “C”, p_rate C may be the same as the output pixel rate of the SCL module 308 , SCL out .
- the pixel rate at node “A”, p_rate A may be associated with the input characteristics of the 3D video data.
- the real time scheduling, cap_rts 2 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
- ix is the width of the area of the picture that is to be cropped and displayed as indicated above with respect to FIGS. 4A and 4B .
- the real time scheduling, vfd_rts 2 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
- β = BW 2 / BW 1 = ⌈ix/N⌉/(⌈ox/N⌉ × sy), (15)
- BW 1 is the bandwidth associated with the first configuration 500
- BW 2 is the bandwidth associated with the second configuration 510
- β is the ratio of the two bandwidths.
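The comparison can be sketched as follows, using a plausible reading of equation (15), β = ⌈ix/N⌉/(⌈ox/N⌉·sy); both this reading and the helper names are assumptions, as is the decision threshold at β = 1:

```python
from math import ceil

def bandwidth_ratio(ix, ox, sy, N):
    """beta = BW2/BW1: memory bandwidth of the scale-after-retrieval
    configuration 510 relative to the scale-before-capture
    configuration 500, with burst size N in pixels."""
    return ceil(ix / N) / (ceil(ox / N) * sy)

def choose_configuration(ix, ox, sy, N):
    # beta < 1: the second configuration moves less data through DRAM.
    if bandwidth_ratio(ix, ox, sy, N) < 1.0:
        return "second configuration (scale after retrieval)"
    return "first configuration (scale before capture)"
```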
- FIGS. 6A and 6B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention.
- In FIG. 6A, there is shown a third configuration 600 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
- the third configuration 600 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A .
- the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310 a .
- the output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310 b .
- the scaled 3D video data may be captured by the CAP module 320 and may be stored in the memory 502 .
- One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
- the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
- the real time scheduling, cap_rts 3 may be determined as follows:
- cap_rts 3 = (ix/p_rate A × 1/sy)/⌈ox/N C ⌉. (16)
- the real time scheduling, vfd_rts 3 may be determined as follows:
- vfd_rts 3 = (ox/p_rate D )/⌈ox/N V ⌉, (17)
- p_rate D may be associated with the output characteristics of the 3D video data.
- a fourth configuration 610 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
- the fourth configuration 610 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B . That is, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
- the 3D video data may be captured by the CAP module 320 and may be stored in the memory 502 .
- One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310 a .
- the output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310 b .
- the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
- the real-time scheduling, cap_rts4, may be determined as follows:
- cap_rts4 = ix_p_rate_A / ⌈ix_N_C⌉, (18)
- the real-time scheduling, vfd_rts4, may be determined as follows:
- vfd_rts4 = (ox_p_rate_D × sy) / ⌈ix_N_V⌉, (19)
- the pixel rate at node “D”, p_rate_D, may be the same as the output pixel rate of the SCL module 308, SCL_out.
- BW 1 is the bandwidth associated with the third configuration 600
- BW 2 is the bandwidth associated with the fourth configuration 610
- the ratio of the two bandwidths, BW 1/BW 2, may be utilized to select between the third configuration 600 and the fourth configuration 610.
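The bandwidth comparison that drives the choice between the two configurations can be made concrete with a rough DRAM-traffic model. The byte counts, names, and decision rule below are illustrative assumptions, not the patent's actual bandwidth expressions:

```python
def picture_traffic(width, height, frame_rate, bytes_per_pixel=2):
    """Approximate bytes/s to write (or read) a picture stream to DRAM."""
    return width * height * frame_rate * bytes_per_pixel

def choose_scaling_order(in_dims, out_dims, frame_rate):
    """Scale before capture when the scaled picture generates less memory
    traffic (one write plus one read per picture); otherwise scale after
    the captured picture is retrieved."""
    bw_scale_first = 2 * picture_traffic(*out_dims, frame_rate)
    bw_scale_last = 2 * picture_traffic(*in_dims, frame_rate)
    ratio = bw_scale_first / bw_scale_last
    order = "before_capture" if ratio < 1.0 else "after_retrieval"
    return order, ratio

# Downscaling 1080p to 720p favors scaling before capture.
order, ratio = choose_scaling_order((1920, 1080), (1280, 720), 60)
```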
- FIGS. 7A and 7B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention.
- Referring to FIG. 7A, there is shown a fifth configuration 700 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
- the fifth configuration 700 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A .
- the real-time scheduling, cap_rts5, may be determined as follows:
- cap_rts5 = (ix_p_rate_B × (1/sy)) / ⌈ox_N_C⌉, (23)
- p_rate B may be the same as the input pixel rate of the SCL module 308 , SCL in .
- the real-time scheduling, vfd_rts5, may be determined as follows:
- vfd_rts5 = ox_p_rate_C / ⌈ox_N_V⌉. (24)
- Referring to FIG. 7B, there is shown a sixth configuration 710 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
- the sixth configuration 710 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B .
- the real-time scheduling, cap_rts6, may be determined as follows:
- cap_rts6 = ix_p_rate_B / ⌈ix_N_C⌉, (25)
- p_rate B may be associated with the input characteristics of the 3D video data.
- the real-time scheduling, vfd_rts6, may be determined as follows:
- vfd_rts6 = (ox_p_rate_C × sy) / ⌈ix_N_V⌉. (26)
- BW 1 is the bandwidth associated with the fifth configuration 700
- BW 2 is the bandwidth associated with the sixth configuration 710
- the ratio of the two bandwidths, BW 1/BW 2, may be utilized to select between the fifth configuration 700 and the sixth configuration 710.
- FIGS. 8A and 8B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention.
- Referring to FIG. 8A, there is shown a seventh configuration 800 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
- the seventh configuration 800 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A .
- the real-time scheduling, cap_rts7, may be determined as follows:
- cap_rts7 = (ix_p_rate_B × (1/sy)) / ⌈ox_N_C⌉. (28)
- the real-time scheduling, vfd_rts7, may be determined as follows:
- vfd_rts7 = ox_p_rate_D / ⌈ox_N_V⌉. (29)
- Referring to FIG. 8B, there is shown an eighth configuration 810 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
- the eighth configuration 810 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B .
- the real-time scheduling, cap_rts8, may be determined as follows:
- cap_rts8 = ix_p_rate_B / ⌈ix_N_C⌉. (30)
- the real-time scheduling, vfd_rts8, may be determined as follows:
- vfd_rts8 = (ox_p_rate_D × sy) / ⌈ix_N_V⌉. (31)
- BW 1 is the bandwidth associated with the seventh configuration 800
- BW 2 is the bandwidth associated with the eighth configuration 810
- the ratio of the two bandwidths, BW 1/BW 2, may be utilized to select between the seventh configuration 800 and the eighth configuration 810.
- FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080 progressive (1080p) O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention.
- the example shown corresponds to the fifth configuration 700 described above with respect to FIG. 7A .
- an input picture 900, which is formatted as 1080p O/U 3D video data, is provided to the processing network 300 for scaling and/or processing.
- the input picture 900 is scaled by a scaling operation 910 that is performed by, for example, one of the SCL modules 308 shown in FIG. 3A .
- a scaled picture 920 is then captured to memory by a capture operation 930 performed by, for example, one of the CAP modules 320 shown in FIG. 3A .
- the captured picture is retrieved from memory through a capture retrieval operation 940 performed by, for example, one of the VFD modules 304 shown in FIG. 3A .
- the retrieval of the captured picture, that is, the manner in which the 3D video data is read from the memory, is performed such that an output picture 950 is generated having a 720p L/R format.
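The per-eye geometry in this example is easy to check numerically. The helper below is an illustrative sketch (the names are not from the patent); it shows that each eye goes from 1920×540 in the packed input to 640×720 in the packed output:

```python
def per_eye_dimensions(full_width, full_height, packing):
    """Dimensions of one eye's view inside a packed 3D picture."""
    if packing == "L/R":          # side by side: the width is split
        return full_width // 2, full_height
    if packing == "O/U":          # top and bottom: the height is split
        return full_width, full_height // 2
    raise ValueError(f"unknown packing: {packing}")

in_eye = per_eye_dimensions(1920, 1080, "O/U")    # (1920, 540)
out_eye = per_eye_dimensions(1280, 720, "L/R")    # (640, 720)

# Per-eye scale factors the scaler would have to apply.
sx = in_eye[0] / out_eye[0]   # 3.0: horizontal downscale
sy = in_eye[1] / out_eye[1]   # 0.75: the height is actually upscaled
```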
- FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in the processing network 300 when configured for scaling 3D video data, in accordance with embodiments of the invention.
- Referring to FIG. 10A, there is shown a ninth configuration 1000 of the processing network 300 in which the location of the SCL module 308 is before the CAP module 320 .
- the ninth configuration 1000 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 . In this configuration, additional processing cores or operations may be performed on the 3D video data.
- a first core (P 1 ) module 1002 may be positioned before the SCL module 308
- a second core (P 2 ) module 1004 may be positioned after the SCL module 308
- a third core (P 3 ) module 1006 may be positioned after the VFD module 304 .
- the various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314 .
- Other modules not shown in FIG. 3A but that may be included in the processing network 300 , may also be utilized as core modules in the ninth configuration 1000 .
- Referring to FIG. 10B, the tenth configuration 1010 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
- additional processing cores or operations may be performed on the 3D video data.
- the P 1 module 1002 may be positioned before the CAP module 320 .
- the P 2 module 1004 may be positioned after the VFD module 304 and before the SCL module 308 .
- the P 3 module 1006 may be positioned after the SCL module 308 .
- the various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314 .
- Other modules not shown in FIG. 3A, but that may be included in the processing network 300, may also be utilized as core modules in the tenth configuration 1010 .
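The two orderings can be summarized as stage lists. The stage names mirror the module names above; the list form and the helper function are assumptions for illustration:

```python
# Ninth configuration: the scaler (SCL) runs before capture (CAP).
NINTH = ["P1", "SCL", "P2", "CAP", "DRAM", "VFD", "P3", "CMP"]
# Tenth configuration: the scaler runs after the feeder (VFD) retrieval.
TENTH = ["P1", "CAP", "DRAM", "VFD", "P2", "SCL", "P3", "CMP"]

def scaling_position(pipeline):
    """Report whether the scaler sits before capture or after retrieval."""
    if pipeline.index("SCL") < pipeline.index("CAP"):
        return "before_capture"
    return "after_retrieval"
```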
- FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in the configured processing network 300 , in accordance with an embodiment of the invention.
- Referring to FIG. 11, there is shown a flow chart 1100 in which, at step 1110 , the video processor module 104 in the SoC 100 may receive 3D video data from a source of such data.
- At step 1120 , the video processor module 104 and/or the host processor module 120 may determine whether to scale the 3D video data received before capture to memory through the video processor module 104 or after capture to memory and subsequent retrieval from memory through the video processor module 104 .
- the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A .
- the configuration may be based on the order or positioning determined in step 1120 regarding the scaling of the 3D video data.
- the 3D video data may be scaled by the configured processing network in the video processor module 104 .
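The steps of flow chart 1100 can be sketched as a small driver in which the decision of step 1120 selects the order of the stages. The callables stand in for the SCL, CAP, and VFD modules; all names are illustrative:

```python
def scale_3d_video(picture, scale_before_capture, scale, capture, feed):
    """Run the scale/capture/feed stages in the order chosen at step 1120."""
    if scale_before_capture:
        return feed(capture(scale(picture)))
    return scale(feed(capture(picture)))

# Trace the order of operations with list-appending stand-ins.
def stage(tag):
    return lambda p: p + [tag]

before = scale_3d_video([], True, stage("SCL"), stage("CAP"), stage("VFD"))
after = scale_3d_video([], False, stage("SCL"), stage("CAP"), stage("VFD"))
```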
- FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in the configured processing network 300 , in accordance with an embodiment of the invention.
- Referring to FIG. 12, there is shown a flow chart 1200 in which, at step 1210 , the video processor module 104 in the SoC 100 may receive 3D video data from multiple sources of such data.
- At step 1220 , the video processor module 104 and/or the host processor module 120 may determine, for each of the sources, whether to scale the 3D video data received before capture to memory through the video processor module 104 or after capture to memory and subsequent retrieval from memory through the video processor module 104 .
- the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A .
- the configuration may be based on the order or positioning determined in step 1220 regarding the scaling of the 3D video data for each of the sources.
- the processing network may be configured to have multiple paths for processing the 3D video data from the various sources of such data.
- the 3D video data from each source may be scaled by the configured processing network in the video processor module 104 .
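For the multi-source case of flow chart 1200, the per-source decision of step 1220 might be captured as a table of independent paths. The pixel-count heuristic and all names here are illustrative assumptions:

```python
def configure_paths(sources):
    """Assign each source its own path through the processing network,
    deciding per source whether scaling happens before capture (fewer
    pixels written to memory) or after retrieval."""
    paths = {}
    for name, (in_pixels, out_pixels) in sources.items():
        if out_pixels < in_pixels:              # downscale: shrink first
            paths[name] = ["SCL", "CAP", "VFD"]
        else:                                   # upscale: grow after read-back
            paths[name] = ["CAP", "VFD", "SCL"]
    return paths

paths = configure_paths({
    "hdmi": (1920 * 1080, 1280 * 720),   # downscaled source
    "mfd": (1280 * 720, 1920 * 1080),    # upscaled source
})
```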
- Various embodiments of the invention relate to an integrated circuit, such as the SoC 100 described above with respect to FIG. 1 , for example, which may be operable to selectively route and process 3D video data.
- the processing network 300 described above with respect to FIG. 3A may be utilized in the SoC 100 to route and process 3D video data.
- the integrated circuit may comprise multiple devices, such as the various modules in the processing network 300 , for example, which may be operable to be selectively interconnected to enable the routing and the processing of 3D video data.
- the integrated circuit may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory.
- the integrated circuit may be operable to selectively interconnect one or more of the multiple devices based on the determination.
- the integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.
- the input format of the 3D video data may be an L/R input format or an O/U input format, and the output format of the 3D video data may be an L/R output format or an O/U output format.
- the integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.
- the integrated circuit may be operable to determine the selective interconnection of the one or more devices on a picture-by-picture basis.
- the selectively interconnected devices in the integrated circuit may be operable to horizontally scale the 3D video data and to vertically scale the 3D video data. Moreover, the selectively interconnected devices in the integrated circuit may be operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
- a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for scaling 3D video.
- the present invention may be realized in hardware, software, or a combination of hardware and software.
- the present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
- a typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
- the present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
- Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
Description
- This application makes reference to, claims priority to, and claims the benefit of:
- U.S. Provisional Patent Application Ser. No. 61/267,729 (Attorney Docket No. 20428US01) filed on Dec. 8, 2009;
U.S. Provisional Patent Application Ser. No. 61/296,851 (Attorney Docket No. 22866US01) filed on Jan. 20, 2010; and
U.S. Provisional Patent Application Ser. No. 61/330,456 (Attorney Docket No. 23028US01) filed on May 3, 2010.
- This application also makes reference to:
- U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 20428US02) filed on Dec. 8, 2010;
U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23438US02) filed on Dec. 8, 2010;
U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23439US02) filed on Dec. 8, 2010; and
U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23440US02) filed on Dec. 8, 2010.
- Each of the above referenced applications is hereby incorporated herein by reference in its entirety.
- Certain embodiments of the invention relate to processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for scaling 3D video.
- The availability and access to 3D video content continues to grow. Such growth has brought about challenges regarding the handling of 3D video content from different types of sources and/or the reproduction of 3D video content on different types of displays.
- Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
- A system and/or method for scaling 3D video, as set forth more completely in the claims.
- Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
- FIG. 1 is a block diagram that illustrates a system-on-chip that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention.
- FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention.
- FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention.
- FIGS. 4A and 4B illustrate format-related variables for left-and-right (L/R) format and over-and-under (O/U) format, respectively, in accordance with embodiments of the invention.
- FIGS. 5A and 5B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention.
- FIGS. 6A and 6B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention.
- FIGS. 7A and 7B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention.
- FIGS. 8A and 8B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention.
- FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080p O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention.
- FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in a processing network configured for scaling 3D video data, in accordance with embodiments of the invention.
- FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in a configured processing network, in accordance with an embodiment of the invention.
- FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in a configured processing network, in accordance with an embodiment of the invention.
- Certain embodiments of the invention may be found in a method and system for scaling 3D video. Various embodiments of the invention relate to an integrated circuit (IC) comprising multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be an L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.
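The selective interconnection of devices can be pictured as a crossbar-style routing table in which each output port is driven by one chosen input port and the table may be rewritten for every picture. The class below is a toy model built on that reading of the text, not the actual hardware interface:

```python
class Crossbar:
    """Toy crossbar: route any input port to any subset of output ports."""

    def __init__(self, num_inputs, num_outputs):
        self.num_inputs = num_inputs
        self.num_outputs = num_outputs
        self.routes = {}                      # output port -> input port

    def connect(self, in_port, out_port):
        assert 0 <= in_port < self.num_inputs
        assert 0 <= out_port < self.num_outputs
        self.routes[out_port] = in_port       # one driver per output port

    def route(self, inputs):
        """Deliver each routed input's data to its output port."""
        return {out: inputs[inp] for out, inp in self.routes.items()}

xbar = Crossbar(4, 4)
xbar.connect(0, 2)   # feed input 0 (say, a decoder feed) to output 2
xbar.connect(0, 3)   # the same input may fan out to a second output
routed = xbar.route(["mfd", "vfd", "hdmi", "spare"])
```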
-
FIG. 1 is a block diagram that illustrates a system-on-chip (SoC) that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown an SoC 100, a host processor module 120, and a memory module 130. The SoC 100 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive and/or process one or more signals that comprise video content, including 3D video content. Examples of signals comprising video content that may be received and processed by the SoC 100 include, but need not be limited to, composite, blanking, and sync (CVBS) signals, separate video (S-video) signals, high-definition multimedia interface (HDMI) signals, component signals, personal computer (PC) signals, source input format (SIF) signals, YCrCb, and red, green, blue (RGB) signals. Such signals may be received by the SoC 100 from one or more video sources communicatively coupled to the SoC 100. - The
SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage. For example, output signals from the SoC 100 may be provided to display devices such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), thin film transistor LCDs (TFT-LCDs), light emitting diode (LED) displays, organic LED (OLED) displays, or other flat-screen display technology. The characteristics of the output signals, such as pixel rate and/or resolution, for example, may be based on the type of output device to which those signals are to be provided. - The
host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, parameters and/or other information, including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100. The memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100. For example, the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with 3D video data processing. - The
SoC 100 may comprise an interface module 102, a video processor module 104, and a core processor module 106. The SoC 100 may be implemented as a single integrated circuit comprising the components listed above. The interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100. - The
video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data associated with one or more signals received by the SoC 100. The video processor module 104 may be operable to support multiple video data formats, including multiple input formats and multiple output formats for 3D video data. The video processor module 104 may be operable to perform various types of operations on 3D video data, including but not limited to format conversion and/or scaling. In some embodiments, when the video content comprises audio data, the video processor module 104, and/or another module in the SoC 100, may be operable to handle the audio data. - The
core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content, including but not limited to the processing of 3D video data. In this regard, the core processor 106 may be operable to determine and/or calculate parameters associated with the processing of 3D video data that may be utilized to configure and/or operate the video processor module 104. In some embodiments of the invention, the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100. For example, the core processor module 106 may comprise memory that may be utilized during 3D video data processing by the video processor module 104. - In operation, the
SoC 100 may receive one or more signals comprising 3D video data through the interface module 102. When the 3D video data received in those signals is to be scaled, the video processor module 104 and/or the core processor module 106 may be utilized to determine whether to scale 3D video data in the video processor module 104 before the 3D video data is captured to memory through the video processor module 104 or after the captured 3D video data is retrieved from the memory through the video processor module 104. The memory into which the 3D video data is to be stored and from which it is to be subsequently retrieved may be a dynamic random access memory (DRAM) that may be part of the memory module 130 and/or of the core processor module 106, for example. - At least a portion of the
video processor module 104 may be configured by the host processor module 120 and/or the core processor module 106 according to the determined order in which to scale the 3D video data. Such order may be based on an input format of the 3D video data, an output format of the 3D video data, and on a scaling factor. Moreover, the order in which to scale the 3D video data may be determined on a picture-by-picture basis. That is, the order in which to scale the 3D video data and the corresponding configuration of the video processor module 104 may be carried out for each picture in a video sequence that is received in the SoC. Once processed, the 3D video data may be communicated to one or more output devices by the SoC 100. - As indicated above, the
SoC 100 may be operable to handle 3D video data in multiple input formats and multiple output formats. The complexity of the SoC 100, however, may increase significantly as the number of supported input and output formats grows. An approach that may simplify the SoC 100 and that may enable support for a large number of formats is to convert an input format into one of a subset of formats supported by the SoC for processing and have the SoC 100 perform the processing of the 3D video data in that format. Once the processing is completed, the processed 3D video data may be converted to the appropriate output format if such conversion is necessary.
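The format-funneling approach just described can be summarized as a lookup from each supported input format to the native format used for processing (per FIGS. 2B and 2D below); the dictionary keys are paraphrased format names, not identifiers from the patent:

```python
# Input formats funneled into the two native processing formats.
TO_NATIVE = {
    "L/R": "L/R",
    "line interleaved": "L/R",
    "checkerboard": "L/R",
    "O/U": "O/U",
    "O/U x2": "O/U",
    "multi-decode": "O/U",
}

def processing_format(input_format):
    """Native format the processing would run in for a given input."""
    return TO_NATIVE[input_format]
```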
- FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention. Referring to FIG. 2A, there is shown a first packing scheme or first format 200 for 3D video data and a second packing scheme or second format 210 for 3D video data. Each of the first format 200 and the second format 210 illustrates the arrangement of the left eye content (L) and the right eye content (R) in a 3D video picture. In this regard, a 3D video picture may refer to a 3D video frame or a 3D video field in a video sequence, whichever is appropriate. The L and R in the first format 200 are arranged in a side-by-side arrangement, which is typically referred to as a left-and-right (L/R) format. The L and R in the second format 210 are arranged in a top-and-bottom arrangement, which is typically referred to as an over-and-under (O/U) format. Another arrangement, one not shown in FIG. 2A, may be one in which the L is in a first 3D video picture and the R is in a second 3D video picture. Such arrangement may be referred to as a sequential format because the 3D video pictures are processed sequentially. - Both the
first format 200 and the second format 210 may be utilized by the SoC 100 described above to process 3D video data and may be referred to as native formats of the SoC 100. When 3D video data is received in one of the multiple input formats supported by the SoC 100, the SoC 100 may convert that input format to one of the first format 200 and the second format 210, if such conversion is necessary. The SoC 100 may then process the 3D video data in a native format. Once the 3D video data is processed, the SoC 100 may convert the processed 3D video data into one of the multiple output formats supported by the SoC 100, if such conversion is necessary. The SoC 100 may also be operable to process 3D video data in the sequential format, which is typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210. - Referring to
FIG. 2B, there is shown a conversion mapping of certain input formats 202 a, 204 a, and 206 a supported by the SoC 100 to the first format 200. For example, an L/R input format 202 a may be converted to the first format 200, which is also an L/R format. In another example, a line interleaved input format 204 a may be converted to the first format 200. In yet another example, a checkerboard input format 206 a may be converted to the first format 200. In each of these scenarios, the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the first format 200. - Referring to
FIG. 2C, there is shown a conversion mapping of the first format 200 to certain output formats 202 b, 204 b, and 206 b supported by the SoC 100. For example, the first format 200 may be converted to an L/R output format 202 b. In another example, the first format 200 may be converted to a line interleaved output format 204 b. In yet another example, the first format 200 may be converted to a checkerboard output format 206 b. In each of these scenarios, the SoC 100 may determine the appropriate type of output format to which the first format 200 is to be converted. - Referring to
FIG. 2D, there is shown a conversion mapping of certain input formats 212 a, 214 a, and 216 a supported by the SoC 100 to the second format 210. For example, an O/U input format 212 a may be converted to the second format 210, which is also an O/U format. In another example, an O/U ×2 input format 214 a may be converted to the second format 210. In yet another example, a multi-decode input format 216 a may be converted to the second format 210. In each of these scenarios, the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the second format 210. - Referring to
FIG. 2E, there is shown a conversion mapping of the second format 210 to certain output formats 212 b and 214 b supported by the SoC 100. For example, the second format 210 may be converted to an O/U output format 212 b. In another example, the second format 210 may be converted to an O/U ×2 output format 214 b. In each of these scenarios, the SoC 100 may determine the appropriate type of output format to which the second format 210 is to be converted. - The
SoC 100 may also comprise converting from thefirst format 200 to thesecond format 210 and converting from thesecond format 210 to thefirst format 200. In this manner, 3D video data may be received in any one of multiple input formats, such as the input formats 202 a, 204 a, 206 a, 212 a, 214 a, and 216 a (FIGS. 2B and 2D ). Accordingly, resulting processed 3D video data may be generated in any one of multiple output formats, such as the output formats 202 b, 204 b, 206 b, 212 b, and 214 b (FIGS. 2C and 2E ). - The various input formats and output formats described above with respect to
FIGS. 2A-2E are provided by way of illustration and not of limitation. TheSoC 100 may support additional input formats that may be converted to a native format such as thefirst format 200, thesecond format 210, and the sequential format. Similarly, the SoC may support additional output formats to which a native format may be converted. -
- FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention. Referring to FIG. 3A, there is shown a processing network 300 that may be part of the video processor module 104 in the SoC 100, for example. The processing network 300 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to route and process video data, including 3D video data. In this regard, the processing network 300 may comprise multiple devices, components, modules, blocks, circuits, or the like, that may be selectively interconnected to enable the routing and processing of video data. The various devices, components, modules, blocks, circuits, or the like in the processing network 300 may be dynamically configured and/or dynamically interconnected during the operation of the SoC 100 through one or more signals generated by the core processor module 106 and/or by the host processor module 120. In this regard, the configuration and/or the selective interconnection of various portions of the processing network 300 may be performed on a picture-by-picture basis when such an approach is appropriate to handle varying characteristics of the video data. - In the embodiment of the invention described in
FIG. 3A, the processing network 300 may comprise an MPEG feeder (MFD) module 302, multiple video feeder (VFD) modules 304, an HDMI module 306, crossbar modules 310a and 310b, multiple scaler (SCL) modules 308, a motion-adaptive deinterlacer (MAD) module 312, a digital noise reduction (DNR) module 314, multiple capture (CAP) modules 320, and two compositor (CMP) modules 322. Each of the above-listed components may be operable to handle video data, including 3D video data. The references to a memory (not shown) in FIG. 3A may be associated with a DRAM utilized by the processing network 300 to handle storage of video data during various operations. Such DRAM may be part of the memory module 130 described above with respect to FIG. 1. In some instances, the DRAM may be part of memory embedded in the SoC 100. The references to a video encoder (not shown) in FIG. 3A may be associated with hardware and/or software in the SoC 100 that may be utilized after the processing network 300 to further process video data for communication to an output device, such as a display device, for example. - Each of the
crossbar modules 310a and 310b may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to selectively route video data within the processing network 300. The crossbar modules 310a and 310b may support pass-through connections 316 between one or more output ports of the crossbar module 310a and corresponding input ports of the crossbar module 310b. Moreover, the crossbar modules 310a and 310b may support feedback connections 318 between one or more output ports of the crossbar module 310b and corresponding input ports of the crossbar module 310a. The configuration of the crossbar modules 310a and/or 310b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed. - The
MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the MFD module 302 may have been stored in memory after being generated by an MPEG encoder (not shown). Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300. The HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310a. The HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310a at another data rate. - Each
SCL module 308 may be operable to scale video data received from the crossbar module 310a and provide the scaled video data to the crossbar module 310b. The MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310a, including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310b. The DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310a, including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310b. In some embodiments of the invention, the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308. - Each
CAP module 320 may be operable to capture video data from the crossbar module 310b and store the captured video data in memory. Each CMP module 322 may be operable to blend or combine video data received from the crossbar module 310b with graphics data. For example, FIG. 3A shows one CMP module 322 being provided with a graphics feed Gfxa that is blended by the CMP module 322 with video data received from the crossbar module 310b before the combination is communicated to a video encoder. Similarly, another CMP module 322 is provided with a graphics feed Gfxb that is blended by the CMP module 322 with video data received from the crossbar module 310b before the combination is communicated to a video encoder. - Referring to
FIG. 3B, there is shown the SCL module 308 in a first configuration that may be utilized when the 3D video data scaling comprises scaling down horizontally. In this configuration, the SCL module 308 may comprise a horizontal scaler (HSCL) module 330, which may be configured to operate first and handle the horizontal scaling (sx) of the video data, and a vertical scaler (VSCL) module 332, which may be configured to operate after the horizontal scaling and handle the vertical scaling (sy) of the video data. The overall scaling of the SCL module 308 in this configuration may be given by the product sx·sy. The input pixel rate of the SCL module 308 at node "in" is SCLin, the output pixel rate of the HSCL module 330 at node "H" is SCLH, and the output pixel rate of the VSCL module 332 at node "V" is SCLV, which is the same as the output pixel rate of the SCL module 308 at node "out", SCLout. - Referring to
FIG. 3C, there is shown the SCL module 308 in a second configuration that may be utilized when the 3D video data scaling comprises scaling up horizontally. In this configuration, the VSCL module 332 may be configured to operate first and the HSCL module 330 may be configured to operate after the VSCL module 332. The overall scaling of the SCL module 308 in this configuration may be given by the product sy·sx. The input pixel rate of the SCL module 308 at node "in" is SCLin, the output pixel rate of the VSCL module 332 at node "V" is SCLV, and the output pixel rate of the HSCL module 330 at node "H" is SCLH, which is the same as the output pixel rate of the SCL module 308 at node "out", SCLout. - By configuring the
processing network 300 and/or one or more of the SCL modules 308, the processing network 300 may be utilized to scale and/or process 3D video data received by the SoC 100 in any one of the multiple input formats supported by the SoC 100, such as those described above with respect to FIGS. 2B and 2D, for example. Similarly, the scaled and/or processed 3D video data generated by the configured processing network 300 and/or one or more SCL modules 308 may be converted, if necessary, to any one of the multiple output formats supported by the SoC 100, such as those described above with respect to FIGS. 2C and 2E, for example. -
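The effect of the two scaler orderings of FIGS. 3B and 3C on the intermediate pixel rate can be sketched with a simple model; the numeric rates below are illustrative only.

```python
def scaler_rates(scl_in, sx, sy, horizontal_first):
    """Model the two-stage scaler of FIGS. 3B and 3C.

    Returns (intermediate_rate, scl_out).  The overall scale sx*sy is
    the same in either order; only the rate at the internal node
    ("H" in FIG. 3B, "V" in FIG. 3C) differs.
    """
    mid = scl_in * (sx if horizontal_first else sy)
    return mid, scl_in * sx * sy


# Scaling down horizontally (FIG. 3B): running the HSCL first keeps
# the intermediate rate low.  Scaling up horizontally (FIG. 3C):
# running the VSCL first defers the rate increase to the last stage.
mid_down, out_down = scaler_rates(100.0, 0.5, 1.0, horizontal_first=True)
mid_up, out_up = scaler_rates(100.0, 2.0, 1.0, horizontal_first=False)
```

In both cases the chosen order keeps the pixel rate at the internal node at or below the larger of the input and output rates.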
FIGS. 4A and 4B illustrate format-related variables for the L/R format and the O/U format, respectively, in accordance with embodiments of the invention. Referring to FIG. 4A, there is shown a 3D video data picture 400 that illustrates some of the variables associated with a side-by-side or left-and-right arrangement. FIG. 4B shows a 3D video data picture 410 that illustrates the same variables when associated with a top-and-bottom or over-and-under arrangement. For example, when the picture 400 or the picture 410 is associated with an input format, such as before the 3D video data is scaled and/or processed by the processing network 300, the variables may be described as follows: xtot=ixtot is the total width of the picture, ytot=iytot is the total height of the picture, xact=ixact is the active width of the picture, yact=iyact is the active height of the picture, x=ix is the width of the area of the picture that is to be cropped and displayed, and y=iy is the height of the area of the picture that is to be cropped and displayed. - When the
picture 400 or the picture 410 is associated with an output format, such as after the 3D video data is scaled and/or processed by the processing network 300, the variables may be described as follows: xtot=oxtot is the total width of the picture, ytot=oytot is the total height of the picture, xact=oxact is the active width of the picture, yact=oyact is the active height of the picture, x=ox is the width of the area of the display in which the input content is to be displayed, and y=oy is the height of the area of the display in which the input content is to be displayed. - Based on the variables described in
FIGS. 4A and 4B, a 3D video picture may be scaled up horizontally when ox>ix, may be scaled down horizontally when ox<ix, may be scaled up vertically when oy>iy, and may be scaled down vertically when oy<iy. - When 3D video data received by the
SoC 100 is scaled utilizing the processing network 300, the order in which the scaling of the 3D video data occurs with respect to the operations provided by the CAP module 320 and the VFD module 304 may depend on the characteristics of the input format of the 3D video data, the output format of the 3D video data, and the scaling that is to take place. In this regard, there may be bandwidth considerations when determining the appropriate order in which to carry out the scaling of the 3D video data, and consequently, the appropriate configuration of the processing network 300. Below are provided various scenarios that describe the selection of the order or positioning of the scaling operation in a sequence of operations that may be performed on 3D video data by the processing network 300. -
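In terms of the variables of FIGS. 4A and 4B, the ox/ix and oy/iy comparisons above can be restated as a small helper (the function name is illustrative):

```python
def scaling_directions(ix, iy, ox, oy):
    """Classify the horizontal and vertical scaling implied by the
    cropped input area (ix, iy) and the display area (ox, oy)."""
    horizontal = "up" if ox > ix else "down" if ox < ix else "none"
    vertical = "up" if oy > iy else "down" if oy < iy else "none"
    return horizontal, vertical
```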
FIGS. 5A and 5B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention. Referring to FIG. 5A, there is shown a first configuration 500 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The first configuration 500 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. For example, in the first configuration 500, the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310b. The scaled 3D video data may be captured by the CAP module 320 and may be stored in a memory 502. The memory 502 may be a DRAM memory, for example. One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310a and 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder. - In the
first configuration 500, the pixel rate at node "A", p_rateA, is the same as the input pixel rate of the SCL module 308, SCLin. The output pixel rate of the SCL module 308 is SCLout=SCLin·sx·sy=p_rateA·sx·sy. Moreover, the pixel rate at node "C", p_rateC, is associated with the output characteristics of the 3D video data. - With respect to the
CAP module 320 in the first configuration 500, the real time scheduling, cap_rts1, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows: -
- where ox is the width of the area of the display in which the input content is to be displayed, as indicated above with respect to FIGS. 4A and 4B, and Nc is the burst size of the CAP module 320 in number of pixels. - With respect to the
VFD module 304 in the first configuration 500, the real time scheduling, vfd_rts1, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows: -
- where Nv is the burst size of the VFD module 304 in number of pixels. - Referring to
FIG. 5B, there is shown a second configuration 510 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The second configuration 510 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. For example, in the second configuration 510, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310a and 310b. The 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder. - In the
second configuration 510, the pixel rate at node "C", p_rateC, may be the same as the output pixel rate of the SCL module 308, SCLout. The input pixel rate of the SCL module 308 may be SCLin=SCLout/(sx·sy)=p_rateC/(sx·sy). Moreover, the pixel rate at node "A", p_rateA, may be associated with the input characteristics of the 3D video data. - With respect to the
CAP module 320 in the second configuration 510, the real time scheduling, cap_rts2, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows: -
- where ix is the width of the area of the picture that is to be cropped and displayed, as indicated above with respect to FIGS. 4A and 4B. - With respect to the
VFD module 304 in the second configuration 510, the real time scheduling, vfd_rts2, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows: -
- A decision or selection as to whether to perform the scaling operation before capture, as in the first configuration 500, or after the captured data is retrieved from memory, as in the second configuration 510, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., Nc=Nv=N), the bandwidth calculations may be determined as follows: -
- where BW1 is the bandwidth associated with the first configuration 500, BW2 is the bandwidth associated with the second configuration 510, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the first configuration 500, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the second configuration 510. -
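The selection rule above can be sketched as follows. The text defines λ only as "the ratio of the two bandwidths"; the sketch assumes λ = BW1/BW2, which is consistent with the stated rule that λ<1 favors scaling before capture.

```python
def choose_scaler_position(bw1, bw2):
    """Pick the SCL module position from the two candidate bandwidths.

    bw1: bandwidth if scaling occurs before the CAP module (first
    configuration); bw2: bandwidth if scaling occurs after the VFD
    module (second configuration).  Assumes lam = bw1 / bw2, which is
    an assumption about the omitted formula.
    """
    lam = bw1 / bw2
    return "before_capture" if lam < 1 else "after_feeder"
```

The same decision rule is applied to the other input/output format pairings discussed in this description.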
FIGS. 6A and 6B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention. Referring to FIG. 6A, there is shown a third configuration 600 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The third configuration 600 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In the third configuration 600, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A. That is, the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310b. The scaled 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310a and 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder. - With respect to the
CAP module 320 in the third configuration 600, the real time scheduling, cap_rts3, may be determined as follows: -
- With respect to the
VFD module 304 in the third configuration 600, the real time scheduling, vfd_rts3, may be determined as follows: -
- where the pixel rate at node "D", p_rateD, may be associated with the output characteristics of the 3D video data. - Referring to
FIG. 6B, there is shown a fourth configuration 610 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The fourth configuration 610 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In the fourth configuration 610, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B. That is, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310a and 310b. The 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder. - With respect to the
CAP module 320 in the fourth configuration 610, the real time scheduling, cap_rts4, may be determined as follows: -
- With respect to the
VFD module 304 in the fourth configuration 610, the real time scheduling, vfd_rts4, may be determined as follows: -
- where the pixel rate at node "D", p_rateD, may be the same as the output pixel rate of the SCL module 308, SCLout. - A decision or selection as to whether to perform the scaling operation before capture, as in the third configuration 600, or after the captured data is retrieved from memory, as in the fourth configuration 610, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., Nc=Nv=N), the following ratio may be determined: -
- where BW1 is the bandwidth associated with the third configuration 600, BW2 is the bandwidth associated with the fourth configuration 610, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the third configuration 600, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the fourth configuration 610. -
FIGS. 7A and 7B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention. Referring to FIG. 7A, there is shown a fifth configuration 700 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The fifth configuration 700 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A. - With respect to the
CAP module 320 in the fifth configuration 700, the real time scheduling, cap_rts5, may be determined as follows: -
- where the pixel rate at node "B", p_rateB, may be the same as the input pixel rate of the SCL module 308, SCLin. - With respect to the
VFD module 304 in the fifth configuration 700, the real time scheduling, vfd_rts5, may be determined as follows: -
- Referring to
FIG. 7B, there is shown a sixth configuration 710 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The sixth configuration 710 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B. - With respect to the
CAP module 320 in the sixth configuration 710, the real time scheduling, cap_rts6, may be determined as follows: -
- where the pixel rate at node "B", p_rateB, may be associated with the input characteristics of the 3D video data. - With respect to the
VFD module 304 in the sixth configuration 710, the real time scheduling, vfd_rts6, may be determined as follows: -
- A decision or selection as to whether to perform the scaling operation before capture, as in the fifth configuration 700, or after the captured data is retrieved from memory, as in the sixth configuration 710, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., Nc=Nv=N), the following ratio may be determined: -
- where BW1 is the bandwidth associated with the fifth configuration 700, BW2 is the bandwidth associated with the sixth configuration 710, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the fifth configuration 700, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the sixth configuration 710. -
FIGS. 8A and 8B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention. Referring to FIG. 8A, there is shown a seventh configuration 800 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The seventh configuration 800 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A. - With respect to the
CAP module 320 in the seventh configuration 800, the real time scheduling, cap_rts7, may be determined as follows: -
- With respect to the
VFD module 304 in the seventh configuration 800, the real time scheduling, vfd_rts7, may be determined as follows: -
- Referring to
FIG. 8B, there is shown an eighth configuration 810 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The eighth configuration 810 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B. - With respect to the
CAP module 320 in the eighth configuration 810, the real time scheduling, cap_rts8, may be determined as follows: -
- With respect to the
VFD module 304 in the eighth configuration 810, the real time scheduling, vfd_rts8, may be determined as follows: -
- A decision or selection as to whether to perform the scaling operation before capture, as in the seventh configuration 800, or after the captured data is retrieved from memory, as in the eighth configuration 810, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., Nc=Nv=N), the following ratio may be determined: -
- where BW1 is the bandwidth associated with the seventh configuration 800, BW2 is the bandwidth associated with the eighth configuration 810, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the seventh configuration 800, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the eighth configuration 810. -
FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080 progressive (1080p) O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention. Referring to FIG. 9, the example shown corresponds to the fifth configuration 700 described above with respect to FIG. 7A. In this example, an input picture 900, which is formatted as 1080p O/U 3D video data, is provided to the processing network 300 for scaling and/or processing. The input picture 900 is scaled by a scaling operation 910 that is performed by, for example, one of the SCL modules 308 shown in FIG. 3A. A scaled picture 920 is then captured to memory by a capture operation 930 performed by, for example, one of the CAP modules 320 shown in FIG. 3A. The captured picture is retrieved from memory through a capture retrieval operation 940 performed by, for example, one of the VFD modules 304 shown in FIG. 3A. The retrieval of the captured picture, that is, the manner in which the 3D video data is read from the memory, is performed such that an output picture 950 is generated having a 720p L/R format. -
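The dimension arithmetic of this example can be sketched per eye, assuming full-resolution eye images (1920×1080 each in the O/U input, 1280×720 each in the L/R output); the exact packing used in FIG. 9 may differ.

```python
def scale_then_repack(eye_w, eye_h, out_w, out_h):
    """Per-eye arithmetic for the FIG. 9 style of operation: scale each
    eye of an O/U input, then read the captured data back out in an
    L/R arrangement.  Returns ((sx, sy), ou_size, lr_size), where the
    sizes are (width, height) of the full packed frame."""
    sx, sy = out_w / eye_w, out_h / eye_h
    ou_size = (out_w, 2 * out_h)   # scaled eyes still stacked
    lr_size = (2 * out_w, out_h)   # same pixels read out side by side
    return (sx, sy), ou_size, lr_size


(sx, sy), ou_size, lr_size = scale_then_repack(1920, 1080, 1280, 720)
```

The repacking itself moves no additional pixels; it only changes the order in which the VFD module reads the captured data from memory.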
FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in the processing network 300 when configured for scaling 3D video data, in accordance with embodiments of the invention. Referring to FIG. 10A, there is shown a ninth configuration 1000 of the processing network 300 in which the location of the SCL module 308 is before the CAP module 320. The ninth configuration 1000 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, additional processing cores or operations may be performed on the 3D video data. For example, a first core (P1) module 1002 may be positioned before the SCL module 308, while a second core (P2) module 1004 may be positioned after the SCL module 308. Moreover, a third core (P3) module 1006 may be positioned after the VFD module 304. The various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314. Other modules not shown in FIG. 3A, but that may be included in the processing network 300, may also be utilized as core modules in the ninth configuration 1000. - Referring to
FIG. 10B, there is shown a tenth configuration 1010 of the processing network 300 in which the location of the SCL module 308 is after the VFD module 304. The tenth configuration 1010 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, additional processing cores or operations may be performed on the 3D video data. For example, the P1 module 1002 may be positioned before the CAP module 320. The P2 module 1004 may be positioned after the VFD module 304 and before the SCL module 308. Moreover, the P3 module 1006 may be positioned after the SCL module 308. As indicated above, the various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314. Other modules not shown in FIG. 3A, but that may be included in the processing network 300, may also be utilized as core modules in the tenth configuration 1010. -
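The difference between the ninth and tenth configurations reduces to where the SCL stage sits in the chain of operations, which can be sketched as follows (the stage callables are stand-ins for the actual modules):

```python
def run_chain(frame, stages, order):
    """Apply the named processing stages to `frame` in `order`.

    `stages` maps a stage name (P1, SCL, P2, CAP, VFD, P3) to a
    callable standing in for the corresponding module."""
    for name in order:
        frame = stages[name](frame)
    return frame


# Ninth configuration (FIG. 10A): scaling before capture.
NINTH = ["P1", "SCL", "P2", "CAP", "VFD", "P3"]
# Tenth configuration (FIG. 10B): scaling after retrieval from memory.
TENTH = ["P1", "CAP", "VFD", "P2", "SCL", "P3"]
```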
FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in the configured processing network 300, in accordance with an embodiment of the invention. Referring to FIG. 11, there is shown a flow chart 1100 in which, at step 1110, the video processor module 104 in the SoC 100 may receive 3D video data from a source of such data. At step 1120, the video processor module 104 and/or the host processor module 120 may determine whether to scale the received 3D video data before capture to memory through the video processor module 104 or after capture to memory and subsequent retrieval from memory through the video processor module 104. - At
step 1130, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A. The configuration may be based on the order or positioning determined in step 1120 regarding the scaling of the 3D video data. At step 1140, the 3D video data may be scaled by the configured processing network in the video processor module 104. -
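The steps of flow chart 1100 can be sketched as a small driver; `scale`, `capture`, and `retrieve` are stand-in callables for the SCL, CAP, and VFD operations, and the dictionary stands in for the DRAM:

```python
def scale_3d(frame, scale_before_capture, scale, capture, retrieve):
    """Flow chart 1100 in miniature: scaling runs either before the
    frame is captured to memory or after it is retrieved from memory,
    per the determination of step 1120."""
    if scale_before_capture:
        return retrieve(capture(scale(frame)))
    return scale(retrieve(capture(frame)))


memory = {}

def demo_capture(frame):
    memory["buf"] = frame      # CAP: store to the stand-in "DRAM"
    return "buf"

def demo_retrieve(key):
    return memory[key]         # VFD: read back from the "DRAM"

demo_scale = lambda frame: frame * 2   # stand-in SCL operation

pre = scale_3d(3, True, demo_scale, demo_capture, demo_retrieve)
post = scale_3d(3, False, demo_scale, demo_capture, demo_retrieve)
```

Either ordering yields the same scaled result; the configurations differ in memory bandwidth, not in output.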
FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in the configured processing network 300, in accordance with an embodiment of the invention. Referring to FIG. 12, there is shown a flow chart 1200 in which, at step 1210, the video processor module 104 in the SoC 100 may receive 3D video data from multiple sources of such data. At step 1220, the video processor module 104 and/or the host processor module 120 may determine, for each of the sources, whether to scale the received 3D video data before capture to memory through the video processor module 104 or after capture to memory and subsequent retrieval from memory through the video processor module 104. - At
step 1230, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A. The configuration may be based on the order or positioning determined in step 1220 regarding the scaling of the 3D video data for each of the sources. In this regard, the processing network may be configured to have multiple paths for processing the 3D video data from the various sources of such data. At step 1240, the 3D video data from each source may be scaled by the configured processing network in the video processor module 104. - Various embodiments of the invention relate to an integrated circuit, such as the
SoC 100 described above with respect to FIG. 1, for example, which may be operable to selectively route and process 3D video data. For example, the processing network 300 described above with respect to FIG. 3A may be utilized in the SoC 100 to route and process 3D video data. The integrated circuit may comprise multiple devices, such as the various modules in the processing network 300, for example, which may be operable to be selectively interconnected to enable the routing and the processing of 3D video data. The integrated circuit may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory. Moreover, the integrated circuit may be operable to selectively interconnect one or more of the multiple devices based on the determination. - The integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor. The input format of the 3D video data may be an L/R input format or an O/U input format and the output format of the 3D video data may be an L/R output format or an O/U output format. The integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data. The integrated circuit may be operable to determine the selective interconnection of the one or more devices on a picture-by-picture basis.
- The selectively interconnected devices in the integrated circuit may be operable to horizontally scale the 3D video data and to vertically scale the 3D video data. Moreover, the selectively interconnected devices in the integrated circuit may be operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
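As a minimal sketch of horizontal and vertical scaling of packed 3D video, the following applies separable nearest-neighbor resampling to each view of an L/R (side-by-side) picture independently, so the two views stay aligned after scaling. The helper names and the nearest-neighbor choice are assumptions for illustration; the patent does not specify its scalers at this level:

```python
# Illustrative sketch (not the patent's implementation): scale each view of
# an L/R-packed picture independently using separable nearest-neighbor
# resampling. A picture is a list of rows; the left view occupies the left
# half of each row and the right view the right half.

def scale_plane(plane, out_w, out_h):
    """Nearest-neighbor scale of one view to out_w x out_h."""
    in_h, in_w = len(plane), len(plane[0])
    return [[plane[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

def scale_lr_picture(picture, out_w, out_h):
    """Scale an L/R-packed picture; each half-width view is scaled on its own."""
    half = len(picture[0]) // 2
    left = [row[:half] for row in picture]
    right = [row[half:] for row in picture]
    left_s = scale_plane(left, out_w // 2, out_h)
    right_s = scale_plane(right, out_w // 2, out_h)
    # Repack the two scaled views side by side at the output resolution.
    return [l + r for l, r in zip(left_s, right_s)]
```

An O/U (over/under) picture could be handled the same way by splitting rows instead of columns; the key point is that each view is scaled separately so the stereo pairing is preserved.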
- In another embodiment of the invention, a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for scaling 3D video.
- Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
- The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
- While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
Claims (20)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/963,014 US20110134217A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for scaling 3d video |
US12/962,995 US9137513B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for mixing video and graphics |
US14/819,728 US9307223B2 (en) | 2009-12-08 | 2015-08-06 | Method and system for mixing video and graphics |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US26772909P | 2009-12-08 | 2009-12-08 | |
US29685110P | 2010-01-20 | 2010-01-20 | |
US33045610P | 2010-05-03 | 2010-05-03 | |
US12/963,014 US20110134217A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for scaling 3d video |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110134217A1 true US20110134217A1 (en) | 2011-06-09 |
Family
ID=44081627
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/962,995 Active 2033-01-01 US9137513B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for mixing video and graphics |
US12/963,014 Abandoned US20110134217A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for scaling 3d video |
US12/963,212 Abandoned US20110134211A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for handling multiple 3-d video formats |
US12/963,320 Expired - Fee Related US8947503B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for processing 3-D video |
US12/963,035 Abandoned US20110134218A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for utilizing mosaic mode to create 3d video |
US14/819,728 Active US9307223B2 (en) | 2009-12-08 | 2015-08-06 | Method and system for mixing video and graphics |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/962,995 Active 2033-01-01 US9137513B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for mixing video and graphics |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/963,212 Abandoned US20110134211A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for handling multiple 3-d video formats |
US12/963,320 Expired - Fee Related US8947503B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for processing 3-D video |
US12/963,035 Abandoned US20110134218A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for utilizing mosaic mode to create 3d video |
US14/819,728 Active US9307223B2 (en) | 2009-12-08 | 2015-08-06 | Method and system for mixing video and graphics |
Country Status (4)
Country | Link |
---|---|
US (6) | US9137513B2 (en) |
EP (1) | EP2462748A4 (en) |
CN (1) | CN102474632A (en) |
WO (1) | WO2011072016A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9407902B1 (en) | 2011-04-10 | 2016-08-02 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
US9485494B1 (en) * | 2011-04-10 | 2016-11-01 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008106185A (en) * | 2006-10-27 | 2008-05-08 | Shin Etsu Chem Co Ltd | Method for adhering thermally conductive silicone composition, primer for adhesion of thermally conductive silicone composition and method for production of adhesion composite of thermally conductive silicone composition |
CN102474632A (en) * | 2009-12-08 | 2012-05-23 | 美国博通公司 | Method and system for handling multiple 3-d video formats |
US8565516B2 (en) * | 2010-02-05 | 2013-10-22 | Sony Corporation | Image processing apparatus, image processing method, and program |
US9414042B2 (en) * | 2010-05-05 | 2016-08-09 | Google Technology Holdings LLC | Program guide graphics and video in window for 3DTV |
US8768044B2 (en) | 2010-09-14 | 2014-07-01 | Texas Instruments Incorporated | Automatic convergence of stereoscopic images based on disparity maps |
US20120281064A1 (en) * | 2011-05-03 | 2012-11-08 | Citynet LLC | Universal 3D Enabler and Recorder |
US20130044192A1 (en) * | 2011-08-17 | 2013-02-21 | Google Inc. | Converting 3d video into 2d video based on identification of format type of 3d video and providing either 2d or 3d video based on identification of display device type |
US20130147912A1 (en) * | 2011-12-09 | 2013-06-13 | General Instrument Corporation | Three dimensional video and graphics processing |
US9069374B2 (en) | 2012-01-04 | 2015-06-30 | International Business Machines Corporation | Web video occlusion: a method for rendering the videos watched over multiple windows |
WO2015192557A1 (en) * | 2014-06-19 | 2015-12-23 | 杭州立体世界科技有限公司 | Control circuit for high-definition naked-eye portable stereo video player and stereo video conversion method |
US9716913B2 (en) * | 2014-12-19 | 2017-07-25 | Texas Instruments Incorporated | Generation of a video mosaic display |
CN108419068A (en) * | 2018-05-25 | 2018-08-17 | 张家港康得新光电材料有限公司 | A kind of 3D rendering treating method and apparatus |
CN111263231B (en) * | 2018-11-30 | 2022-07-15 | 西安诺瓦星云科技股份有限公司 | Window setting method, device, system and computer readable medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6661427B1 (en) * | 1998-11-09 | 2003-12-09 | Broadcom Corporation | Graphics display system with video scaler |
US20040218269A1 (en) * | 2002-01-14 | 2004-11-04 | Divelbiss Adam W. | General purpose stereoscopic 3D format conversion system and method |
US20040233994A1 (en) * | 2003-05-22 | 2004-11-25 | Lsi Logic Corporation | Reconfigurable computing based multi-standard video codec |
US20040243947A1 (en) * | 2003-05-30 | 2004-12-02 | Neolinear, Inc. | Method and apparatus for quantifying tradeoffs for multiple competing goals in circuit design |
US8373802B1 (en) * | 2009-09-01 | 2013-02-12 | Disney Enterprises, Inc. | Art-directable retargeting for streaming video |
Family Cites Families (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5481275A (en) * | 1992-11-02 | 1996-01-02 | The 3Do Company | Resolution enhancement for video display using multi-line interpolation |
CN1311957A (en) * | 1998-08-03 | 2001-09-05 | 赤道技术公司 | Circuit and method for generating filler pixels from the original pixels in a video stream |
US6704042B2 (en) * | 1998-12-10 | 2004-03-09 | Canon Kabushiki Kaisha | Video processing apparatus, control method therefor, and storage medium |
AU2001287258A1 (en) * | 2000-03-17 | 2001-10-03 | Thomson Licensing S.A. | Method and apparatus for simultaneous recording and displaying two different video programs |
TW580826B (en) * | 2001-01-12 | 2004-03-21 | Vrex Inc | Method and apparatus for stereoscopic display using digital light processing |
US20030103136A1 (en) * | 2001-12-05 | 2003-06-05 | Koninklijke Philips Electronics N.V. | Method and system for 2D/3D illusion generation |
US7236207B2 (en) * | 2002-01-22 | 2007-06-26 | Broadcom Corporation | System and method of transmission and reception of progressive content with isolated fields for conversion to interlaced display |
CA2380105A1 (en) * | 2002-04-09 | 2003-10-09 | Nicholas Routhier | Process and system for encoding and playback of stereoscopic video sequences |
US7804995B2 (en) * | 2002-07-02 | 2010-09-28 | Reald Inc. | Stereoscopic format converter |
KR100488804B1 (en) * | 2002-10-07 | 2005-05-12 | 한국전자통신연구원 | System for data processing of 2-view 3dimention moving picture being based on MPEG-4 and method thereof |
US9377987B2 (en) * | 2002-10-22 | 2016-06-28 | Broadcom Corporation | Hardware assisted format change mechanism in a display controller |
US7113221B2 (en) * | 2002-11-06 | 2006-09-26 | Broadcom Corporation | Method and system for converting interlaced formatted video to progressive scan video |
US7154555B2 (en) * | 2003-01-10 | 2006-12-26 | Realnetworks, Inc. | Automatic deinterlacing and inverse telecine |
US7098868B2 (en) * | 2003-04-08 | 2006-08-29 | Microsoft Corporation | Display source divider |
JP4251907B2 (en) * | 2003-04-17 | 2009-04-08 | シャープ株式会社 | Image data creation device |
US20040239757A1 (en) * | 2003-05-29 | 2004-12-02 | Alden Ray M. | Time sequenced user space segmentation for multiple program and 3D display |
US20070216808A1 (en) * | 2003-06-30 | 2007-09-20 | Macinnis Alexander G | System, method, and apparatus for scaling pictures |
US7420618B2 (en) * | 2003-12-23 | 2008-09-02 | Genesis Microchip Inc. | Single chip multi-function display controller and method of use thereof |
US7262818B2 (en) * | 2004-01-02 | 2007-08-28 | Trumpion Microelectronic Inc. | Video system with de-motion-blur processing |
CA2557534A1 (en) * | 2004-02-27 | 2005-09-09 | Td Vision Corporation S.A. De C.V. | Method and system for digital decoding 3d stereoscopic video images |
EP1617370B1 (en) * | 2004-07-15 | 2013-01-23 | Samsung Electronics Co., Ltd. | Image format transformation |
KR100716982B1 (en) * | 2004-07-15 | 2007-05-10 | 삼성전자주식회사 | Multi-dimensional video format transforming apparatus and method |
CN1756317A (en) * | 2004-10-01 | 2006-04-05 | 三星电子株式会社 | The equipment of transforming multidimensional video format and method |
US20060139448A1 (en) * | 2004-12-29 | 2006-06-29 | Samsung Electronics Co., Ltd. | 3D displays with flexible switching capability of 2D/3D viewing modes |
KR100932977B1 (en) * | 2005-07-05 | 2009-12-21 | 삼성모바일디스플레이주식회사 | Stereoscopic video display |
KR100898287B1 (en) * | 2005-07-05 | 2009-05-18 | 삼성모바일디스플레이주식회사 | Stereoscopic image display device |
JP2007080357A (en) * | 2005-09-13 | 2007-03-29 | Toshiba Corp | Information storage medium, information reproducing method, information reproducing apparatus |
US7711200B2 (en) * | 2005-09-29 | 2010-05-04 | Apple Inc. | Video acquisition with integrated GPU processing |
JP2007115293A (en) * | 2005-10-17 | 2007-05-10 | Toshiba Corp | Information storage medium, program, information reproducing method, information reproducing apparatus, data transfer method, and data processing method |
US20070140187A1 (en) * | 2005-12-15 | 2007-06-21 | Rokusek Daniel S | System and method for handling simultaneous interaction of multiple wireless devices in a vehicle |
WO2007117485A2 (en) * | 2006-04-03 | 2007-10-18 | Sony Computer Entertainment Inc. | Screen sharing method and apparatus |
JP4929819B2 (en) * | 2006-04-27 | 2012-05-09 | 富士通株式会社 | Video signal conversion apparatus and method |
US8106917B2 (en) * | 2006-06-29 | 2012-01-31 | Broadcom Corporation | Method and system for mosaic mode display of video |
US8330801B2 (en) * | 2006-12-22 | 2012-12-11 | Qualcomm Incorporated | Complexity-adaptive 2D-to-3D video sequence conversion |
US8594180B2 (en) * | 2007-02-21 | 2013-11-26 | Qualcomm Incorporated | 3D video encoding |
US20080285652A1 (en) * | 2007-05-14 | 2008-11-20 | Horizon Semiconductors Ltd. | Apparatus and methods for optimization of image and motion picture memory access |
US8479253B2 (en) * | 2007-12-17 | 2013-07-02 | Ati Technologies Ulc | Method, apparatus and machine-readable medium for video processing capability communication between a video source device and a video sink device |
KR101539935B1 (en) * | 2008-06-24 | 2015-07-28 | 삼성전자주식회사 | Method and apparatus for processing 3D video image |
KR101664419B1 (en) * | 2008-10-10 | 2016-10-10 | 엘지전자 주식회사 | Reception system and data processing method |
JP2010140235A (en) * | 2008-12-11 | 2010-06-24 | Sony Corp | Image processing apparatus, image processing method, and program |
KR20110113186A (en) * | 2009-01-20 | 2011-10-14 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Method and system for transmitting over a video interface and for compositing 3d video and 3d overlays |
US20100254453A1 (en) * | 2009-04-02 | 2010-10-07 | Qualcomm Incorporated | Inverse telecine techniques |
JP4748251B2 (en) * | 2009-05-12 | 2011-08-17 | パナソニック株式会社 | Video conversion method and video conversion apparatus |
CN102577398B (en) * | 2009-06-05 | 2015-11-25 | Lg电子株式会社 | Image display device and method of operation thereof |
US8614737B2 (en) * | 2009-09-11 | 2013-12-24 | Disney Enterprises, Inc. | System and method for three-dimensional video capture workflow for dynamic rendering |
US20110126160A1 (en) * | 2009-11-23 | 2011-05-26 | Samsung Electronics Co., Ltd. | Method of providing 3d image and 3d display apparatus using the same |
CN102474632A (en) * | 2009-12-08 | 2012-05-23 | 美国博通公司 | Method and system for handling multiple 3-d video formats |
US20110157322A1 (en) * | 2009-12-31 | 2011-06-30 | Broadcom Corporation | Controlling a pixel array to support an adaptable light manipulator |
KR20110096494A (en) * | 2010-02-22 | 2011-08-30 | 엘지전자 주식회사 | Electronic device and method for displaying stereo-view or multiview sequence image |
KR101699738B1 (en) * | 2010-04-30 | 2017-02-13 | 엘지전자 주식회사 | Operating Method for Image Display Device and Shutter Glass for the Image Display Device |
US9414042B2 (en) * | 2010-05-05 | 2016-08-09 | Google Technology Holdings LLC | Program guide graphics and video in window for 3DTV |
US8553072B2 (en) * | 2010-11-23 | 2013-10-08 | Circa3D, Llc | Blanking inter-frame transitions of a 3D signal |
KR20120126458A (en) * | 2011-05-11 | 2012-11-21 | 엘지전자 주식회사 | Method for processing broadcasting signal and display device thereof |
US20130044192A1 (en) * | 2011-08-17 | 2013-02-21 | Google Inc. | Converting 3d video into 2d video based on identification of format type of 3d video and providing either 2d or 3d video based on identification of display device type |
JP5319796B2 (en) * | 2012-01-12 | 2013-10-16 | 株式会社東芝 | Information processing apparatus and display control method |
- 2010
- 2010-12-08 CN CN2010800296617A patent/CN102474632A/en active Pending
- 2010-12-08 US US12/962,995 patent/US9137513B2/en active Active
- 2010-12-08 WO PCT/US2010/059469 patent/WO2011072016A1/en active Application Filing
- 2010-12-08 US US12/963,014 patent/US20110134217A1/en not_active Abandoned
- 2010-12-08 US US12/963,212 patent/US20110134211A1/en not_active Abandoned
- 2010-12-08 US US12/963,320 patent/US8947503B2/en not_active Expired - Fee Related
- 2010-12-08 EP EP10836612.1A patent/EP2462748A4/en not_active Withdrawn
- 2010-12-08 US US12/963,035 patent/US20110134218A1/en not_active Abandoned
- 2015
- 2015-08-06 US US14/819,728 patent/US9307223B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6661427B1 (en) * | 1998-11-09 | 2003-12-09 | Broadcom Corporation | Graphics display system with video scaler |
US20040218269A1 (en) * | 2002-01-14 | 2004-11-04 | Divelbiss Adam W. | General purpose stereoscopic 3D format conversion system and method |
US20040233994A1 (en) * | 2003-05-22 | 2004-11-25 | Lsi Logic Corporation | Reconfigurable computing based multi-standard video codec |
US20040243947A1 (en) * | 2003-05-30 | 2004-12-02 | Neolinear, Inc. | Method and apparatus for quantifying tradeoffs for multiple competing goals in circuit design |
US8373802B1 (en) * | 2009-09-01 | 2013-02-12 | Disney Enterprises, Inc. | Art-directable retargeting for streaming video |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9407902B1 (en) | 2011-04-10 | 2016-08-02 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
US9485494B1 (en) * | 2011-04-10 | 2016-11-01 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
US10681333B2 (en) * | 2011-04-10 | 2020-06-09 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
US11575870B2 (en) | 2011-04-10 | 2023-02-07 | Nevermind Capital Llc | 3D video encoding and decoding methods and apparatus |
Also Published As
Publication number | Publication date |
---|---|
US20110134218A1 (en) | 2011-06-09 |
EP2462748A1 (en) | 2012-06-13 |
EP2462748A4 (en) | 2013-11-13 |
US9307223B2 (en) | 2016-04-05 |
CN102474632A (en) | 2012-05-23 |
US20110134216A1 (en) | 2011-06-09 |
US20110134211A1 (en) | 2011-06-09 |
US9137513B2 (en) | 2015-09-15 |
WO2011072016A1 (en) | 2011-06-16 |
US20150341613A1 (en) | 2015-11-26 |
US8947503B2 (en) | 2015-02-03 |
US20110134212A1 (en) | 2011-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20110134217A1 (en) | Method and system for scaling 3d video | |
US20100177161A1 (en) | Multiplexed stereoscopic video transmission | |
US20150348509A1 (en) | Dynamic frame repetition in a variable refresh rate system | |
US8922622B2 (en) | Image processing device, image processing method, and program | |
JP7262877B2 (en) | Adaptive High Dynamic Range Tonemapping with Overlay Directives | |
US20140333838A1 (en) | Image processing method | |
CN102291587B (en) | Full high-definition 3D (Three Dimensional) video processing method | |
KR20120047055A (en) | Display apparatus and method for providing graphic image | |
CN108345559B (en) | Virtual reality data input device and virtual reality equipment | |
JP2011114861A (en) | 3d image display apparatus and display method | |
US8184137B2 (en) | System and method for ordering of scaling and capturing in a video system | |
US9020044B2 (en) | Method and apparatus for writing video data in raster order and reading video data in macroblock order | |
US9239697B2 (en) | Display multiplier providing independent pixel resolutions | |
US20140118620A1 (en) | Video/Audio Signal Processing Apparatus | |
US20130009998A1 (en) | Display control system | |
US7227554B2 (en) | Method and system for providing accelerated video processing in a communication device | |
JP5015089B2 (en) | Frame rate conversion device, frame rate conversion method, television receiver, frame rate conversion program, and recording medium recording the program | |
US20130050183A1 (en) | System and Method of Rendering Stereoscopic Images | |
US7526186B2 (en) | Method of scaling subpicture data and related apparatus | |
JP2006303631A (en) | On-screen display device and on-screen display generation method | |
US8896615B2 (en) | Image processing device, projector, and image processing method | |
JP2003189262A (en) | Method for integrating three-dimensional y/c comb line filter and interlace/progressive converter into single chip and system thereof | |
KR20100005273A (en) | Multi-vision system and picture visualizing method the same | |
CN103634534A (en) | Image processing device, image processing method, and program | |
KR20190017286A (en) | Method for Image Processing and Display Device using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NEUMAN, DARREN;HERRICK, JASON;ZHAO, QINGHUA;AND OTHERS;SIGNING DATES FROM 20101110 TO 20101201;REEL/FRAME:025724/0254 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001 Effective date: 20160201 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001 Effective date: 20170120 |
|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001 Effective date: 20170119 |