US20110134217A1 - Method and system for scaling 3d video - Google Patents

Method and system for scaling 3D video

Publication number
US20110134217A1
Authority
US
United States
Prior art keywords
video data
format
module
input
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/963,014
Inventor
Darren Neuman
Jason Herrick
Qinghua Zhao
Christopher Payson
Current Assignee
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date
Filing date
Publication date
Application filed by Broadcom Corp filed Critical Broadcom Corp
Priority to US12/963,014 priority Critical patent/US20110134217A1/en
Priority to US12/962,995 priority patent/US9137513B2/en
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ZHAO, QINGHUA, PAYSON, CHRISTOPHER, HERRICK, JASON, NEUMAN, DARREN
Publication of US20110134217A1 publication Critical patent/US20110134217A1/en
Priority to US14/819,728 priority patent/US9307223B2/en
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/139: Format conversion, e.g. of frame-rate or size
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2213/00: Details of stereoscopic systems
    • H04N2213/007: Aspects relating to detection of stereoscopic image format, e.g. for adaptation to the display format

Definitions

  • Certain embodiments of the invention relate to processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for scaling 3D video.
  • FIG. 1 is a block diagram that illustrates a system-on-chip that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention.
  • FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention.
  • FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention.
  • FIGS. 4A and 4B illustrate format-related variables for left-and-right (L/R) format and over-and-under (O/U) format, respectively, in accordance with embodiments of the invention.
  • FIGS. 5A and 5B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention.
  • FIGS. 6A and 6B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention.
  • FIGS. 7A and 7B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention.
  • FIGS. 8A and 8B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention.
  • FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080p O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention.
  • FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in a processing network configured for scaling 3D video data, in accordance with embodiments of the invention.
  • FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in a configured processing network, in accordance with an embodiment of the invention.
  • FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in a configured processing network, in accordance with an embodiment of the invention.
  • Certain embodiments of the invention may be found in a method and system for scaling 3D video.
  • Various embodiments of the invention relate to an integrated circuit (IC) comprising multiple devices that may be selectively interconnected to route and process 3D video data.
  • the IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory, and selectively interconnect one or more of the devices based on the determination.
  • the selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor.
  • the input format may be a left-and-right (L/R) format or an over-and-under (O/U) format.
  • the output format may be a L/R format or an O/U format.
  • the selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.
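As a rough illustration of this determination, the sketch below chooses where to place the scaler based on which side of the memory round-trip moves fewer pixels. This is a hypothetical heuristic with invented names (`choose_scaling_order`, `burst`), not the circuit the claims describe.

```python
from math import ceil

def choose_scaling_order(ix, iy, ox, oy, burst=8):
    """Decide whether to scale before capture to memory or after the
    captured data is read back. Hypothetical heuristic: put the scaler
    on the side of the DRAM round-trip that carries fewer pixels."""
    scaled_traffic = ceil(ox / burst) * burst * oy      # capture scaled data
    unscaled_traffic = ceil(ix / burst) * burst * iy    # capture raw data
    return "before_capture" if scaled_traffic <= unscaled_traffic else "after_capture"

# Downscaling 1080p to 720p: scaling first reduces the data written to DRAM.
print(choose_scaling_order(1920, 1080, 1280, 720))   # before_capture
print(choose_scaling_order(1280, 720, 1920, 1080))   # after_capture
```

Because the input and output dimensions can change per picture, such a routine could be re-evaluated on a picture-by-picture basis, consistent with the selective interconnection described above.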
  • FIG. 1 is a block diagram that illustrates a system-on-chip (SoC) that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention.
  • the SoC 100 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive and/or process one or more signals that comprise video content, including 3D video content.
  • signals comprising video content that may be received and processed by the SoC 100 include, but need not be limited to, composite, blanking, and sync (CVBS) signals, separate video (S-video) signals, high-definition multimedia interface (HDMI) signals, component signals, personal computer (PC) signals, source input format (SIF) signals, YCrCb, and red, green, blue (RGB) signals.
  • the SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage.
  • output signals from the SoC 100 may be provided to display devices such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), thin film transistor LCDs (TFT-LCDs), light-emitting diode (LED) displays, organic LED (OLED) displays, or other flat-screen display technologies.
  • the characteristics of the output signals, such as pixel rate and/or resolution, for example, may be based on the type of output device to which those signals are to be provided.
  • the host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100 .
  • parameters and/or other information including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100 .
  • the memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100 .
  • the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with 3D video data processing.
  • the SoC 100 may comprise an interface module 102 , a video processor module 104 , and a core processor module 106 .
  • the SoC 100 may be implemented as a single integrated circuit comprising the components listed above.
  • the interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100 .
  • the video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data associated with one or more signals received by the SoC 100 .
  • the video processor module 104 may be operable to support multiple video data formats, including multiple input formats and multiple output formats for 3D video data.
  • the video processor module 104 may be operable to perform various types of operations on 3D video data, including but not limited to format conversion and/or scaling.
  • when the video content comprises audio data, the video processor module 104 and/or another module in the SoC 100 may be operable to handle the audio data.
  • the core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100 .
  • the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content, including but not limited to the processing of 3D video data.
  • the core processor module 106 may be operable to determine and/or calculate parameters associated with the processing of 3D video data that may be utilized to configure and/or operate the video processor module 104 .
  • the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100 .
  • the core processor module 106 may comprise memory that may be utilized during 3D video data processing by the video processor module 104 .
  • the SoC 100 may receive one or more signals comprising 3D video data through the interface module 102 .
  • the video processor module 104 and/or the core processor module 106 may be utilized to determine whether to scale 3D video data in the video processor module 104 before the 3D video data is captured to memory through the video processor module 104 or after the captured 3D video data is retrieved from the memory through the video processor module 104 .
  • the memory into which the 3D video data is to be stored and from which it is to be subsequently retrieved may be a dynamic random access memory (DRAM) that may be part of the memory module 130 and/or of the core processor module 106 , for example.
  • At least a portion of the video processor module 104 may be configured by the host processor module 120 and/or the core processor module 106 according to the determined order in which to scale the 3D video data. Such order may be based on an input format of the 3D video data, an output format of the 3D video data, and on a scaling factor. Moreover, the order in which to scale the 3D video data may be determined on a picture-by-picture basis. That is, the order in which to scale the 3D video data and the corresponding configuration of the video processor module 104 may be carried out for each picture in a video sequence that is received in the SoC. Once processed, the 3D video data may be communicated to one or more output devices by the SoC 100 .
  • the SoC 100 may be operable to handle 3D video data in multiple input formats and multiple output formats.
  • the complexity of the SoC 100 may increase significantly as the number of supported input and output formats grows.
  • An approach that may simplify the SoC 100 and that may enable support for a large number of formats is to convert an input format into one of a subset of formats supported by the SoC for processing and have the SoC 100 perform the processing of the 3D video data in that format. Once the processing is completed, the processed 3D video data may be converted to the appropriate output format if such conversion is necessary.
  • FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention.
  • a first packing scheme or first format 200 for 3D video data and a second packing scheme or second format 210 for 3D video data.
  • Each of the first format 200 and the second format 210 illustrates the arrangement of the left eye content (L) and the right eye content (R) in a 3D video picture.
  • a 3D video picture may refer to a 3D video frame or a 3D video field in a video sequence, whichever is appropriate.
  • the L and R in the first format 200 are arranged in a side-by-side arrangement, which is typically referred to as a left-and-right (L/R) format.
  • the L and R in the second format 210 are arranged in a top-and-bottom arrangement, which is typically referred to as an over-and-under (O/U) format.
  • Another arrangement, not shown in FIG. 2A , may be one in which the L is in a first 3D video picture and the R is in a second 3D video picture. Such an arrangement may be referred to as a sequential format because the 3D video pictures are processed sequentially.
  • Both the first format 200 and the second format 210 may be utilized by the SoC 100 described above to process 3D video data and may be referred to as native formats of the SoC 100 .
  • the SoC 100 may convert that input format to one of the first format 200 and the second format 210 , if such conversion is necessary.
  • the SoC 100 may then process the 3D video data in a native format.
  • the SoC 100 may convert the processed 3D video data into one of the multiple output formats supported by the SoC 100 , if such conversion is necessary.
  • the SoC 100 may also be operable to process 3D video data in the sequential format, which is typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210 .
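The conversion paths above can be summarized in a small lookup. The dictionary keys and value labels below are invented names for the formats of FIGS. 2B and 2D; only the mapping itself follows the conversions described in the text.

```python
# Hypothetical labels mapping supported input packings to the SoC's two
# native formats (first format 200 = L/R, second format 210 = O/U).
NATIVE_FORMAT = {
    "L/R": "first_format_200",
    "line_interleaved": "first_format_200",
    "checkerboard": "first_format_200",
    "O/U": "second_format_210",
    "O/U_x2": "second_format_210",
    "multi_decode": "second_format_210",
    "sequential": "second_format_210",  # handled substantially like O/U
}

def to_native(input_format: str) -> str:
    """Return the native format the given input packing is converted to."""
    return NATIVE_FORMAT[input_format]

print(to_native("checkerboard"))  # first_format_200
print(to_native("sequential"))   # second_format_210
```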
  • an L/R input format 202 a may be converted to the first format 200 , which is also an L/R format.
  • a line interleaved input format 204 a may be converted to the first format 200 .
  • a checkerboard input format 206 a may be converted to the first format 200 .
  • the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the first format 200 .
  • the first format 200 may be converted to an L/R output format 202 b .
  • the first format 200 may be converted to a line interleaved output format 204 b .
  • the first format 200 may be converted to a checkerboard output format 206 b .
  • the SoC 100 may determine the appropriate type of output format to which the first format 200 is to be converted.
  • an O/U input format 212 a may be converted to the second format 210 , which is also an O/U format.
  • an O/U ×2 input format 214 a may be converted to the second format 210 .
  • a multi-decode input format 216 a may be converted to the second format 210 .
  • the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the second format 210 .
  • the second format 210 may be converted to an O/U output format 212 b .
  • the second format 210 may be converted to an O/U ×2 output format 214 b .
  • the SoC 100 may determine the appropriate type of output format to which the second format 210 is to be converted.
  • the conversion operations supported by the SoC 100 may also comprise converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200 .
  • 3D video data may be received in any one of multiple input formats, such as the input formats 202 a , 204 a , 206 a , 212 a , 214 a , and 216 a ( FIGS. 2B and 2D ).
  • resulting processed 3D video data may be generated in any one of multiple output formats, such as the output formats 202 b , 204 b , 206 b , 212 b , and 214 b ( FIGS. 2C and 2E ).
  • the various input formats and output formats described above with respect to FIGS. 2A-2E are provided by way of illustration and not of limitation.
  • the SoC 100 may support additional input formats that may be converted to a native format such as the first format 200 , the second format 210 , and the sequential format.
  • the SoC may support additional output formats to which a native format may be converted.
  • FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention.
  • a processing network 300 that may be part of the video processor module 104 in the SoC 100 , for example.
  • the processing network 300 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to route and process video data, including 3D video data.
  • the processing network 300 may comprise multiple devices, components, modules, blocks, circuits, or the like, that may be selectively interconnected to enable the routing and processing of video data.
  • the various devices, components, modules, blocks, circuits, or the like in the processing network 300 may be dynamically configured and/or dynamically interconnected during the operation of the SoC 100 through one or more signals generated by the core processor module 106 and/or by the host processor module 120 .
  • the configuration and/or the selective interconnection of various portions of the processing network 300 may be performed on a picture-by-picture basis when such an approach is appropriate to handle varying characteristics of the video data.
  • the processing network 300 may comprise an MPEG feeder (MFD) module 302 , multiple video feeder (VFD) modules 304 , an HDMI module 306 , crossbar modules 310 a and 310 b , multiple scaler (SCL) modules 308 , a motion-adaptive deinterlacer (MAD) module 312 , a digital noise reduction (DNR) module 314 , multiple capture (CAP) modules 320 , and two compositor (CMP) modules 322 .
  • A DRAM may be utilized by the processing network 300 to handle storage of video data during various operations.
  • Such DRAM may be part of the memory module 130 described above with respect to FIG. 1 .
  • the DRAM may be part of memory embedded in the SoC 100 .
  • the references to a video encoder (not shown) in FIG. 3A may be associated with hardware and/or software in the SoC 100 that may be utilized after the processing network 300 to further process video data for communication to an output device, such as a display device, for example.
  • Each of the crossbar modules 310 a and 310 b may comprise multiple input ports and multiple output ports.
  • the crossbar modules 310 a and 310 b may be configured such that any one of the input ports may be connected to one or more of the output ports.
  • the crossbar modules 310 a and 310 b may enable pass-through connections 316 between one or more output ports of the crossbar module 310 a and corresponding input ports of the crossbar module 310 b .
  • the crossbar modules 310 a and 310 b may enable feedback connections 318 between one or more output ports of the crossbar module 310 b and corresponding input ports of the crossbar module 310 a .
  • the configuration of the crossbar modules 310 a and/or 310 b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed.
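A minimal model of the crossbar behavior described above (any input port connectable to one or more output ports) might look like the following; the class and port names are illustrative only.

```python
class Crossbar:
    """Toy model of a crossbar: each input port may drive several outputs."""

    def __init__(self):
        self.routes = {}  # input port -> set of connected output ports

    def connect(self, in_port, out_port):
        self.routes.setdefault(in_port, set()).add(out_port)

    def fanout(self, in_port):
        return sorted(self.routes.get(in_port, set()))

# Configure one processing path: feed the MPEG feeder output into a scaler
# and, simultaneously, into a noise-reduction block.
xbar_a = Crossbar()
xbar_a.connect("MFD", "SCL0")
xbar_a.connect("MFD", "DNR")
print(xbar_a.fanout("MFD"))  # ['DNR', 'SCL0']
```

Reconfiguring such a structure amounts to rewriting the route table, which is why the interconnection can be changed dynamically, even per picture.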
  • the MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310 a .
  • the video data read by the MFD module 302 may have been stored in memory after being generated by an MPEG encoder (not shown).
  • Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310 a .
  • the video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300 .
  • the HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310 a .
  • the HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310 a at another data rate.
  • Each SCL module 308 may be operable to scale video data received from the crossbar module 310 a and provide the scaled video data to the crossbar module 310 b .
  • the MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310 a , including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310 b .
  • the DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310 a , including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310 b .
  • the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308 .
  • Each CAP module 320 may be operable to capture video data from the crossbar module 310 b and store the captured video data in memory.
  • Each CMP module 322 may be operable to blend or combine video data received from the crossbar module 310 b with graphics data.
  • FIG. 3A shows one CMP module 322 being provided with a graphics feed Gfxa that is blended by the CMP module 322 with video data received from the crossbar module 310 b before the combination is communicated to a video encoder.
  • another CMP module 322 is provided with a graphics feed Gfxb that is blended by the CMP module 322 with video data received from the crossbar module 310 b before the combination is communicated to a video encoder.
  • the SCL module 308 in a first configuration that may be utilized when the 3D video data scaling comprises scaling down horizontally.
  • the SCL module 308 may comprise a horizontal scaler (HSCL) module 330 , which may be configured to operate first and handles the horizontal scaling (sx) of the video data, and a vertical scaler (VSCL) module 332 , which may be configured to operate after the horizontal scaling and handles the vertical scaling (sy) of the video data.
  • HSCL horizontal scaler
  • VSCL vertical scaler
  • the overall scaling of the SCL module 308 in this configuration may be given by the product sx × sy.
  • the input pixel rate of the SCL module 308 at node “in” is SCL in
  • the output pixel rate of the HSCL module 330 at node “H” is SCL H
  • the output pixel rate of the VSCL module 332 at node “V” is SCL V , which is the same as the output pixel rate of the SCL module 308 at node “out”, SCL out .
  • the SCL module 308 in a second configuration that may be utilized when the 3D video data scaling comprises scaling up horizontally.
  • the VSCL module 332 may be configured to operate first and the HSCL module 330 may be configured to operate after the VSCL module 332 .
  • the overall scaling of the SCL module 308 in this configuration may be given by the product sy × sx.
  • the input pixel rate of the SCL module 308 at node “in” is SCL in
  • the output pixel rate of the VSCL module 332 at node “V” is SCL V
  • the output pixel rate of the HSCL module 330 at node “H” is SCL H , which is the same as the output pixel rate of the SCL module 308 at node “out”, SCL out .
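The two stage orderings can be expressed as a small selection rule. The rationale in the comment (keeping the intermediate pixel rate low) is a common motivation for this arrangement and is stated here as an assumption, not a claim of the patent.

```python
def scaler_stage_order(ix, ox):
    """Choose the SCL stage order: horizontal scaler (HSCL) first when
    scaling down horizontally, vertical scaler (VSCL) first when scaling
    up horizontally, so the wider image never traverses both stages."""
    if ox < ix:                     # horizontal downscale: shrink width first
        return ("HSCL", "VSCL")
    return ("VSCL", "HSCL")         # horizontal upscale: widen last

print(scaler_stage_order(1920, 960))   # ('HSCL', 'VSCL')
print(scaler_stage_order(960, 1920))   # ('VSCL', 'HSCL')
```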
  • the processing network 300 may be utilized to scale and/or process 3D video data received by the SoC 100 in any one of the multiple input formats supported by the SoC 100 , such as those described above with respect to FIGS. 2B and 2D , for example.
  • the scaled and/or processed 3D video data generated by the configured processing network 300 and/or one or more SCL modules 308 may be converted, if necessary, to any one of the multiple output formats supported by the SoC 100 , such as those described above with respect to FIGS. 2C and 2E , for example.
  • FIGS. 4A and 4B illustrate format-related variables for L/R format and O/U format, respectively, in accordance with embodiments of the invention.
  • In FIG. 4A , there is shown a 3D video data picture 400 that illustrates some of the variables associated with a side-by-side or left-and-right arrangement.
  • FIG. 4B shows a 3D video data picture 410 that illustrates the same variables when associated with a top-and-bottom or over-and-under arrangement.
  • a 3D video picture may be scaled up horizontally when ox>ix, may be scaled down horizontally when ox ⁇ ix, may be scaled up vertically when oy>iy, and may be scaled down vertically when oy ⁇ iy.
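The conditions above translate directly into a pair of comparisons; this helper is purely illustrative.

```python
def scaling_direction(ix, iy, ox, oy):
    """Classify horizontal and vertical scaling per the conditions above:
    up when the output dimension exceeds the input, down when smaller."""
    horizontal = "up" if ox > ix else ("down" if ox < ix else "none")
    vertical = "up" if oy > iy else ("down" if oy < iy else "none")
    return horizontal, vertical

print(scaling_direction(1920, 1080, 1280, 720))  # ('down', 'down')
```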
  • the order in which the scaling of the 3D video data occurs with respect to the operations provided by the CAP module 320 and the VFD module 304 may depend on the characteristics of the input format of the 3D video data, the output format of the 3D video data, and the scaling that is to take place. In this regard, there may be bandwidth considerations when determining the appropriate order in which to carry out the scaling of the 3D video data, and consequently, the appropriate configuration of the processing network 300 .
  • bandwidth considerations when determining the appropriate order in which to carry out the scaling of the 3D video data, and consequently, the appropriate configuration of the processing network 300 .
  • FIGS. 5A and 5B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention.
  • a first configuration 500 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
  • the first configuration 500 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310 a .
  • the output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310 b .
  • the scaled 3D video data may be captured by the CAP module 320 and may be stored in a memory 502 .
  • the memory 502 may be a DRAM memory, for example.
  • One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
  • the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • the pixel rate at node “A”, p_rate A is the same as the input pixel rate of the SCL module 308 , SCL in .
  • the pixel rate at node “C”, p_rate C is associated with the output characteristics of the 3D video data.
  • the real time scheduling, cap_rts 1 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • ox is the width of the area of the display in which the input content is to be displayed, as indicated above with respect to FIGS. 4A and 4B .
  • N c is the burst size of the CAP module 320 in number of pixels.
  • the real time scheduling, vfd_rts 1 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • N V is the burst size of the VFD module 304 in number of pixels.
  • a second configuration 510 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
  • the second configuration 510 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
  • the 3D video data may be captured by the CAP module 320 and may be stored in the memory 502 .
  • One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310 a .
  • the output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310 b .
  • the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • the pixel rate at node “C”, p_rate C may be the same as the output pixel rate of the SCL module 308 , SCL out .
  • the pixel rate at node “A”, p_rate A may be associated with the input characteristics of the 3D video data.
  • the real time scheduling, cap_rts 2 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • ix is the width of the area of the picture that is to be cropped and displayed as indicated above with respect to FIGS. 4A and 4B .
  • the real time scheduling, vfd_rts 2 is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • BW 2 /BW 1 ≈ ⌈ox/N⌉/(⌈ix/N⌉ × sy), (15)
  • where BW 1 is the bandwidth associated with the first configuration 500 , BW 2 is the bandwidth associated with the second configuration 510 , and BW 2 /BW 1 is the ratio of the two bandwidths.
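One possible reading of the bandwidth ratio in equation (15) can be evaluated numerically. The grouping of terms is uncertain in the source, so treat this strictly as a sketch; `N` is a burst size in pixels, chosen here arbitrarily.

```python
from math import ceil

def bandwidth_ratio(ix, ox, sy, N=8):
    """Approximate BW2/BW1 under one reading of equation (15):
    ceil(ox/N) / (ceil(ix/N) * sy)."""
    return ceil(ox / N) / (ceil(ix / N) * sy)

# A ratio above 1 would suggest the second configuration (scale after
# capture) costs more memory bandwidth than the first.
print(bandwidth_ratio(1920, 960, 0.5))   # 1.0
print(bandwidth_ratio(1920, 960, 0.25))  # 2.0
```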
  • FIGS. 6A and 6B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention.
  • Referring to FIG. 6A, there is shown a third configuration 600 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
  • the third configuration 600 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A .
  • the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310 a .
  • the output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310 b .
  • the scaled 3D video data may be captured by the CAP module 320 and may be stored in the memory 502 .
  • One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
  • the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • the real time scheduling, cap_rts3, may be determined as follows:
  • cap_rts3 = (ix/p_rate_A · 1/sy) / ⌈ox/N_C⌉.  (16)
  • the real time scheduling, vfd_rts3, may be determined as follows:
  • vfd_rts3 = (ox/p_rate_D) / ⌈ox/N_V⌉,  (17)
  • where p_rate_D may be associated with the output characteristics of the 3D video data.
  • Referring to FIG. 6B, there is shown a fourth configuration 610 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
  • the fourth configuration 610 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B . That is, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310 a and 310 b .
  • the 3D video data may be captured by the CAP module 320 and may be stored in the memory 502 .
  • One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310 a .
  • the output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310 b .
  • the CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • the real time scheduling, cap_rts4, may be determined as follows:
  • cap_rts4 = (ix/p_rate_A) / ⌈ix/N_C⌉.  (18)
  • the real time scheduling, vfd_rts4, may be determined as follows:
  • vfd_rts4 = (ox/p_rate_D · sy) / ⌈ix/N_V⌉,  (19)
  • where the pixel rate at node “D”, p_rate_D, may be the same as the output pixel rate of the SCL module 308, SCL_out.
  • Similarly, the ratio of the two bandwidths, BW2/BW1, may be determined, where BW1 is the bandwidth associated with the third configuration 600 and BW2 is the bandwidth associated with the fourth configuration 610.
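  • The choice between the two placements can be sketched as a bandwidth comparison: count the burst requests each configuration's capture operation issues per picture and prefer the cheaper one. The burst size, the helper names, and the tie-breaking rule below are illustrative assumptions:

```python
import math

def capture_requests_per_picture(width, height, burst):
    """Burst requests a CAP module issues to write one picture to memory."""
    return math.ceil(width / burst) * height

def choose_scaling_position(ix, iy, ox, oy, burst=256):
    """Place the SCL module before the CAP module when capturing the
    scaled picture is cheaper than capturing the unscaled one;
    otherwise scale after the data is retrieved from memory."""
    scale_first = capture_requests_per_picture(ox, oy, burst)    # captures scaled picture
    capture_first = capture_requests_per_picture(ix, iy, burst)  # captures unscaled picture
    return "scale_before_capture" if scale_first <= capture_first else "scale_after_retrieval"

# Downscaling: the scaled picture is smaller, so capture it.
choice = choose_scaling_position(ix=1920, iy=1080, ox=1280, oy=720)  # "scale_before_capture"
```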
  • FIGS. 7A and 7B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention.
  • Referring to FIG. 7A, there is shown a fifth configuration 700 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
  • the fifth configuration 700 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A .
  • the real time scheduling, cap_rts5, may be determined as follows:
  • cap_rts5 = (ix/p_rate_B · 1/sy) / ⌈ox/N_C⌉,  (23)
  • where p_rate_B may be the same as the input pixel rate of the SCL module 308, SCL_in.
  • the real time scheduling, vfd_rts5, may be determined as follows:
  • vfd_rts5 = (ox/p_rate_C) / ⌈ox/N_V⌉.  (24)
  • Referring to FIG. 7B, there is shown a sixth configuration 710 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
  • the sixth configuration 710 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B .
  • the real time scheduling, cap_rts6, may be determined as follows:
  • cap_rts6 = (ix/p_rate_B) / ⌈ix/N_C⌉,  (25)
  • where p_rate_B may be associated with the input characteristics of the 3D video data.
  • the real time scheduling, vfd_rts6, may be determined as follows:
  • vfd_rts6 = (ox/p_rate_C · sy) / ⌈ix/N_V⌉.  (26)
  • Similarly, the ratio of the two bandwidths, BW2/BW1, may be determined, where BW1 is the bandwidth associated with the fifth configuration 700 and BW2 is the bandwidth associated with the sixth configuration 710.
  • FIGS. 8A and 8B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention.
  • Referring to FIG. 8A, there is shown a seventh configuration 800 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation.
  • the seventh configuration 800 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A .
  • the real time scheduling, cap_rts7, may be determined as follows:
  • cap_rts7 = (ix/p_rate_B · 1/sy) / ⌈ox/N_C⌉.  (28)
  • the real time scheduling, vfd_rts7, may be determined as follows:
  • vfd_rts7 = (ox/p_rate_D) / ⌈ox/N_V⌉.  (29)
  • Referring to FIG. 8B, there is shown an eighth configuration 810 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory.
  • the eighth configuration 810 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 .
  • the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B .
  • the real time scheduling, cap_rts8, may be determined as follows:
  • cap_rts8 = (ix/p_rate_B) / ⌈ix/N_C⌉.  (30)
  • the real time scheduling, vfd_rts8, may be determined as follows:
  • vfd_rts8 = (ox/p_rate_D · sy) / ⌈ix/N_V⌉.  (31)
  • Similarly, the ratio of the two bandwidths, BW2/BW1, may be determined, where BW1 is the bandwidth associated with the seventh configuration 800 and BW2 is the bandwidth associated with the eighth configuration 810.
  • FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080 progressive (1080p) O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention.
  • the example shown corresponds to the fifth configuration 700 described above with respect to FIG. 7A .
  • an input picture 900, which is formatted as 1080p O/U 3D video data, is provided to the processing network 300 for scaling and/or processing.
  • the input picture 900 is scaled by a scaling operation 910 that is performed by, for example, one of the SCL modules 308 shown in FIG. 3A .
  • a scaled picture 920 is then captured to memory by a capture operation 930 performed by, for example, one of the CAP modules 320 shown in FIG. 3A .
  • the captured picture is retrieved from memory through a capture retrieval operation 940 performed by, for example, one of the VFD modules 304 shown in FIG. 3A .
  • The retrieval of the captured picture, that is, the manner in which the 3D video data is read from the memory, is performed such that an output picture 950 is generated having a 720p L/R format.
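  • The per-eye geometry of this example can be worked out from the packing alone. The helper below assumes the packed picture carries both eyes within a single raster (so each 1080p O/U eye is 1920×540 and each 720p L/R eye is 640×720); that reading of the example, and the function name, are assumptions:

```python
def eye_dimensions(width, height, packing):
    """Per-eye resolution inside a packed 3D picture: an L/R picture
    splits its width between the eyes, an O/U picture splits its height."""
    if packing == "L/R":
        return width // 2, height
    if packing == "O/U":
        return width, height // 2
    raise ValueError(f"unknown packing: {packing}")

in_eye = eye_dimensions(1920, 1080, "O/U")   # (1920, 540) per eye on input
out_eye = eye_dimensions(1280, 720, "L/R")   # (640, 720) per eye on output

# Per-eye scale factors applied by the scaling operation in this example:
sx = out_eye[0] / in_eye[0]   # 640 / 1920 = 1/3 horizontally
sy = out_eye[1] / in_eye[1]   # 720 / 540 = 4/3 vertically
```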
  • FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in the processing network 300 when configured for scaling 3D video data, in accordance with embodiments of the invention.
  • Referring to FIG. 10A, there is shown a ninth configuration 1000 of the processing network 300 in which the location of the SCL module 308 is before the CAP module 320.
  • the ninth configuration 1000 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300 . In this configuration, additional processing cores or operations may be performed on the 3D video data.
  • a first core (P 1 ) module 1002 may be positioned before the SCL module 308
  • a second core (P 2 ) module 1004 may be positioned after the SCL module 308
  • a third core (P 3 ) module 1006 may be positioned after the VFD module 304 .
  • the various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314 .
  • Other modules not shown in FIG. 3A but that may be included in the processing network 300 , may also be utilized as core modules in the ninth configuration 1000 .
  • Referring to FIG. 10B, there is shown a tenth configuration 1010, which may refer to a particular interconnection and/or operation of several of the modules in the processing network 300.
  • additional processing cores or operations may be performed on the 3D video data.
  • the P 1 module 1002 may be positioned before the CAP module 320 .
  • the P 2 module 1004 may be positioned after the VFD module 304 and before the SCL module 308 .
  • the P 3 module 1006 may be positioned after the SCL module 308 .
  • the various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314 .
  • Other modules not shown in FIG. 3A , but that may be included in the processing network 300 may also be utilized as core modules in the tenth configuration 1010 .
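  • The two orderings above can be written down directly. The stage lists below follow the positions stated for the P1, P2, and P3 core modules in the ninth and tenth configurations; representing the path as a plain list, and the function name, are illustrative:

```python
def build_pipeline(scale_before_capture):
    """Stage order through the processing network for the ninth
    configuration (SCL before CAP) and the tenth configuration
    (SCL after the VFD). "MEM" marks the capture-to-memory /
    retrieve-from-memory boundary."""
    if scale_before_capture:
        # Ninth configuration 1000: P1 before SCL, P2 after SCL, P3 after VFD.
        return ["P1", "SCL", "P2", "CAP", "MEM", "VFD", "P3"]
    # Tenth configuration 1010: P1 before CAP, P2 between VFD and SCL, P3 after SCL.
    return ["P1", "CAP", "MEM", "VFD", "P2", "SCL", "P3"]
```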
  • FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in the configured processing network 300 , in accordance with an embodiment of the invention.
  • Referring to FIG. 11, there is shown a flow chart 1100 in which, at step 1110, the video processor module 104 in the SoC 100 may receive 3D video data from a source of such data.
  • At step 1120, the video processor module 104 and/or the host processor module 120 may determine whether to scale the received 3D video data before it is captured to memory through the video processor module 104 or after it is captured to memory and subsequently retrieved from memory through the video processor module 104.
  • the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A .
  • the configuration may be based on the order or positioning determined in step 1120 regarding the scaling of the 3D video data.
  • the 3D video data may be scaled by the configured processing network in the video processor module 104 .
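  • The steps of flow chart 1100 can be sketched as a small driver. The ProcessingNetwork stub, its method names, and the downscale heuristic in decide_scale_position are illustrative assumptions, not the disclosed implementation:

```python
class ProcessingNetwork:
    """Minimal stand-in for the configurable processing network 300."""

    def decide_scale_position(self, picture):
        # Step 1120: scale before capture when downscaling, so that the
        # smaller, already-scaled picture is what gets written to memory.
        return picture["out_w"] * picture["out_h"] <= picture["in_w"] * picture["in_h"]

    def configure(self, scale_first):
        # Configure the crossbar interconnections accordingly.
        self.scale_first = scale_first

    def scale(self, picture):
        # Produce the scaled picture through the configured path.
        where = "before_capture" if self.scale_first else "after_retrieval"
        return picture["out_w"], picture["out_h"], where

def process_3d_picture(picture, network):
    """Flow chart 1100 sketch: receive (step 1110), decide where
    scaling occurs (step 1120), configure the network, then scale."""
    scale_first = network.decide_scale_position(picture)
    network.configure(scale_first)
    return network.scale(picture)

result = process_3d_picture(
    {"in_w": 1920, "in_h": 1080, "out_w": 1280, "out_h": 720},
    ProcessingNetwork(),
)  # (1280, 720, "before_capture")
```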
  • FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in the configured processing network 300 , in accordance with an embodiment of the invention.
  • Referring to FIG. 12, there is shown a flow chart 1200 in which, at step 1210, the video processor module 104 in the SoC 100 may receive 3D video data from multiple sources of such data.
  • At step 1220, the video processor module 104 and/or the host processor module 120 may determine, for each of the sources, whether to scale the received 3D video data before it is captured to memory through the video processor module 104 or after it is captured to memory and subsequently retrieved from memory through the video processor module 104.
  • the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A .
  • the configuration may be based on the order or positioning determined in step 1220 regarding the scaling of the 3D video data for each of the sources.
  • the processing network may be configured to have multiple paths for processing the 3D video data from the various sources of such data.
  • the 3D video data from each source may be scaled by the configured processing network in the video processor module 104 .
  • Various embodiments of the invention relate to an integrated circuit, such as the SoC 100 described above with respect to FIG. 1 , for example, which may be operable to selectively route and process 3D video data.
  • the processing network 300 described above with respect to FIG. 3A may be utilized in the SoC 100 to route and process 3D video data.
  • the integrated circuit may comprise multiple devices, such as the various modules in the processing network 300 , for example, which may be operable to be selectively interconnected to enable the routing and the processing of 3D video data.
  • the integrated circuit may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory.
  • the integrated circuit may be operable to selectively interconnect one or more of the multiple devices based on the determination.
  • the integrated circuit may be operable to determine the selective interconnection of the one or more of devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.
  • the input format of the 3D video data may be a L/R input format or an O/U input format and the output format of the 3D video data may be a L/R output format or an O/U output format.
  • the integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.
  • the integrated circuit may be operable to determine the selective interconnection of the one or more devices on a picture-by-picture basis.
  • the selectively interconnected devices in the integrated circuit may be operable to horizontally scale the 3D video data and to vertically scale the 3D video data. Moreover, the selectively interconnected devices in the integrated circuit may be operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
  • a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for scaling 3D video.
  • the present invention may be realized in hardware, software, or a combination of hardware and software.
  • the present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
  • a typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • the present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
  • Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.

Abstract

A method and system are provided in which an integrated circuit (IC) comprises multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be a L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE
  • This application makes reference to, claims priority to, and claims the benefit of:
  • U.S. Provisional Patent Application Ser. No. 61/267,729 (Attorney Docket No. 20428US01) filed on Dec. 8, 2009;
    U.S. Provisional Patent Application Ser. No. 61/296,851 (Attorney Docket No. 22866US01) filed on Jan. 20, 2010; and
    U.S. Provisional Patent Application Ser. No. 61/330,456 (Attorney Docket No. 23028US01) filed on May 3, 2010.
  • This application also makes reference to:
  • U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 20428US02) filed on Dec. 8, 2010;
    U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23438US02) filed on Dec. 8, 2010;
    U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23439US02) filed on Dec. 8, 2010; and
    U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23440US02) filed on Dec. 8, 2010.
  • Each of the above referenced applications is hereby incorporated herein by reference in its entirety.
  • FIELD OF THE INVENTION
  • Certain embodiments of the invention relate to processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for scaling 3D video.
  • BACKGROUND OF THE INVENTION
  • The availability and access to 3D video content continues to grow. Such growth has brought about challenges regarding the handling of 3D video content from different types of sources and/or the reproduction of 3D video content on different types of displays.
  • Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
  • BRIEF SUMMARY OF THE INVENTION
  • A system and/or method for scaling 3D video, as set forth more completely in the claims.
  • Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
  • BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
  • FIG. 1 is a block diagram that illustrates a system-on-chip that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention.
  • FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention.
  • FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention.
  • FIGS. 4A and 4B illustrate format-related variables for left-and-right (L/R) format and over-and-under (O/U) format, respectively, in accordance with embodiments of the invention.
  • FIGS. 5A and 5B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention.
  • FIGS. 6A and 6B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention.
  • FIGS. 7A and 7B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention.
  • FIGS. 8A and 8B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention.
  • FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080p O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention.
  • FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in a processing network configured for scaling 3D video data, in accordance with embodiments of the invention.
  • FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in a configured processing network, in accordance with an embodiment of the invention.
  • FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in a configured processing network, in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Certain embodiments of the invention may be found in a method and system for scaling 3D video. Various embodiments of the invention relate to an integrated circuit (IC) comprising multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be a L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.
  • FIG. 1 is a block diagram that illustrates a system-on-chip (SoC) that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown an SoC 100, a host processor module 120, and a memory module 130. The SoC 100 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive and/or process one or more signals that comprise video content, including 3D video content. Examples of signals comprising video content that may be received and processed by the SoC 100 include, but need not be limited to, composite video, blanking, and sync (CVBS) signals, separate video (S-video) signals, high-definition multimedia interface (HDMI) signals, component signals, personal computer (PC) signals, source input format (SIF) signals, YCrCb signals, and red, green, blue (RGB) signals. Such signals may be received by the SoC 100 from one or more video sources communicatively coupled to the SoC 100.
  • The SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage. For example, output signals from the SoC 100 may be provided to display devices such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), thin film transistor LCDs (TFT-LCDs), light emitting diode (LED) displays, organic LED (OLED) displays, or displays based on other flat-screen display technologies. The characteristics of the output signals, such as pixel rate and/or resolution, for example, may be based on the type of output device to which those signals are to be provided.
  • The host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, parameters and/or other information, including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100. The memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100. For example, the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with 3D video data processing.
  • The SoC 100 may comprise an interface module 102, a video processor module 104, and a core processor module 106. The SoC 100 may be implemented as a single integrated circuit comprising the components listed above. The interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100.
  • The video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data associated with one or more signals received by the SoC 100. The video processor module 104 may be operable to support multiple video data formats, including multiple input formats and multiple output formats for 3D video data. The video processor module 104 may be operable to perform various types of operations on 3D video data, including but not limited to format conversion and/or scaling. In some embodiments, when the video content comprises audio data, the video processor module 104, and/or another module in the SoC 100, may be operable to handle the audio data.
  • The core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content, including but not limited to the processing of 3D video data. In this regard, the core processor 106 may be operable to determine and/or calculate parameters associated with the processing of 3D video data that may be utilized to configure and/or operate the video processor module 104. In some embodiments of the invention, the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100. For example, the core processor module 106 may comprise memory that may be utilized during 3D video data processing by the video processor module 104.
  • In operation, the SoC 100 may receive one or more signals comprising 3D video data through the interface module 102. When the 3D video data received in those signals is to be scaled, the video processor module 104 and/or the core processor module 106 may be utilized to determine whether to scale 3D video data in the video processor module 104 before the 3D video data is captured to memory through the video processor module 104 or after the captured 3D video data is retrieved from the memory through the video processor module 104. The memory into which the 3D video data is to be stored and from which it is to be subsequently retrieved may be a dynamic random access memory (DRAM) that may be part of the memory module 130 and/or of the core processor module 106, for example.
  • At least a portion of the video processor module 104 may be configured by the host processor module 120 and/or the core processor module 106 according to the determined order in which to scale the 3D video data. Such order may be based on an input format of the 3D video data, an output format of the 3D video data, and on a scaling factor. Moreover, the order in which to scale the 3D video data may be determined on a picture-by-picture basis. That is, the order in which to scale the 3D video data and the corresponding configuration of the video processor module 104 may be carried out for each picture in a video sequence that is received in the SoC. Once processed, the 3D video data may be communicated to one or more output devices by the SoC 100.
  • As indicated above, the SoC 100 may be operable to handle 3D video data in multiple input formats and multiple output formats. The complexity of the SoC 100, however, may increase significantly as the number of supported input and output formats grows. An approach that may simplify the SoC 100 while still enabling support for a large number of formats is to convert an input format into one of a subset of formats supported by the SoC 100 for processing and have the SoC 100 perform the processing of the 3D video data in that format. Once the processing is completed, the processed 3D video data may be converted to the appropriate output format if such conversion is necessary.
  • FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention. Referring to FIG. 2A, there is shown a first packing scheme or first format 200 for 3D video data and a second packing scheme or second format 210 for 3D video data. Each of the first format 200 and the second format 210 illustrates the arrangement of the left eye content (L) and the right eye content (R) in a 3D video picture. In this regard, a 3D video picture may refer to a 3D video frame or a 3D video field in a video sequence, whichever is appropriate. The L and R in the first format 200 are arranged in a side-by-side arrangement, which is typically referred to as a left-and-right (L/R) format. The L and R in the second format 210 are arranged in a top-and-bottom arrangement, which is typically referred to as an over-and-under (O/U) format. Another arrangement, one not shown in FIG. 2A, may be one in which the L is in a first 3D video picture and the R is in a second 3D video picture. Such arrangement may be referred to as a sequential format because the 3D video pictures are processed sequentially.
  • Both the first format 200 and the second format 210 may be utilized by the SoC 100 described above to process 3D video data and may be referred to as native formats of the SoC 100. When 3D video data is received in one of the multiple input formats supported by the SoC 100, the SoC 100 may convert that input format to one of the first format 200 and the second format 210, if such conversion is necessary. The SoC 100 may then process the 3D video data in a native format. Once the 3D video data is processed, the SoC 100 may convert the processed 3D video data into one of the multiple output formats supported by the SoC 100, if such conversion is necessary. The SoC 100 may also be operable to process 3D video data in the sequential format, which is typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210.
  • Referring to FIG. 2B, there is shown a conversion mapping of certain input formats 202 a, 204 a, and 206 a supported by the SoC 100 to the first format 200. For example, an L/R input format 202 a may be converted to the first format 200, which is also an L/R format. In another example, a line interleaved input format 204 a may be converted to the first format 200. In yet another example, a checkerboard input format 206 a may be converted to the first format 200. In each of these scenarios, the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the first format 200.
  • Referring to FIG. 2C, there is shown a conversion mapping of the first format 200 to certain output formats 202 b, 204 b, and 206 b supported by the SoC 100. For example, the first format 200 may be converted to an L/R output format 202 b. In another example, the first format 200 may be converted to a line interleaved output format 204 b. In yet another example, the first format 200 may be converted to a checkerboard output format 206 b. In each of these scenarios, the SoC 100 may determine the appropriate type of output format to which the first format 200 is to be converted.
  • Referring to FIG. 2D, there is shown a conversion mapping of certain input formats 212 a, 214 a, and 216 a supported by the SoC 100 to the second format 210. For example, an O/U input format 212 a may be converted to the second format 210, which is also an O/U format. In another example, an O/U ×2 input format 214 a may be converted to the second format 210. In yet another example, a multi-decode input format 216 a may be converted to the second format 210. In each of these scenarios, the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the second format 210.
  • Referring to FIG. 2E, there is shown a conversion mapping of the second format 210 to certain output formats 212 b and 214 b supported by the SoC 100. For example, the second format 210 may be converted to an O/U output format 212 b. In another example, the second format 210 may be converted to an O/U ×2 output format 214 b. In each of these scenarios, the SoC 100 may determine the appropriate type of output format to which the second format 210 is to be converted.
  • The conversion operations supported by the SoC 100 may also comprise converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200. In this manner, 3D video data may be received in any one of multiple input formats, such as the input formats 202 a, 204 a, 206 a, 212 a, 214 a, and 216 a (FIGS. 2B and 2D). Accordingly, resulting processed 3D video data may be generated in any one of multiple output formats, such as the output formats 202 b, 204 b, 206 b, 212 b, and 214 b (FIGS. 2C and 2E).
  • The various input formats and output formats described above with respect to FIGS. 2A-2E are provided by way of illustration and not of limitation. The SoC 100 may support additional input formats that may be converted to a native format such as the first format 200, the second format 210, and the sequential format. Similarly, the SoC may support additional output formats to which a native format may be converted.
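The two-stage conversion flow described above (input format to native format, then native format to output format) can be sketched as a pair of lookup tables. This is an illustrative model only; the format labels and function name below are hypothetical and not taken from the patent.

```python
# Input formats map to a native format (FIGS. 2B and 2D); each native
# format reaches a set of output formats (FIGS. 2C and 2E). Labels here
# are illustrative shorthand for the formats named in the text.

NATIVE_FOR_INPUT = {
    "L/R": "L/R",               # side-by-side inputs map to L/R
    "line-interleaved": "L/R",
    "checkerboard": "L/R",
    "O/U": "O/U",               # stacked inputs map to O/U
    "O/U x2": "O/U",
    "multi-decode": "O/U",
}

OUTPUTS_FOR_NATIVE = {
    "L/R": {"L/R", "line-interleaved", "checkerboard"},
    "O/U": {"O/U", "O/U x2"},
}

def convert(input_fmt: str, output_fmt: str) -> str:
    """Return the native format used internally for this conversion."""
    native = NATIVE_FOR_INPUT[input_fmt]
    # Native-to-native conversion (L/R <-> O/U) is also supported, so
    # any supported output can be reached from either native format.
    if output_fmt not in OUTPUTS_FOR_NATIVE[native]:
        native = "O/U" if native == "L/R" else "L/R"
    return native

print(convert("checkerboard", "L/R"))       # -> L/R
print(convert("line-interleaved", "O/U"))   # -> O/U
```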
  • FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention. Referring to FIG. 3A, there is shown a processing network 300 that may be part of the video processor module 104 in the SoC 100, for example. The processing network 300 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to route and process video data, including 3D video data. In this regard, the processing network 300 may comprise multiple devices, components, modules, blocks, circuits, or the like, that may be selectively interconnected to enable the routing and processing of video data. The various devices, components, modules, blocks, circuits, or the like in the processing network 300 may be dynamically configured and/or dynamically interconnected during the operation of the SoC 100 through one or more signals generated by the core processor module 106 and/or by the host processor module 120. In this regard, the configuration and/or the selective interconnection of various portions of the processing network 300 may be performed on a picture-by-picture basis when such an approach is appropriate to handle varying characteristics of the video data.
  • In the embodiment of the invention described in FIG. 3A, the processing network 300 may comprise an MPEG feeder (MFD) module 302, multiple video feeder (VFD) modules 304, an HDMI module 306, crossbar modules 310 a and 310 b, multiple scaler (SCL) modules 308, a motion-adaptive deinterlacer (MAD) module 312, a digital noise reduction (DNR) module 314, multiple capture (CAP) modules 320, and two compositor (CMP) modules 322. Each of the above-listed components may be operable to handle video data, including 3D video data. The references to a memory (not shown) in FIG. 3A may be associated with a DRAM utilized by the processing network 300 to handle storage of video data during various operations. Such DRAM may be part of the memory module 130 described above with respect to FIG. 1. In some instances, the DRAM may be part of memory embedded in the SoC 100. The references to a video encoder (not shown) in FIG. 3A may be associated with hardware and/or software in the SoC 100 that may be utilized after the processing network 300 to further process video data for communication to an output device, such as a display device, for example.
  • Each of the crossbar modules 310 a and 310 b may comprise multiple input ports and multiple output ports. The crossbar modules 310 a and 310 b may be configured such that any one of the input ports may be connected to one or more of the output ports. The crossbar modules 310 a and 310 b may enable pass-through connections 316 between one or more output ports of the crossbar module 310 a and corresponding input ports of the crossbar module 310 b. Moreover, the crossbar modules 310 a and 310 b may enable feedback connections 318 between one or more output ports of the crossbar module 310 b and corresponding input ports of the crossbar module 310 a. The configuration of the crossbar modules 310 a and/or 310 b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed.
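A minimal software model of the crossbar behavior just described, assuming a simple output-port-to-input-port routing table in which one input may fan out to several outputs; the class and port names are hypothetical.

```python
# Illustrative crossbar model: any input port may be connected to one
# or more output ports, and each output port has exactly one driver.

class Crossbar:
    def __init__(self, name: str):
        self.name = name
        self.routes = {}  # output port -> driving input port

    def connect(self, in_port: str, out_ports: list):
        # One input may drive several outputs (fan-out); connecting an
        # output a second time re-routes it to the new input.
        for out in out_ports:
            self.routes[out] = in_port

    def source_of(self, out_port: str):
        return self.routes.get(out_port)

xbar_a = Crossbar("310a")
# Route the MFD feed to a scaler input, mirroring the path in FIG. 5A.
xbar_a.connect("MFD", ["SCL0_in"])
print(xbar_a.source_of("SCL0_in"))  # -> MFD
```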
  • The MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310 a. The video data read by the MFD module 302 may have been stored in memory after being generated by an MPEG encoder (not shown). Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310 a. The video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300. The HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310 a. The HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310 a at another data rate.
  • Each SCL module 308 may be operable to scale video data received from the crossbar module 310 a and provide the scaled video data to the crossbar module 310 b. The MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310 a, including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310 b. The DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310 a, including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310 b. In some embodiments of the invention, the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308.
  • Each CAP module 320 may be operable to capture video data from the crossbar module 310 b and store the captured video data in memory. Each CMP module 322 may be operable to blend or combine video data received from the crossbar module 310 b with graphics data. For example, FIG. 3A shows one CMP module 322 being provided with a graphics feed Gfxa that is blended by the CMP module 322 with video data received from the crossbar module 310 b before the combination is communicated to a video encoder. Similarly, another CMP module 322 is provided with a graphics feed Gfxb that is blended by the CMP module 322 with video data received from the crossbar module 310 b before the combination is communicated to a video encoder.
  • Referring to FIG. 3B, there is shown the SCL module 308 in a first configuration that may be utilized when the 3D video data scaling comprises scaling down horizontally. In this configuration, the SCL module 308 may comprise a horizontal scaler (HSCL) module 330, which may be configured to operate first and to handle the horizontal scaling (sx) of the video data, and a vertical scaler (VSCL) module 332, which may be configured to operate after the horizontal scaling and to handle the vertical scaling (sy) of the video data. The overall scaling of the SCL module 308 in this configuration may be given by the product sx·sy. The input pixel rate of the SCL module 308 at node “in” is SCLin, the output pixel rate of the HSCL module 330 at node “H” is SCLH, and the output pixel rate of the VSCL module 332 at node “V” is SCLV, which is the same as the output pixel rate of the SCL module 308 at node “out”, SCLout.
  • Referring to FIG. 3C, there is shown the SCL module 308 in a second configuration that may be utilized when the 3D video data scaling comprises scaling up horizontally. In this configuration, the VSCL module 332 may be configured to operate first and the HSCL module 330 may be configured to operate after the VSCL module 332. The overall scaling of the SCL module 308 in this configuration may be given by the product sy·sx. The input pixel rate of the SCL module 308 at node “in” is SCLin, the output pixel rate of the VSCL module 332 at node “V” is SCLV, and the output pixel rate of the HSCL module 330 at node “H” is SCLH, which is the same as the output pixel rate of the SCL module 308 at node “out”, SCLout.
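The two scaler orderings in FIGS. 3B and 3C can be sketched as follows, assuming for illustration that the horizontal scale factor alone selects the ordering (HSCL first when scaling down horizontally, VSCL first when scaling up); the function names are illustrative, not from the patent.

```python
# Stage ordering per FIGS. 3B/3C: scaling down horizontally runs HSCL
# first, which lowers the pixel rate entering the vertical stage;
# scaling up horizontally runs VSCL first.

def scaler_order(sx: float) -> list:
    return ["HSCL", "VSCL"] if sx < 1.0 else ["VSCL", "HSCL"]

def pixel_rates(scl_in: float, sx: float, sy: float) -> dict:
    """Pixel rates at each node; the overall scaling is sx * sy."""
    rates = {"in": scl_in}
    rate = scl_in
    for stage in scaler_order(sx):
        rate *= sx if stage == "HSCL" else sy
        rates[stage] = rate      # SCL_H or SCL_V at that node
    rates["out"] = rate          # SCL_out = SCL_in * sx * sy
    return rates

print(scaler_order(0.5))                     # -> ['HSCL', 'VSCL']
print(pixel_rates(100.0, 0.5, 0.5)["out"])   # -> 25.0
```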
  • By configuring the processing network 300 and/or one or more of the SCL modules 308, the processing network 300 may be utilized to scale and/or process 3D video data received by the SoC 100 in any one of the multiple input formats supported by the SoC 100, such as those described above with respect to FIGS. 2B and 2D, for example. Similarly, the scaled and/or processed 3D video data generated by the configured processing network 300 and/or one or more SCL modules 308 may be converted, if necessary, to any one of the multiple output formats supported by the SoC 100, such as those described above with respect to FIGS. 2C and 2E, for example.
  • FIGS. 4A and 4B illustrate format-related variables for L/R format and O/U format, respectively, in accordance with embodiments of the invention. Referring to FIG. 4A, there is shown a 3D video data picture 400 that illustrates some of the variables associated with a side-by-side or left-and-right arrangement. FIG. 4B shows a 3D video data picture 410 that illustrates the same variables when associated with a top-and-bottom or over-and-under arrangement. For example, when the picture 400 or the picture 410 is associated with an input format, such as before the 3D video data is scaled and/or processed by the processing network 300, the variables may be described as follows: xtot=ixtot is the total width of the picture, ytot=iytot is the total height of the picture, xact=ixact is the active width of the picture, yact=iyact is the active height of the picture, x=ix is the width of the area of the picture that is to be cropped and displayed, and y=iy is the height of the area of the picture that is to be cropped and displayed.
  • When the picture 400 or the picture 410 is associated with an output format, such as after the 3D video data is scaled and/or processed by the processing network 300, the variables may be described as follows: xtot=oxtot is the total width of the picture, ytot=oytot is the total height of the picture, xact=oxact is the active width of the picture, yact=oyact is the active height of the picture, x=ox is the width of the area on the display in which the input content is to be displayed, and y=oy is the height of the area on the display in which the input content is to be displayed.
  • Based on the variables described in FIGS. 4A and 4B, a 3D video picture may be scaled up horizontally when ox>ix, may be scaled down horizontally when ox<ix, may be scaled up vertically when oy>iy, and may be scaled down vertically when oy<iy.
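The comparisons above translate directly into code; a minimal sketch, where ix/iy are the cropped input width and height and ox/oy the display width and height, as in FIGS. 4A and 4B.

```python
# Determine, per dimension, whether a 3D video picture is scaled up,
# scaled down, or left unscaled, per the conditions in the text.

def scaling_direction(ix: int, iy: int, ox: int, oy: int) -> tuple:
    horizontal = "up" if ox > ix else ("down" if ox < ix else "none")
    vertical = "up" if oy > iy else ("down" if oy < iy else "none")
    return horizontal, vertical

print(scaling_direction(ix=1920, iy=1080, ox=1280, oy=720))
# -> ('down', 'down')
```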
  • When 3D video data received by the SoC 100 is scaled utilizing the processing network 300, the order in which the scaling of the 3D video data occurs with respect to the operations provided by the CAP module 320 and the VFD module 304 may depend on the characteristics of the input format of the 3D video data, the output format of the 3D video data, and the scaling that is to take place. In this regard, there may be bandwidth considerations when determining the appropriate order in which to carry out the scaling of the 3D video data, and consequently, the appropriate configuration of the processing network 300. Below are provided various scenarios that describe the selection of the order or positioning of the scaling operation in a sequence of operations that may be performed on 3D video data by the processing network 300.
  • FIGS. 5A and 5B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention. Referring to FIG. 5A, there is shown a first configuration 500 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The first configuration 500 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. For example, in the first configuration 500, the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310 a. The output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310 b. The scaled 3D video data may be captured by the CAP module 320 and may be stored in a memory 502. The memory 502 may be a DRAM memory, for example. One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310 a and 310 b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • In the first configuration 500, the pixel rate at node “A”, p_rateA, is the same as the input pixel rate of the SCL module 308, SCLin. The output pixel rate of the SCL module 308 is SCLout=SCLin·sx·sy=p_rateA·sx·sy. Moreover, the pixel rate at node “C”, p_rateC, is associated with the output characteristics of the 3D video data.
  • With respect to the CAP module 320 in the first configuration 500, the real time scheduling, cap_rts1, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • $$n\_req = \frac{ox}{N_C}, \qquad (1)$$
    $$t\_n\_req = \frac{ox}{SCL_{out}} = \frac{ox}{p\_rate_A \cdot sx \cdot sy} = \frac{ix}{p\_rate_A} \cdot \frac{1}{sy}, \qquad (2)$$
    $$cap\_rts_1 = \frac{t\_n\_req}{n\_req} = \left( \frac{ix}{p\_rate_A} \cdot \frac{1}{sy} \right) \Big/ \frac{ox}{N_C}, \qquad (3)$$
  • where ox is the width of the area on the display in which the input content is to be displayed, as indicated above with respect to FIGS. 4A and 4B, and N_C is the burst size of the CAP module 320 in number of pixels.
  • With respect to the VFD module 304 in the first configuration 500, the real time scheduling, vfd_rts1, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • $$n\_req = \frac{ox}{N_V}, \qquad (4)$$
    $$t\_n\_req = \frac{ox}{p\_rate_C}, \qquad (5)$$
    $$vfd\_rts_1 = \frac{t\_n\_req}{n\_req} = \frac{ox}{p\_rate_C} \Big/ \frac{ox}{N_V}, \qquad (6)$$
  • where N_V is the burst size of the VFD module 304 in number of pixels.
  • Referring to FIG. 5B, there is shown a second configuration 510 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The second configuration 510 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. For example, in the second configuration 510, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310 a and 310 b. The 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310 a. The output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310 b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • In the second configuration 510, the pixel rate at node “C”, p_rateC, may be the same as the output pixel rate of the SCL module 308, SCLout. The input pixel rate of the SCL module 308 may be SCLin=SCLout/(sx·sy)=p_rateC/(sx·sy). Moreover, the pixel rate at node “A”, p_rateA, may be associated with the input characteristics of the 3D video data.
  • With respect to the CAP module 320 in the second configuration 510, the real time scheduling, cap_rts2, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • $$n\_req = \frac{ix}{N_C}, \qquad (7)$$
    $$t\_n\_req = \frac{ix}{p\_rate_A}, \qquad (8)$$
    $$cap\_rts_2 = \frac{t\_n\_req}{n\_req} = \frac{ix}{p\_rate_A} \Big/ \frac{ix}{N_C}, \qquad (9)$$
  • where ix is the width of the area of the picture that is to be cropped and displayed as indicated above with respect to FIGS. 4A and 4B.
  • With respect to the VFD module 304 in the second configuration 510, the real time scheduling, vfd_rts2, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
  • $$n\_req = \frac{ix}{N_V}, \qquad (10)$$
    $$t\_n\_req = \frac{ix}{SCL_{in}} = \frac{ix}{p\_rate_C/(sx \cdot sy)} = \frac{ox \cdot sy}{p\_rate_C}, \qquad (11)$$
    $$vfd\_rts_2 = \frac{t\_n\_req}{n\_req} = \frac{ox \cdot sy}{p\_rate_C} \Big/ \frac{ix}{N_V}. \qquad (12)$$
  • A decision or selection as to whether to perform the scaling operation before capture, as in the first configuration 500, or after the captured data is retrieved from memory, as in the second configuration 510, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N_C = N_V = N), the bandwidth calculations may be determined as follows:
  • $$BW_1 = cap\_rts_1 + vfd\_rts_1 = \left( \frac{ix}{p\_rate_A} \cdot \frac{1}{sy} + \frac{ox}{p\_rate_C} \right) \Big/ \frac{ox}{N}, \qquad (13)$$
    $$BW_2 = cap\_rts_2 + vfd\_rts_2 = \left( \frac{ix}{p\_rate_A} + \frac{ox \cdot sy}{p\_rate_C} \right) \Big/ \frac{ix}{N}, \qquad (14)$$
    $$\lambda = \frac{BW_2}{BW_1} = \frac{ox/N}{ix/N} \cdot sy, \qquad (15)$$
  • where BW1 is the bandwidth associated with the first configuration 500, BW2 is the bandwidth associated with the second configuration 510, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the first configuration 500, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the second configuration 510.
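Equations (1) through (15) can be checked numerically with a short sketch, assuming equal burst sizes (N_C = N_V = N) as in the text; the function names and sample rate values below are illustrative, not from the patent.

```python
# cap/vfd real-time scheduling values (time available per request) and
# the resulting bandwidths of the two configurations; the ratio
# lambda = BW2/BW1 decides where the scaler is positioned.

def bandwidths(ix, ox, sy, p_rate_a, p_rate_c, n):
    # First configuration (scale before capture), eqs (3) and (6):
    cap_rts1 = (ix / p_rate_a / sy) / (ox / n)
    vfd_rts1 = (ox / p_rate_c) / (ox / n)
    # Second configuration (scale after the feeder), eqs (9) and (12):
    cap_rts2 = (ix / p_rate_a) / (ix / n)
    vfd_rts2 = (ox * sy / p_rate_c) / (ix / n)
    return cap_rts1 + vfd_rts1, cap_rts2 + vfd_rts2  # eqs (13), (14)

def choose_scaler_position(ix, ox, sy):
    lam = (ox / ix) * sy  # eq (15): the burst sizes cancel
    return "before capture" if lam < 1 else "after feeder"

# Horizontal and vertical downscaling: lambda < 1.
print(choose_scaler_position(ix=1920, ox=960, sy=0.5))
# -> before capture
```

Note that the pixel rates p_rate_A and p_rate_C drop out of the ratio: BW2/BW1 reduces algebraically to (ox/ix)·sy, so only the widths and the vertical scale factor matter.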
  • FIGS. 6A and 6B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention. Referring to FIG. 6A, there is shown a third configuration 600 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The third configuration 600 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In the third configuration 600, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A. That is, the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310 a. The output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310 b. The scaled 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310 a and 310 b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • With respect to the CAP module 320 in the third configuration 600, the real time scheduling, cap_rts3, may be determined as follows:
  • $$cap\_rts_3 = \left( \frac{ix}{p\_rate_A} \cdot \frac{1}{sy} \right) \Big/ \frac{ox}{N_C}. \qquad (16)$$
  • With respect to the VFD module 304 in the third configuration 600, the real time scheduling, vfd_rts3, may be determined as follows:
  • $$vfd\_rts_3 = \frac{ox}{p\_rate_D} \Big/ \frac{ox}{N_V}, \qquad (17)$$
  • where the pixel rate at node “D”, p_rateD, may be associated with the output characteristics of the 3D video data.
  • Referring to FIG. 6B, there is shown a fourth configuration 610 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The fourth configuration 610 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In the fourth configuration 610, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B. That is, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310 a and 310 b. The 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310 a. The output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310 b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.
  • With respect to the CAP module 320 in the fourth configuration 610, the real time scheduling, cap_rts4, may be determined as follows:
  • $$cap\_rts_4 = \frac{ix}{p\_rate_A} \Big/ \frac{ix}{N_C}. \qquad (18)$$
  • With respect to the VFD module 304 in the fourth configuration 610, the real time scheduling, vfd_rts4, may be determined as follows:
  • $$vfd\_rts_4 = \frac{ox \cdot sy}{p\_rate_D} \Big/ \frac{ix}{N_V}, \qquad (19)$$
  • where the pixel rate at node “D”, p_rateD, may be the same as the output pixel rate of the SCL module 308, SCLout.
  • A decision or selection as to whether to perform the scaling operation before capture, as in the third configuration 600, or after the captured data is retrieved from memory, as in the fourth configuration 610, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N_C = N_V = N), the following ratio may be determined:
  • $$\lambda = \frac{BW_2}{BW_1} = \frac{ox/N}{ix/N} \cdot sy, \qquad (22)$$
  • where BW1 is the bandwidth associated with the third configuration 600, BW2 is the bandwidth associated with the fourth configuration 610, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the third configuration 600, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the fourth configuration 610.
  • FIGS. 7A and 7B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention. Referring to FIG. 7A, there is shown a fifth configuration 700 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The fifth configuration 700 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A.
  • With respect to the CAP module 320 in the fifth configuration 700, the real time scheduling, cap_rts5, may be determined as follows:
  • $$cap\_rts_5 = \left( \frac{ix}{p\_rate_B} \cdot \frac{1}{sy} \right) \Big/ \frac{ox}{N_C}, \qquad (23)$$
  • where the pixel rate at node “B”, p_rateB, may be the same as the input pixel rate of the SCL module 308, SCLin.
  • With respect to the VFD module 304 in the fifth configuration 700, the real time scheduling, vfd_rts5, may be determined as follows:
  • $$vfd\_rts_5 = \frac{ox}{p\_rate_C} \Big/ \frac{ox}{N_V}. \qquad (24)$$
  • Referring to FIG. 7B, there is shown a sixth configuration 710 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The sixth configuration 710 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B.
  • With respect to the CAP module 320 in the sixth configuration 710, the real time scheduling, cap_rts6, may be determined as follows:
  • $$cap\_rts_6 = \frac{ix}{p\_rate_B} \Big/ \frac{ix}{N_C}, \qquad (25)$$
  • where the pixel rate at node “B”, p_rateB, may be associated with the input characteristics of the 3D video data.
  • With respect to the VFD module 304 in the sixth configuration 710, the real time scheduling, vfd_rts6, may be determined as follows:
  • $$vfd\_rts_6 = \frac{ox \cdot sy}{p\_rate_C} \Big/ \frac{ix}{N_V}. \qquad (26)$$
  • A decision or selection as to whether to perform the scaling operation before capture, as in the fifth configuration 700, or after the captured data is retrieved from memory, as in the sixth configuration 710, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N_C = N_V = N), the following ratio may be determined:
  • $$\lambda = \frac{BW_2}{BW_1} = \frac{ox/N}{ix/N} \cdot sy, \qquad (27)$$
  • where BW1 is the bandwidth associated with the fifth configuration 700, BW2 is the bandwidth associated with the sixth configuration 710, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the fifth configuration 700, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the sixth configuration 710.
  • FIGS. 8A and 8B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention. Referring to FIG. 8A, there is shown a seventh configuration 800 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The seventh configuration 800 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A.
  • With respect to the CAP module 320 in the seventh configuration 800, the real time scheduling, cap_rts7, may be determined as follows:
  • $$cap\_rts_7 = \left( \frac{ix}{p\_rate_B} \cdot \frac{1}{sy} \right) \Big/ \frac{ox}{N_C}. \qquad (28)$$
  • With respect to the VFD module 304 in the seventh configuration 800, the real time scheduling, vfd_rts7, may be determined as follows:
  • $$vfd\_rts_7 = \frac{ox}{p\_rate_D} \Big/ \frac{ox}{N_V}. \qquad (29)$$
  • Referring to FIG. 8B, there is shown an eighth configuration 810 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The eighth configuration 810 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B.
  • With respect to the CAP module 320 in the eighth configuration 810, the real time scheduling, cap_rts8, may be determined as follows:
  • $$cap\_rts_8 = \frac{ix}{p\_rate_B} \Big/ \frac{ix}{N_C}. \qquad (30)$$
  • With respect to the VFD module 304 in the eighth configuration 810, the real time scheduling, vfd_rts8, may be determined as follows:
  • $$vfd\_rts_8 = \frac{ox \cdot sy}{p\_rate_D} \Big/ \frac{ix}{N_V}. \qquad (31)$$
  • A decision or selection as to whether to perform the scaling operation before capture, as in the seventh configuration 800, or after the captured data is retrieved from memory, as in the eighth configuration 810, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N_C = N_V = N), the following ratio may be determined:
  • $$\lambda = \frac{BW_2}{BW_1} = \frac{ox/N}{ix/N} \cdot sy, \qquad (32)$$
  • where BW1 is the bandwidth associated with the seventh configuration 800, BW2 is the bandwidth associated with the eighth configuration 810, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the seventh configuration 800, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the eighth configuration 810.
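Notably, equations (15), (22), (27), and (32) are identical: the burst size N cancels, so the placement decision reduces to the same ratio in all four format pairings. A minimal sketch of the resulting rule; the function name is illustrative.

```python
# Unified placement rule across the L/R and O/U pairings: the scaler
# goes before the CAP module when lambda = (ox * sy) / ix < 1, and
# after the VFD module otherwise.

def scale_before_capture(ix: int, ox: int, sy: float) -> bool:
    return (ox * sy) / ix < 1.0

# Downscaling favors scaling before capture; upscaling favors after.
print(scale_before_capture(1920, 960, 0.5))   # -> True
print(scale_before_capture(960, 1920, 2.0))   # -> False
```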
  • FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080 progressive (1080p) O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention. Referring to FIG. 9, the example shown corresponds to the fifth configuration 700 described above with respect to FIG. 7A. In this example, an input picture 900, which is formatted as 1080p O/U 3D video data, is provided to the processing network 300 for scaling and/or processing. The input picture 900 is scaled by a scaling operation 910 that is performed by, for example, one of the SCL modules 308 shown in FIG. 3A. A scaled picture 920 is then captured to memory by a capture operation 930 performed by, for example, one of the CAP modules 320 shown in FIG. 3A. The captured picture is retrieved from memory through a capture retrieval operation 940 performed by, for example, one of the VFD modules 304 shown in FIG. 3A. The retrieval of the captured picture, that is, the manner in which the 3D video data is read from the memory, is performed such that an output picture 950 is generated having a 720p L/R format.
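As a hedged numeric check of this example, assume per-eye input dimensions of 1920×1080 (1080p O/U) and per-eye output dimensions of 640×720 (720p L/R, both eyes packed side by side into a 1280-wide picture); these concrete numbers are illustrative and are not stated in the patent.

```python
# Plugging assumed FIG. 9 dimensions into the lambda ratio from
# equations (15)-(32) to confirm capture-side scaling is selected.

ix, iy = 1920, 1080   # cropped per-eye input width/height (assumed)
ox, oy = 640, 720     # per-eye display width/height (assumed)
sx, sy = ox / ix, oy / iy
lam = (ox * sy) / ix  # burst size N cancels out of the ratio

print(round(sx, 3), round(sy, 3))  # -> 0.333 0.667
print(lam < 1)  # -> True: scale before capture, as in FIG. 9
```

With these numbers λ ≈ 0.22, which is consistent with the fifth configuration 700 (scaling before capture) used in FIG. 9.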
  • FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in the processing network 300 when configured for scaling 3D video data, in accordance with embodiments of the invention. Referring to FIG. 10A, there is shown a ninth configuration 1000 of the processing network 300 in which the location of the SCL module 308 is before the CAP module 320. The ninth configuration 1000 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, additional processing cores or operations may be performed on the 3D video data. For example, a first core (P1) module 1002 may be positioned before the SCL module 308, while a second core (P2) module 1004 may be positioned after the SCL module 308. Moreover, a third core (P3) module 1006 may be positioned after the VFD module 304. The various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314. Other modules not shown in FIG. 3A, but that may be included in the processing network 300, may also be utilized as core modules in the ninth configuration 1000.
  • Referring to FIG. 10B, there is shown a tenth configuration 1010 of the processing network 300 in which the location of the SCL module 308 is after the VFD module 304. The tenth configuration 1010 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, additional processing cores or operations may be performed on the 3D video data. For example, the P1 module 1002 may be positioned before the CAP module 320. The P2 module 1004 may be positioned after the VFD module 304 and before the SCL module 308. Moreover, the P3 module 1006 may be positioned after the SCL module 308. As indicated above, the various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314. Other modules not shown in FIG. 3A, but that may be included in the processing network 300, may also be utilized as core modules in the tenth configuration 1010.
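The two orderings can be summarized as module sequences, with memory shown as the boundary between the capture side and the feed side; the list representation and stage labels are illustrative, not an API of the processing network:

```python
def build_pipeline(scale_before_capture):
    """Module order for the ninth and tenth configurations, with the
    P1-P3 core stages (e.g., MAD or DNR modules) interleaved."""
    if scale_before_capture:
        # Ninth configuration 1000: P1 before SCL, P2 after SCL, P3 after VFD.
        return ["P1", "SCL", "P2", "CAP", "memory", "VFD", "P3"]
    # Tenth configuration 1010: P1 before CAP, P2 between VFD and SCL, P3 after SCL.
    return ["P1", "CAP", "memory", "VFD", "P2", "SCL", "P3"]

print(build_pipeline(True))
print(build_pipeline(False))
```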
  • FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in the configured processing network 300, in accordance with an embodiment of the invention. Referring to FIG. 11, there is shown a flow chart 1100 in which, at step 1110, the video processor module 104 in the SoC 100 may receive 3D video data from a source of such data. At step 1120, the video processor module 104 and/or the host processor module 120 may determine whether to scale the received 3D video data before it is captured to memory through the video processor module 104 or after it is captured to memory and subsequently retrieved from memory through the video processor module 104.
  • At step 1130, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A. The configuration may be based on the order or positioning determined in step 1120 regarding the scaling of the 3D video data. At step 1140, the 3D video data may be scaled by the configured processing network in the video processor module 104.
  • FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in the configured processing network 300, in accordance with an embodiment of the invention. Referring to FIG. 12, there is shown a flow chart 1200 in which, at step 1210, the video processor module 104 in the SoC 100 may receive 3D video data from multiple sources of such data. At step 1220, the video processor module 104 and/or the host processor module 120 may determine, for each of the sources, whether to scale the received 3D video data before it is captured to memory through the video processor module 104 or after it is captured to memory and subsequently retrieved from memory through the video processor module 104.
  • At step 1230, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A. The configuration may be based on the order or positioning determined in step 1220 regarding the scaling of the 3D video data for each of the sources. In this regard, the processing network may be configured to have multiple paths for processing the 3D video data from the various sources of such data. At step 1240, the 3D video data from each source may be scaled by the configured processing network in the video processor module 104.
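The per-source decision of steps 1210 through 1240 can be sketched as follows, assuming the simplified bandwidth ratio λ = (ox/ix)·sy from the equal-burst-size case above; the function, the source tuples, and the path labels are illustrative:

```python
def configure_paths(sources):
    """Choose a processing path per 3D source.

    sources maps a source name to (ix, ox, sy); the scaler is placed
    before capture when lambda = (ox / ix) * sy falls below 1.
    """
    paths = {}
    for name, (ix, ox, sy) in sources.items():
        lam = (ox / ix) * sy
        paths[name] = (["SCL", "CAP", "VFD"] if lam < 1
                       else ["CAP", "VFD", "SCL"])
    return paths

# Two sources sharing the network: a downscaled main picture and an
# upscaled picture-in-picture window (sizes are illustrative).
print(configure_paths({
    "main": (1920, 1280, 2 / 3),
    "pip": (640, 1920, 1.5),
}))
```

Each source thus gets its own path through the network, with the scaler on the capture side for the downscaled source and on the feed side for the upscaled one.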
  • Various embodiments of the invention relate to an integrated circuit, such as the SoC 100 described above with respect to FIG. 1, for example, which may be operable to selectively route and process 3D video data. For example, the processing network 300 described above with respect to FIG. 3A may be utilized in the SoC 100 to route and process 3D video data. The integrated circuit may comprise multiple devices, such as the various modules in the processing network 300, for example, which may be operable to be selectively interconnected to enable the routing and the processing of 3D video data. The integrated circuit may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory. Moreover, the integrated circuit may be operable to selectively interconnect one or more of the multiple devices based on the determination.
  • The integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor. The input format of the 3D video data may be a L/R input format or an O/U input format, and the output format of the 3D video data may be a L/R output format or an O/U output format. The integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data. The integrated circuit may be operable to determine the selective interconnection of the one or more devices on a picture-by-picture basis.
  • The selectively interconnected devices in the integrated circuit may be operable to horizontally scale the 3D video data and to vertically scale the 3D video data. Moreover, the selectively interconnected devices in the integrated circuit may be operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
  • In another embodiment of the invention, a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for scaling 3D video.
  • Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, controls the computer system such that it carries out the methods described herein.
  • The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

Claims (20)

1. A method, comprising:
in an integrated circuit operable to selectively route and process 3D video data, the integrated circuit comprising a plurality of devices that are operable to be selectively interconnected to enable the routing and the processing:
determining whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory; and
selectively interconnecting one or more of the plurality of devices based on the determination.
2. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.
3. The method of claim 2, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is a left-and-right output format.
4. The method of claim 2, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is an over-and-under output format.
5. The method of claim 2, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is a left-and-right output format.
6. The method of claim 2, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is an over-and-under output format.
7. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.
8. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined on a picture-by-picture basis.
9. The method of claim 1, comprising scaling the 3D video data in the selectively interconnected one or more of the plurality of devices in the integrated circuit, the scaling comprising a horizontal scaling and a vertical scaling.
10. The method of claim 1, comprising performing one or more operations in the selectively interconnected one or more of the plurality of devices in the integrated circuit, the one or more operations being performed before the 3D video data is scaled, after the 3D video data is scaled, or both.
11. A system, comprising:
an integrated circuit operable to selectively route and process 3D video data, the integrated circuit comprising a plurality of devices that are operable to be selectively interconnected to enable the routing and the processing;
the integrated circuit being operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory; and
the integrated circuit being operable to selectively interconnect one or more of the plurality of devices based on the determination.
12. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.
13. The system of claim 12, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is a left-and-right output format.
14. The system of claim 12, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is an over-and-under output format.
15. The system of claim 12, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is a left-and-right output format.
16. The system of claim 12, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is an over-and-under output format.
17. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.
18. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices on a picture-by-picture basis.
19. The system of claim 11, wherein the selectively interconnected one or more of the plurality of devices are operable to horizontally scale the 3D video data and to vertically scale the 3D video data.
20. The system of claim 19, wherein the selectively interconnected one or more of the plurality of devices are operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
US12/963,014 2009-12-08 2010-12-08 Method and system for scaling 3d video Abandoned US20110134217A1 (en)

Publications (1)

Publication Number Publication Date
US20110134217A1 true US20110134217A1 (en) 2011-06-09
