US20020030694A1 - Image processing apparatus and method


Info

Publication number
US20020030694A1
US20020030694A1 (application US09/815,781)
Authority
US
United States
Prior art keywords
image data
frame image
frame
merge
local
Prior art date
Legal status
Granted
Application number
US09/815,781
Other versions
US6924807B2
Inventor
Hitoshi Ebihara
Kazumi Sato
Masakazu Mokuno
Hideki Hara
Current Assignee
Sony Interactive Entertainment Inc
Sony Network Entertainment Platform Inc
Original Assignee
Sony Computer Entertainment Inc
Application filed by Sony Computer Entertainment Inc filed Critical Sony Computer Entertainment Inc
Assigned to SONY COMPUTER ENTERTAINMENT INC. reassignment SONY COMPUTER ENTERTAINMENT INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HARA, HIDEKI, SATO, KAZUMI, MOKUNO, MASAKAZU, EBIHARA, HITOSHI
Publication of US20020030694A1 publication Critical patent/US20020030694A1/en
Application granted granted Critical
Publication of US6924807B2 publication Critical patent/US6924807B2/en
Assigned to SONY NETWORK ENTERTAINMENT PLATFORM INC. reassignment SONY NETWORK ENTERTAINMENT PLATFORM INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SONY COMPUTER ENTERTAINMENT INC.
Assigned to SONY COMPUTER ENTERTAINMENT INC. reassignment SONY COMPUTER ENTERTAINMENT INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SONY NETWORK ENTERTAINMENT PLATFORM INC.
Status: Expired - Lifetime


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39: Control of the bit-mapped memory
    • G09G5/393: Arrangements for updating the contents of the bit-mapped memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining

Definitions

  • the present invention relates to an apparatus for processing image data to produce an image for display on a display screen and, more particularly, to producing a high quality image for display on a relatively large display screen.
  • Moving pictures such as those produced for display on “the big screen” at cinemas are usually provided by way of 35 mm film that is projected onto a relatively large screen.
  • graphics processors are deployed in computers, microprocessors, etc.
  • conventional graphics processors are designed to produce moving pictures on a computer display screen (such as a 17-inch monitor), a conventional television screen, etc., and not on relatively large screens, such as those found in cinemas.
  • conventional graphics processors are not capable of processing image data at a sufficient speed to produce moving pictures of sufficient quality, particularly when the moving pictures are to be displayed on a relatively large display screen.
  • a conventional graphics processor 10 may include three basic components, namely, a processing unit or software processor 12 , a rendering unit 14 , and a frame buffer 16 .
  • the conventional graphics processor 10 receives image data, including data concerning primitives (e.g., polygons used to model one or more objects of the image to be displayed) and produces a series of frames of image data (or pixel data), where each frame of image data contains the information necessary to display one frame of the image on the display screen. When these frames of image data are displayed at a sufficient frame rate, the appearance of movement is achieved.
  • the software processor 12 manipulates the image data to obtain a sequence of instructions (such as polygon instructions) for input to the rendering unit 14 .
  • the rendering unit 14 receives the sequence of instructions and produces pixel data, e.g., pixel position, color, intensity, etc.
  • the pixel data is stored in the frame buffer 16 and an entire frame of the pixel data is referred to herein as frame image data.
  • the frame buffer 16 is usually sized to correspond with the resolution of the display screen such that enough pixel data to cover an entire frame of the display screen may be stored. When a full frame of image data are stored in the frame buffer 16 , the frame image data are released from the frame buffer 16 to be displayed on the display screen.
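
As a rough illustration of the frame-buffer sizing just described, the sketch below computes the storage one full frame of pixel data requires; the 1920×1080 RGBA figures are hypothetical assumptions, not values taken from the patent.

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical illustration: a frame buffer must hold one pixel for every
// position on the display, so its size follows directly from the resolution.
int main() {
    const std::size_t width         = 1920; // assumed display width
    const std::size_t height        = 1080; // assumed display height
    const std::size_t bytesPerPixel = 4;    // e.g., RGBA, 8 bits per channel

    const std::size_t frameBytes = width * height * bytesPerPixel;
    std::printf("one frame of image data: %zu bytes (~%.1f MiB)\n",
                frameBytes, frameBytes / (1024.0 * 1024.0));
    return 0;
}
```
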
  • an apparatus for processing image data to produce an image for covering an image area of a display includes a plurality of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer; a control processor operable to provide instructions to the plurality of graphics processors; and one or more merge units operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce combined frame image data based thereon.
  • the plurality of graphics processors are grouped into respective sets of graphics processors;
  • the one or more merge units include a respective local merge unit coupled to each set of graphics processors, and a core merge unit coupled to each local merge unit;
  • the respective local merge units are operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon;
  • the core merge unit is operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon.
  • the apparatus preferably includes a video hub operable to receive at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data, the video hub being operatively coupled to the plurality of graphics processors such that (i) at least one of the successive frames of frame image data from one or more of the graphics processors may include at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data, and (ii) the merge unit is operable to produce a subsequent frame of the combined frame image data based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data.
  • the apparatus includes a memory hub operable to receive and store common image data, the memory hub being operatively coupled to the plurality of graphics processors such that (i) at least one of the successive frames of frame image data from one or more of the graphics processors may include at least some of the common image data, and (ii) the merge unit is operable to produce a frame of the combined frame image data based on the common image data.
  • the core merge unit and the respective local merge units are operatively coupled to the control processor by way of separate control data lines such that exclusive communication between the control processor, the core merge unit, and the respective local merge units at least concerning the instruction to operate in the one or more modes is obtained.
  • an apparatus for processing image data to produce an image for display on a display screen includes a plurality of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer, and each graphics processor including a synchronization counter operable to produce a local synchronization count indicating when the frame image data should be released from the local frame buffer; a control processor operable to provide instructions to the plurality of graphics processors; and at least one merge unit operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce combined frame image data based thereon.
  • the apparatus includes a data bus operatively coupled to the control processor and respective sets of the graphics processors; a respective local merge unit coupled to each set of the graphics processors and operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; a core merge unit coupled to each local merge unit and operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon; a control data bus operatively coupled to the core merge unit and the local merge units; a bus controller operatively coupled between the data bus and the control data bus and operable to transmit and receive data over the data bus and the control data bus on a priority basis.
  • the core merge unit transmits the merge synchronization signal to the respective graphics processors by way of the bus controller and the data bus.
  • the control processor communicates at least the instructions concerning the one or more modes of operation to the respective local merge units by way of the bus controller and the control data bus.
  • the respective local frame buffers of the respective groups of graphics processors are operatively coupled to the respective local merge units by way of separate data lines such that exclusive transmission of the frame image data from the respective local frame buffers to the respective local merge units is obtained; and the respective local merge units are operatively coupled to the core merge unit by way of separate data lines such that exclusive transmission of the local combined frame image data from the respective local merge units to the core merge unit is obtained.
  • the apparatus includes a packet switch operatively coupled to the control processor and respective sets of the graphics processors; a respective local merge unit coupled to each set of the graphics processors and operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; a core merge unit coupled to each local merge unit and operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon; a control data bus operatively coupled to the core merge unit and the local merge units; a packet switch controller operatively coupled between the packet switch and the control data bus and operable to transmit and receive data over the packet switch and the control data bus on a priority basis.
  • the control processor communicates at least the instructions concerning the one or more modes of operation to the respective local merge units and the core merge unit by way of the packet switch, the packet switch controller, and the control data bus.
  • the core merge unit transmits the merge synchronization signal to the respective graphics processors by way of the packet switch controller and the packet switch.
  • an apparatus for processing image data to produce an image for display is formed of a subset of a plurality of processing nodes coupled together on a packet-switched network.
  • the apparatus preferably includes: at least one accelerator node including a plurality of sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer; one or more merge nodes including one or more merge units, the one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being associated with each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being associated with each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon; at least one configuration node oper
  • the control node is an (n)th level control node
  • the at least one merge node is an (n)th level merge node
  • the packet switch node is an (n)th level packet switch node
  • at least one accelerator node is an (n)th level accelerator node that includes an (n ⁇ 1)th level control node and a plurality of (n ⁇ 1)th level accelerator nodes coupled together by way of an (n ⁇ 1)th level packet switch node over the packet-switched network.
  • a method for processing image data to produce an image for covering an image area of a display includes: rendering the image data into frame image data using a plurality of graphics processors; storing the frame image data in respective local frame buffers; and synchronously merging the frame image data from the respective local frame buffers to synchronously produce combined frame image data based thereon.
  • a method for processing image data to produce an image for display on a display screen includes: rendering the image data into frame image data using a plurality of graphics processors; storing the frame image data in respective local frame buffers; producing respective local synchronization counts indicating when the frame image data should be released from the respective local frame buffers; and synchronously producing combined frame image data from the frame image data.
  • a method for processing image data to produce an image for display is carried out using a subset of a plurality of processing nodes coupled together on a packet-switched network.
  • the method includes: selecting from among the plurality of processing nodes at least one accelerator node including a plurality of sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer; selecting from among the plurality of processing nodes one or more merge nodes including one or more merge units, the one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being associated with each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being associated with each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon.
  • FIG. 1 is a block diagram of a graphics processor in accordance with the prior art;
  • FIG. 2 is a block diagram of an alternative graphics processor in accordance with the prior art;
  • FIG. 3 is a block diagram illustrating an apparatus for processing image data in accordance with one or more aspects of the present invention;
  • FIG. 4 is a timing graph illustrating one or more aspects of the invention consistent with the apparatus of FIG. 3;
  • FIG. 5 is a timing graph illustrating one or more further aspects of the invention consistent with the apparatus of FIG. 3;
  • FIGS. 6 A-B are timing graphs illustrating one or more still further aspects of the invention consistent with the apparatus of FIG. 3;
  • FIGS. 7 A- 7 E are illustrative examples of various modes of operation consistent with one or more aspects of the present invention suitable for implementation using the apparatus for processing image data of FIG. 3;
  • FIG. 8 is a block diagram illustrating additional details concerning a merge unit that may be employed in the apparatuses for processing image data shown in FIG. 3;
  • FIG. 9 is a process flow diagram illustrating actions that may be carried out by the apparatus of FIG. 3;
  • FIG. 10 is a block diagram illustrating an apparatus for processing image data in accordance with one or more further aspects of the present invention.
  • FIG. 11 is a block diagram of a preferred graphics processor in accordance with one or more aspects of the present invention that may be employed in the apparatus of FIG. 3;
  • FIG. 12 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention.
  • FIGS. 13 A- 13 B are timing graphs that may be employed by the apparatus of FIG. 12;
  • FIG. 14 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention.
  • FIG. 15 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention.
  • FIG. 16 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention.
  • FIG. 17 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention.
  • FIG. 18 is a block diagram of a system for processing image data in accordance with one or more further aspects of the present invention.
  • FIG. 19 is a process flow diagram suitable for use with the system of FIG. 18;
  • FIG. 20 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention.
  • FIGS. 21 A- 21 B are block diagrams of instruction formats that may be employed by the apparatus of FIG. 20.
  • Referring to FIG. 3, there is shown a block diagram of an apparatus 100 for processing image data to produce an image for display on a display screen (not shown).
  • the image data may be obtained from a memory of the apparatus 100 or from a secondary memory, such as a hard-disk drive, CD-ROM, DVD-ROM, a memory associated with a communications network, etc.
  • the image data may represent object models, such as 3D and/or 2D polygonal models of objects used in the image to be displayed.
  • the image data may represent sequences of polygonal image instructions produced in accordance with a software program running on a suitable processor.
  • the apparatus 100 preferably includes a control processor 102 (and associated memory), a plurality of graphics processors 104 , at least one merge unit 106 , and a synchronization unit 108 .
  • the control processor 102 preferably communicates with the plurality of graphics processors 104 , the merge unit 106 , and the synchronization unit 108 by way of bus 126 .
  • separate data lines 127 couple each of the graphics processors 104 to the merge unit 106 such that exclusive communication between the respective graphics processors 104 and the merge unit 106 is obtained.
  • separate synchronization lines 129 preferably couple the synchronization unit 108 to each of the graphics processors 104 and the merge unit 106 such that exclusive communication therebetween is obtained.
  • each of the graphics processors 104 is operable to render image data into frame image data (e.g., pixel data) and to store the frame image data in a respective local frame buffer 112 . More particularly, at least some of the graphics processors 104 preferably include a rendering unit 110 for performing the rendering function and a local frame buffer 112 for at least temporarily storing the frame image data. In addition, it is preferred that each graphics processor 104 include a processing unit 114 operable to facilitate performing various processes on the image data, such as polygonal spatial transformation (e.g., translation, rotation, scaling, etc.); 3D to 2D transformation (e.g., perspective view transformation, etc.); and generating sequence instructions (e.g., polygon instructions). The graphics processors 104 preferably also each include an input/output interface 116 suitable for facilitating communication over the bus 126 . The graphics processors 104 may each include an optional local synchronization circuit 118 , which will be discussed in more detail hereinbelow.
  • the merge unit 106 is preferably operable to synchronously receive frame image data from each of the respective local frame buffers 112 and to synchronously produce combined frame image data based thereon.
  • the frame image data are synchronously received by the merge unit 106 in response to a merge synchronization signal produced by one of the merge unit 106 and the synchronization unit 108 and received by the graphics processors 104 over synchronization lines 129 .
  • the plurality of graphics processors 104 preferably synchronously release the frame image data from the respective local frame buffers 112 to the merge unit 106 in response to the merge synchronization signal.
  • the merge synchronization signal is synchronized in accordance with a display protocol (or standard) defining how respective frames of the combined frame image data are to be displayed.
  • display protocols may include the well known NTSC protocol, the HDTV protocol, the 35 mm movie protocol, the PAL protocol, etc.
  • the display protocol defines, among other things, the frame rate at which successive frames of the combined image data are to be displayed, and a blanking period, defining when the frames of the combined frame image data are to be refreshed.
  • the blanking period may dictate how long a given frame of the combined frame image data should dwell on the display prior to refresh (as is the case with the NTSC protocol).
  • the blanking period may dictate when a given frame of the combined frame image data should be removed from the display prior to refresh (as is the case with the 35 mm movie protocol).
  • the display protocol may be the NTSC protocol and the blanking periods may be vertical blanking periods.
  • the NTSC protocol provides that the vertical blanking periods occur on a periodic basis consistent with when a scanning beam of the CRT is de-energized and returned to an upper left-hand portion of the display screen.
  • the combined frame image data from the merge unit 106 are preferably ready for display just prior to the end of each vertical blanking period of the CRT display screen.
  • the term “blanking period” may be applied in a broader sense, such as representing the blanking periods utilized in cinematography (e.g., 35 mm moving images) or other display protocols.
  • the merge synchronization signal is either derived from or sets a desired frame rate at which the combined frame image data are provided.
  • the merge synchronization signal is preferably synchronized with the vertical blanking period (or frame rate) of a CRT display screen.
  • the merge synchronization signal is preferably synchronized with the frame rate (or blanking period) consistent with that protocol.
  • FIG. 4 is a timing diagram that illustrates the relationship between the operation of the apparatus 100 of FIG. 3 and certain aspects of a display protocol. For example, the relationship between the blanking period, the merge synchronization signal, the rendering period, and the merge period is shown.
  • a frame includes an interval during which an image frame is to be displayed or refreshed (indicated by a low logic state) and a period during which an image frame is to be made ready for display, e.g., the blanking period (indicated by a high logic state).
  • the merge synchronization signal 132 includes transitions, such as one of rising and falling edges, that are proximate to the ends 142 , 144 of the blanking periods (i.e., the transitions from high logic levels to low logic levels).
  • the merge synchronization signal 132 may include transitions that lead the ends 142 , 144 of the blanking periods.
  • the merge synchronization signal 132 may include transitions that are substantially coincident with the ends 142 , 144 of the blanking periods.
  • the merge synchronization signal 132 is preferably synchronized with the frame rate dictated by the display protocol such that the merge unit 106 initiates the production of, or releases, the combined frame image data for display at the end of at least one of the blanking periods.
  • At least some of the graphics processors 104 preferably initiate rendering the image data into the frame buffers 112 prior to the ends 142 , 144 of the blanking periods, such as at 136 A-D. As shown, four rendering processes are initiated at respective times 136 A-D, although any number of rendering processes may be employed using any number of graphics processors 104 without departing from the scope of the invention. Further, the rendering processes in each of the graphics processors 104 may be initiated at the same time or may be initiated at different times.
  • the rendering units 110 of the graphics processors 104 are preferably operable to begin rendering the image data into the respective frame buffers 112 asynchronously with respect to the merge synchronization signal 132 .
  • the control processor 102 may issue a “begin rendering” command (or trigger) to the graphics processors 104 , where the begin rendering command is asynchronous with respect to the merge synchronization signal 132 .
  • the begin rendering command may be given any suitable name, such as DRAWNEXT. It is also contemplated that the processing units 114 may initiate rendering as a result of software program execution without specific triggering from the control processor 102 .
  • Each (or a group) of the graphics processors 104 may complete the rendering process of the image data into the respective frame buffers 112 at any time (e.g., at 138 A-D). It is preferred that at least one of the graphics processors 104 is operable to issue a rendering complete signal to the control processor 102 when it has completed rendering a frame of the frame image data.
  • the rendering complete command may be given any suitable name, such as DRAWDONE.
  • In response to the merge synchronization signal 132 , the respective graphics processors 104 preferably release the frame image data from the respective local frame buffers 112 to the merge unit 106 such that the merge unit 106 may synchronously produce the combined frame image data therefrom.
  • the combined frame image data are preferably produced by the merge unit 106 during the interval between the end of one blanking period and the beginning of the next blanking period as shown by the logic transitions labeled 140 .
  • one or more of the graphics processors 104 preferably include a respective local synchronization circuit 118 operable to receive the merge synchronization signal 132 over synchronization lines 129 from the synchronization unit 108 or from the merge unit 106 .
  • the respective local synchronization units 118 are preferably further operable to deliver the synchronization signal to the various portions of the graphics processors 104 to enable them to synchronously release the respective frame image data from the local frame buffers 112 to the merge unit 106 .
  • the local synchronization circuits 118 preferably employ counters to count between respective transitions of the merge synchronization signal 132 .
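
The counting scheme just described might look like the following sketch; the class and method names (`LocalSyncCircuit`, `onClockTick`, `ticksPerSyncPeriod`) are illustrative assumptions, not the patent's implementation.

```cpp
// Sketch of a local synchronization circuit that counts clock cycles
// between transitions of the merge synchronization signal, releasing the
// local frame buffer at the expected point within each sync period.
class LocalSyncCircuit {
public:
    explicit LocalSyncCircuit(unsigned ticksPerSyncPeriod)
        : period_(ticksPerSyncPeriod) {}

    // Called on every transition of the merge synchronization signal 132.
    void onMergeSyncTransition() { count_ = 0; }

    // Called on every local clock cycle; returns true when the frame
    // image data should be released to the merge unit.
    bool onClockTick() {
        ++count_;
        return count_ == period_; // release point within the sync period
    }

private:
    unsigned period_;
    unsigned count_ = 0;
};
```
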
  • control processor 102 is preferably operable to instruct the respective graphics processors 104 and the merge unit 106 to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers 112 , and when the frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
  • one of these modes provides that at least some of the graphics processors complete rendering the image data into the respective frame buffers 112 prior to the end of each blanking period.
  • FIG. 5 is a timing diagram showing the relationship between the frame rate, the blanking period, the merge synchronization signal 132 , the rendering period, and the merge period carried out by the apparatus 100 of FIG. 3.
  • the plurality of graphics processors 104 may initiate the rendering of the image data into the respective local frame buffers 112 (e.g., at 136 ) either synchronously, asynchronously, in response to a begin rendering command (DRAWNEXT) from the control processor 102 or processing units 114 , etc., it being most preferred that at least a group of the graphics processors 104 initiate rendering at substantially the same time and at some point during a blanking period.
  • Each of the graphics processors 104 may complete the rendering process at different times, such as at 138 A-D.
  • the rendering processes need not be completed prior to the termination of each blanking period; rather, they may be completed prior to the termination of an integral number of blanking periods (such as two blanking periods as shown).
  • the graphics processors 104 may not complete a given rendering process prior to the end 142 of a first blanking period, they may complete the rendering process prior to the end 144 of a second blanking period.
  • each of the graphics processors 104 enjoys the benefit of additional time during which to render the image data into the respective local frame buffers 112 .
  • In response to the merge synchronization signal 132 , the respective graphics processors 104 preferably release the frame image data from the respective local frame buffers 112 to the merge unit 106 at the end of every second blanking period (such as at 144 ). New image data may be rendered into the local frame buffers 112 beginning at, for example, 146 .
  • Although the integral number of blanking periods illustrated here is two, any integral number may be employed in this mode of operation without departing from the scope of the invention.
  • one or more of the graphics processors 104 may each include an integral number of local frame buffers 112 (e.g., two local frame buffers used in so-called “double buffering”).
  • only one of the integral number of local frame buffers 112 of a given graphics processor 104 need contain a full frame of frame image data and, therefore, that graphics processor 104 need only complete rendering the image data into the respective integral number of local frame buffers 112 prior to the end of a corresponding integral number of blanking periods.
  • a first graphics processor 104 may include two local frame buffers 112 A, 112 A′ and a second graphics processor 104 may include two more local frame buffers 112 B, 112 B′.
  • the first graphics processor 104 should complete rendering the image data into local frame buffer 112 A prior to the end 142 of a first blanking period (e.g., at 138 A) and preferably before a next trigger to begin rendering is received. It is noted that the first graphics processor 104 need not complete rendering the image data into local frame buffer 112 A′ until prior to the end 144 of a second blanking period (e.g., at 138 A′) and preferably before yet another trigger to begin rendering is received. Indeed, the first graphics processor 104 need not even begin rendering the image data into local frame buffer 112 A′ until after the image data are rendered into the first local frame buffer 112 A.
  • the second graphics processor 104 should complete rendering the image data into local frame buffer 112 B prior to the end 142 of the first blanking period (e.g., at 138 B), while the second graphics processor 104 need not complete rendering the image data into local frame buffer 112 B′ until prior to the end 144 of the second blanking period (e.g., at 138 B′).
  • When frame image data are available for release from the local frame buffer 112 A and the local frame buffer 112 B prior to the end 142 of the first blanking period, such frame image data are preferably released in response to the merge synchronization signal 132 to the merge unit 106 , as shown in FIG. 6A at 140 AB.
  • an error condition could occur if one or both of the first and second graphics processors 104 fail to complete rendering into one or both of the local frame buffers 112 A′, 112 B′, respectively, prior to 144 .
  • In that case, release of the frame image data from local frame buffer 112 A′ (and/or 112 B′) to the merge unit 106 is not permitted (i.e., just after 144 ), and the contents of local frame buffers 112 A and 112 B are re-released to the merge unit 106 (at 140 AB 2 ).
  • Once rendering into the local frame buffers 112 A′ and 112 B′ completes, the frame image data therein may be released at the end of the next blanking period, e.g., at 140 A′B′. This advantageously minimizes the effects of the error condition.
  • the invention contemplates any number of graphics processors 104 each containing any number of local frame buffers 112 . It is noted that this mode may also apply to a group of graphics processors 104 each containing only one local frame buffer 112 . In this case, each graphics processor 104 would enjoy a time period during which rendering could be performed that corresponds to the number of graphics processors 104 (or local frame buffers 112 ) participating in the mode of operation.
  • Each graphics processor 104 would complete rendering into a given local frame buffer 112 prior to the end of an integral number of blanking periods that corresponds with the integral number of graphics processors 104 (or local frame buffers 112 ) participating in the mode of operation. For example, when four graphics processors 104 participate in the mode of operation and each graphics processor 104 includes a single local frame buffer 112 , each graphics processor 104 could enjoy a time period during which rendering may be performed that corresponds to four frames of the display protocol.
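
A minimal sketch of the rotating release schedule and the error fallback described above (re-releasing the previously displayed frame when rendering did not finish in time); the `LocalFrameBuffer` and `BufferRotation` structures are assumptions for illustration.

```cpp
#include <array>
#include <cstddef>

// With N frame buffers rotating through N blanking periods, a buffer is
// released only if its rendering completed; otherwise the previously
// displayed buffer is shown again (cf. the re-release at 140 AB 2 above).
struct LocalFrameBuffer {
    bool renderComplete = false;
    // ... pixel storage omitted ...
};

template <std::size_t N>
class BufferRotation {
public:
    // Called at the end of each blanking period; returns the index of the
    // buffer whose contents should go to the merge unit.
    std::size_t onBlankingEnd(std::array<LocalFrameBuffer, N>& buffers) {
        std::size_t candidate = (lastReleased_ + 1) % N;
        if (buffers[candidate].renderComplete) {
            buffers[candidate].renderComplete = false; // consume the frame
            lastReleased_ = candidate;
        }
        // On an incomplete render, lastReleased_ is unchanged and the
        // previous frame is re-released, masking the error condition.
        return lastReleased_;
    }

private:
    std::size_t lastReleased_ = 0;
};
```
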
  • the modes of operation in accordance with one or more aspects of the present invention preferably include at least one of area division, averaging, layer blending, Z-sorting and layer blending, and flip animation.
  • the area division mode of operation preferably provides that at least two of the local frame buffers 112 (as shown, four local frame buffers 112 A-D are contemplated) are partitioned into respective rendering areas 120 A-D that correspond with respective areas of the display screen that will be covered by the combined frame image data 122 , and non-rendering areas 124 A-D that are not utilized to carry frame image data.
  • an aggregate of the rendering areas 120 A-D results in a total rendering area that corresponds with a total area of the display screen that will be covered by the combined frame image data 122 .
  • the merge unit 106 is preferably operable to synchronously aggregate the respective frame image data from the respective rendering areas 120 A-D of the graphics processors 104 based on the known alpha blending technique to produce the combined frame image data.
  • the local frame buffers 112 A-D may be employed by separate graphics processors 104 or may be employed by, and distributed among, a lesser number of graphics processors 104 .
  • the area division mode of operation may take advantage of one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5 and 6 A, it being preferred that the one or more graphics processors 104 complete rendering the image data into the respective rendering areas 120 A-D of the local frame buffers 112 A-D prior to the end of each blanking period (for example, see FIG. 4).
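
A sketch of the area-division merge, under the assumption that each local buffer is screen-sized and contributes a single rectangle; the `Rect` layout and the name `mergeAreaDivision` are hypothetical.

```cpp
#include <cstdint>
#include <vector>

// Each local frame buffer contributes only its rendering area (a rectangle
// of the screen); the merge unit aggregates the rectangles into the
// combined frame image data.
struct Rect { int x, y, w, h; };

using Frame = std::vector<std::uint32_t>; // row-major pixels

void mergeAreaDivision(Frame& combined, int screenW,
                       const Frame& local, const Rect& area) {
    for (int row = 0; row < area.h; ++row)
        for (int col = 0; col < area.w; ++col) {
            int x = area.x + col, y = area.y + row;
            // copy only the rendering area; the non-rendering area of the
            // local buffer carries no frame image data
            combined[y * screenW + x] = local[y * screenW + x];
        }
}
```
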
  • the averaging mode of operation preferably provides that the local frame buffers 112 of at least two of the graphics processors 104 (as shown, four local frame buffers 112 A-D are contemplated) include rendering areas 120 A-D that each correspond with the total area covered by the combined frame image data 122 .
  • the averaging mode further provides that the merge unit 106 averages the respective frame image data from the local frame buffers 112 to produce the combined frame image data for display.
  • the averaging process may include at least one of scene anti-aliasing, alpha-blending (e.g., scene-by-scene), weighted averaging, etc. as are well within the knowledge of those skilled in the art.
  • the averaging mode of operation may employ one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5, and 6 A, it being preferred that each of the graphics processors participating in this mode of operation complete rendering the image data into the respective rendering areas 120 A-D of the local frame buffers 112 A-D prior to the end of each blanking period (for example, see FIG. 4).
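
The averaging merge might be sketched as follows, assuming screen-sized buffers stored as raw 8-bit components and equal weights; a weighted-averaging variant would simply scale each source before summing.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Every local frame buffer covers the whole screen; the merge unit averages
// them component-wise (e.g., for scene anti-aliasing). All frames are
// assumed to have identical sizes.
using Frame = std::vector<std::uint8_t>; // RGBA components, row-major

Frame mergeAverage(const std::vector<Frame>& locals) {
    Frame out(locals.front().size());
    for (std::size_t i = 0; i < out.size(); ++i) {
        unsigned sum = 0;
        for (const Frame& f : locals) sum += f[i];
        out[i] = static_cast<std::uint8_t>(sum / locals.size());
    }
    return out;
}
```
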
  • the layer blending mode preferably provides that at least some of the image data are rendered into the local frame buffers 112 (as shown, four such local frame buffers 112 A-D are contemplated) of at least two of the graphics processors 104 such that each of the local frame buffers 112 A-D includes frame image data representing at least a portion of the combined frame image data.
  • each portion of the combined frame image data may be an object of the final image.
  • the layer blending mode preferably further provides that each of the portions of the combined frame image data are prioritized, such as by Z-data representing depth.
  • the layer blending mode preferably further provides that the merge unit 106 is operable to synchronously produce the combined frame image data by layering each of the frame image data of the respective local frame buffers 112 A-D in an order according to the priority thereof. For example, layering a second frame of frame image data over a first frame of frame image data may result in overwriting some of the first frame image data depending on the priority of the layers.
  • the priority of the layers corresponds with the relative depth of the portions of the combined frame image data (e.g., objects of the image)
  • one layer of frame image data may be designated as being nearer to a point of view than another layer of frame image data.
  • an illusion of depth may be created by causing the layer of frame image data designated as being nearer to the point of view to overwrite at least portions of a layer of frame image data being farther from the point of view (for example, see FIG. 7C at 122 ).
  • the layering mode of operation may employ one or more of timing relationships discussed hereinabove with respect to FIGS. 4, 5, and 6 A, it being preferred that the graphics processors 104 participating in the layer blending mode complete rendering the image data into the respective local frame buffers 112 A-D prior to the end of each blanking period.
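
A sketch of priority-ordered layer blending, assuming a per-pixel validity flag carried in the alpha byte; both that flag and the `Layer` structure are illustrative assumptions. Frames are assumed to have equal pixel counts.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Frames are layered in priority order (e.g., by depth), nearer layers
// overwriting farther ones wherever their pixels carry image data.
struct Layer {
    int priority;                      // larger = nearer to the viewpoint
    std::vector<std::uint32_t> pixels; // zero alpha byte = nothing rendered
};

void mergeLayers(std::vector<Layer>& layers, std::vector<std::uint32_t>& out) {
    // sort farthest first, so nearer layers overwrite farther ones
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) { return a.priority < b.priority; });
    for (const Layer& l : layers)
        for (std::size_t i = 0; i < out.size(); ++i)
            if (l.pixels[i] & 0xFF000000u) // pixel carries image data
                out[i] = l.pixels[i];
}
```
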
  • the Z-sorting and layer blending mode preferably provides that at least some of the image data are rendered into the local frame buffers (as shown, four such local frame buffers 112 A-D are contemplated) of at least two of the graphics processors 104 such that the local frame buffers 112 A-D include frame image data representing at least a portion of the combined image data.
  • the portions of the combined frame image data may represent objects of the image to be displayed.
  • the Z-sorting and layer blending mode preferably further provides that the frame image data include Z-values representing image depth (e.g., on a pixel-by-pixel basis) and that the merge unit 106 is operable to synchronously produce the combined frame image data by Z-sorting and layering each of the frame image data in accordance with the image depth.
  • some portions of the combined frame image data such as that stored in the local frame buffer 112 A may overwrite other portions of the frame image data, such as that stored in local frame buffer 112 B assuming that the relative depths dictated by the Z-values provide for such overwriting.
  • the Z-sorting and layer blending mode may employ one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5, and 6 A, wherein it is preferred that the graphics processors 104 complete rendering the image data into the respective local frame buffers 112 prior to the end of each blanking period (for example, see FIG. 4).
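
Pixel-by-pixel Z-sorting might be sketched as follows; the `ZPixel` structure and the convention that smaller Z means nearer are assumptions for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// At each pixel position the nearest source wins. `out` should start as
// background pixels carrying a maximal z value.
struct ZPixel { std::uint32_t color; float z; };

void mergeZSort(const std::vector<std::vector<ZPixel>>& sources,
                std::vector<ZPixel>& out) {
    for (std::size_t i = 0; i < out.size(); ++i)
        for (const auto& src : sources)
            if (src[i].z < out[i].z) // nearer pixel overwrites farther one
                out[i] = src[i];
}
```
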
  • the flip animation mode preferably provides that the local frame buffers 112 (as shown, four local frame buffers 112 A-D are contemplated) of at least two graphics processors 104 include frame image data that are capable of covering the total area covered by the combined frame image data and that the merge unit 106 is operable to produce the combined frame image data by sequentially releasing the respective frame image data from the local frame buffers 112 of the graphics processors 104 .
  • the flip animation mode may employ one or more of the timing relationships discussed hereinabove, it being preferred that the graphics processors 104 complete rendering the image data into the respective frame buffers 112 A-D prior to the ends of an integral number of blanking periods, where the integral number of blanking periods corresponds to the number of graphics processors 104 participating in the flip animation mode (for example, see FIGS. 5 and 6A).
  • the integral number of blanking periods may correspond to the number of local frame buffers 112 participating in the flip animation mode.
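
At the merge unit, flip animation reduces to selecting one whole source buffer per displayed frame, as in this sketch (the function name is hypothetical); with N participating processors, each one thereby gains N frame periods of rendering time.

```cpp
#include <cstddef>

// The merge unit cycles through the participating local frame buffers,
// releasing one whole frame per display frame.
std::size_t flipAnimationSource(std::size_t frameNumber,
                                std::size_t numBuffers) {
    return frameNumber % numBuffers; // buffer whose frame is shown this time
}
```
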
  • the merge unit 106 preferably includes a scissoring block 202 , an alpha-test block 204 , a Z-sort block 206 , and an alpha-blending block 208 .
  • the scissoring block 202 preferably includes a plurality of sub-blocks 202 A, 202 B, 202 C, 202 D, etc., one such sub-block for each of the frame image data (or path) received.
  • the scissoring block 202 in total, and the sub-scissoring blocks 202 A-D in particular, are preferably operable to validate certain of the frame image data and to invalidate other portions of the frame image data (e.g., validating a portion of the frame image data corresponding to a given rectangular area for use in producing the combined frame image data).
  • This functionality is particularly useful in operating in the area division mode (see FIG. 7A and corresponding discussion hereinabove).
  • the output of the scissoring block 202 is received by the alpha-test block 204 , where certain of the frame image data are invalidated based on a comparison between alpha values of the frame image data and a constant.
  • the alpha-test block 204 preferably includes a plurality of sub-blocks 204 A, 204 B, 204 C, 204 D, etc., one such sub-block for each of the frame image data (or path) received.
  • the functionality of the alpha-test block 204 is particularly useful when the frame image data includes useful information concerning one or more objects utilized in producing the combined frame image data, but where other portions of the frame image data do not contain useful information.
  • for example, where the frame image data include an image of a tree, the alpha values associated with the pixels of the tree may be of sufficient magnitude to indicate that the data are valid, but the pixels of the frame image data associated with areas other than the tree may be unimportant and unusable and, therefore, advantageously discarded.
  • the Z-sort block 206 preferably performs Z-sorting of the frame image data on a pixel-by-pixel basis.
  • the Z-sort block 206 may perform Z-sorting on a frame image data basis, e.g., selecting one of the four sources (or paths) of frame image data as representing a nearest image, selecting a next one of the sources of frame image data as being the next nearest image, etc.
  • the alpha-blend block 208 preferably performs alpha-blending according to known formulas, such as: (1 − α) × (pixel, n−1) + α × (pixel, n). As shown, alpha-blending is preferably performed among each of the frame image data received.
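
Taken together, the alpha-test, Z-sort, and alpha-blend stages might operate per pixel as in the sketch below; scissoring (a rectangle test applied earlier in the pipeline) is omitted, the data layout and float-valued channels are assumptions, and the blend lines apply the formula quoted above.

```cpp
#include <cstddef>
#include <vector>

// One candidate pixel per input path (here, larger z = farther away).
struct Candidate {
    float r, g, b, a; // color and alpha of this path's pixel
    float z;          // depth
    bool  valid;      // set false by scissoring or the alpha test
};

struct Pixel { float r, g, b; };

Pixel mergePixel(std::vector<Candidate> c, float alphaThreshold) {
    // alpha test: invalidate pixels whose alpha fails the constant comparison
    for (Candidate& k : c)
        if (k.a < alphaThreshold) k.valid = false;

    Pixel out{0.f, 0.f, 0.f};
    // Z-sort: blend from farthest to nearest by repeatedly selecting the
    // remaining valid candidate with the largest z
    for (std::size_t n = 0; n < c.size(); ++n) {
        std::size_t farthest = c.size();
        for (std::size_t i = 0; i < c.size(); ++i)
            if (c[i].valid && (farthest == c.size() || c[i].z > c[farthest].z))
                farthest = i;
        if (farthest == c.size()) break; // no valid candidates remain
        const Candidate& k = c[farthest];
        out.r = (1 - k.a) * out.r + k.a * k.r; // (1 - a)*pixel(n-1) + a*pixel(n)
        out.g = (1 - k.a) * out.g + k.a * k.g;
        out.b = (1 - k.a) * out.b + k.a * k.b;
        c[farthest].valid = false;
    }
    return out;
}
```
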
  • Actions 170 and 172 relate to initializing the plurality of graphics processors 104 , the merge unit 106 , and the synchronization unit 108 .
  • the control processor 102 preferably transmits program data and format data to the graphics processors 104 .
  • the program data and the format data preferably imbue the graphics processors 104 with the necessary capability to carry out one or more of the modes of operation discussed hereinabove with respect to FIGS. 4 - 7 .
  • the program data may include at least portions of one or more software application programs used by the graphics processors 104 to process the image data.
  • software application programs may facilitate certain processing functions, such as polygonal spatial transformation, 3D-2D conversion, magnification/reduction (i.e., scaling), rotation, lighting, shading, coloring, etc.
  • the program data preferably facilitates the production of the sequence instructions, such as a sequence of polygonal instructions produced from the image data and suitable for rendering into the local frame buffers 112 .
  • the program data are transmitted to the plurality of graphics processors 104 by way of the bus 126 .
  • the format data preferably relate to at least one of the modes of operation (e.g., the timing relationships, area division, averaging, layer blending, z-sorting and layer blending, flip animation, etc.); display protocol information, such as the NTSC protocol, the HDTV protocol, the 35 mm film cinematography protocol, etc. More particularly, the display protocol may include information concerning the desired frame rate, blanking period, the display size, the display aspect ratio (e.g., the ratio between the display height to width), the image resolution, etc.
  • the format data ensure that the graphics processors 104 are configured to execute a particular mode or modes of operation before called upon to render the image data or release frame image data to the merge unit 106 .
  • the format data are transmitted to the plurality of graphics processors 104 by way of the bus 126 .
  • initial set-up data are transmitted to the merge unit 106 .
  • the initial setup data preferably includes a substantially similar set of data as the format data.
  • the graphics processors 104 and the merge unit 106 are configured to execute the same mode or modes of operation.
  • the initial setup data are transmitted to the merge unit 106 by way of bus 126 .
  • the graphics processors 104 preferably transmit system ready signals to at least one of the respective local synchronization units 118 and the control processor 102 when they have initialized themselves consistent with the program data and format data (action 174 ).
  • the system ready signals may have any suitable name, such as SYSREADY.
  • the system ready signals are preferably transmitted to the control processor 102 by way of the bus 126 .
  • the synchronization circuit 108 preferably periodically transmits the merge synchronization signal 132 to the control processor 102 and to the graphics processors 104 .
  • Alternatively, the merge unit 106 may periodically transmit the merge synchronization signal 132 to the graphics processors 104 .
  • the merge synchronization signal 132 is synchronized to a desired frame rate and blanking period associated with the desired display protocol (action 176 ).
  • the merge synchronization signal 132 is preferably transmitted to the graphic processors 104 by way of the dedicated synchronization lines 129 (or by way of the merge unit 106 ) and to the control processor 102 by way of the bus 126 .
  • Where the graphics processors 104 include a local synchronization circuit 118 , they preferably count a predetermined number of clock signals (or cycles) between transitions of the merge synchronization signal 132 to ensure that the frame image data are released at the proper time (action 178 ).
  • the control processor 102 preferably transmits a begin rendering command instruction (or trigger) DRAWNEXT indicating that the graphics processors 104 should initiate the rendering of the image data into the respective local frame buffers 112 .
  • This trigger may be transmitted to the graphics processors 104 over the bus 126 , it being preferred that the trigger is issued through the synchronization unit 108 to the respective local synchronization units 118 of the graphics processors 104 .
  • the graphics processors 104 need not receive an explicit trigger from the control processor 102 if an application software program running locally on the processing units 114 provides a begin rendering command itself.
  • the control processor 102 issues a trigger to at least some of the graphics processors 104 indicating when the graphics processors 104 should begin rendering the image data into the respective local frame buffers 112 .
  • the respective graphics processors 104 perform the rendering function on the image data to produce the frame image data and store the same in the respective local frame buffers 112 .
  • the control processor 102 may transmit subsequent setup data to the graphics processors 104 and the merge circuit 106 , for example, to change the mode of operation from frame to frame.
  • At action 186 at least one of the graphics processors 104 preferably issues a signal to the control processor 102 indicating that rendering is complete (e.g., DRAWDONE).
  • the rendering complete signal may be transmitted to the control processor 102 by way of the bus 126 , it being preferred that the signal is transmitted through the local synchronization circuit 118 to the synchronization unit 108 prior to being delivered to the control processor 102 .
  • the appropriate graphics processors 104 release the frame image data from the respective local frame buffers 112 to the merge unit 106 (e.g., when the merge synchronization signal 132 indicates that the end of an appropriate blanking period has been reached) and the merge unit 106 produces the combined frame image data for display (action 190 ). At least some of these processes are repeated frame-by-frame to produce a high quality moving image for display on the display screen.
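
The flow of FIG. 9 condenses to a control loop like the following sketch; the signal names SYSREADY, DRAWNEXT, and DRAWDONE come from the text, while every function here is a hypothetical stub standing in for a bus or synchronization-line transaction.

```cpp
#include <cstdio>

namespace {
void sendProgramAndFormatData() { std::puts("program/format data -> GPUs"); }
void sendMergeSetupData()       { std::puts("setup data -> merge unit"); }
void waitForSysReady()          { std::puts("wait: SYSREADY from all GPUs"); }
void broadcastDrawNext()        { std::puts("trigger: DRAWNEXT"); }
void waitForDrawDone()          { std::puts("wait: DRAWDONE"); }
void waitForMergeSync()         { std::puts("wait: merge sync (blanking end)"); }
void releaseAndMerge()          { std::puts("release buffers; merge frame"); }
}

int main() {
    sendProgramAndFormatData();  // actions 170/172: initialize the GPUs
    sendMergeSetupData();        // ...and the merge unit alike
    waitForSysReady();           // action 174: GPUs report ready
    for (int frame = 0; frame < 3; ++frame) { // a few frames for the demo
        broadcastDrawNext();     // begin rendering into the frame buffers
        waitForDrawDone();       // action 186: rendering complete
        waitForMergeSync();      // merge sync marks the end of blanking
        releaseAndMerge();       // action 190: combined frame produced
    }
    return 0;
}
```
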
  • the apparatus 100 readily lends itself to scalability when additional data throughput is required.
  • the apparatus 200 includes a control processor 102 , a plurality of graphics apparatuses 100 A-D, a core merge unit 106 N, and a core synchronization unit 108 N.
  • the respective graphics apparatuses 100 A, 100 B, 100 C, etc. in FIG. 10 are preferably substantially similar to the apparatus 100 shown in FIG. 3.
  • the merge unit 106 of FIG. 3 corresponds to respective local merge units 106 A, 106 B, 106 C, etc. that are coupled to respective sets of graphics processors 104 .
  • the local merge units 106 A, 106 B, 106 C, etc. each preferably produce local combined frame image data consistent with the modes of operation discussed hereinabove.
  • Each of the local merge units 106 A, 106 B, 106 C, etc. are preferably coupled to the core merge unit 106 N, which is operable to synchronously receive the local combined frame image data and to synchronously produce combined frame image data to be displayed.
  • the core merge unit 106 N is preferably operable to produce the merge synchronization signal 132 and each apparatus 100 A, 100 B, 100 C, etc. preferably includes a local synchronization unit 108 A, 108 B, 108 C, etc., respectively, (that corresponds to the synchronization unit 108 of FIG. 3).
  • the respective local synchronization units 108 A, 108 B, 108 C, etc. preferably utilize the merge synchronization signal 132 to ensure that the frame image data are synchronously released to each of the local merge units 106 A, 106 B, 106 C, etc. and ultimately released to the core merge unit 106 N.
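
The two-level merge of FIG. 10 composes naturally, as in this sketch: each local merge unit reduces the frames of its set of graphics processors, and the core merge unit reduces the local results. The pixel-wise `combine` shown is only a placeholder for whichever configured mode (area division, averaging, etc.) applies.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Frame = std::vector<std::uint32_t>;

// Placeholder merge policy: later nonzero pixels overwrite earlier ones.
Frame combine(const std::vector<Frame>& inputs) {
    Frame out = inputs.front();
    for (std::size_t f = 1; f < inputs.size(); ++f)
        for (std::size_t i = 0; i < out.size(); ++i)
            if (inputs[f][i]) out[i] = inputs[f][i];
    return out;
}

Frame mergeHierarchy(const std::vector<std::vector<Frame>>& sets) {
    std::vector<Frame> localResults;
    for (const auto& set : sets)      // local merge units 106 A-D
        localResults.push_back(combine(set));
    return combine(localResults);     // core merge unit 106 N
}
```
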
  • the respective local frame buffers 112 of the respective groups of graphics processors 104 A-D are operatively coupled to the respective local merge units 106 A-D by way of separate data lines 127 such that exclusive communication between the respective local frame buffers and the respective local merge units is obtained (e.g., such that exclusive transmission of the frame image data to the respective local merge units 106 A, 106 B, 106 C, etc. is obtained).
  • separate data lines 127 A-D couple the respective local merge units 106 A-D to the core merge unit 106 N such that exclusive communication therebetween is obtained (e.g., such that exclusive transmission of the local combined frame image data from the respective local merge units 106 A-D to the core merge unit 106 N is obtained).
  • separate synchronization lines preferably couple the respective local synchronization units 108 A-D to the core synchronization unit 108 N such that exclusive communication therebetween is obtained.
  • a bus substantially similar to the bus 126 of FIG. 3, preferably extends from the control processor 102 to each of the graphics processors 104 A, 104 B, 104 C, etc., the core merge unit 106 N, and the core synchronization unit 108 N.
  • the apparatus 200 preferably operates according to the high level flow diagram shown such that one or more of the modes of operation (see FIGS. 4 - 7 ) may be carried out, preferably on a frame-by-frame basis.
  • the control processor 102 is preferably operable to transmit subsequent setup data to the respective apparatuses 100 A-D, e.g., to the graphics processors 104 , the local merge units 106 A-D, and the core merge unit 106 N on a frame-by-frame basis.
  • the core merge unit 106 N and the respective local merge units 106 A-D are operatively coupled to the control processor 102 by way of separate control data lines 131 such that exclusive communication between the control processor 102 , the core merge unit 106 N, and the respective local merge units 106 A-D at least concerning the instructions to operate in different modes from one frame to the next is obtained.
  • each of the graphics processors 104 is preferably implemented consistent with the block diagram shown.
  • the salient functions of rendering 110 , frame buffering 112 , processing 114 , etc. discussed above are labeled as such, it being understood that the specific details concerning the operation of this configuration may be found in co-pending U.S. patent application Ser. No. 09/502,671, filed Feb. 11, 2000, entitled GAME MACHINE WITH GRAPHICS PROCESSOR, assigned to the assignee of the instant invention, and the entire disclosure of which is hereby incorporated by reference.
  • FIG. 12 illustrates an apparatus 300 for processing image data to produce an image for display in accordance with one or more further aspects of the present invention.
  • the apparatus 300 preferably includes a control processor 302 , a plurality of graphics processors 304 , and a merge unit 306 .
  • the control processor 302 is preferably operable to provide instructions to the plurality of graphics processors 304 .
  • the control processor 302 preferably includes a master timing generator 308 , a controller 310 , and a counter 312 .
  • the master timing generator 308 is preferably operable to produce a synchronization signal that is synchronized with blanking periods of a display protocol, such as one or more of the display protocols discussed hereinabove.
  • the controller 310 is preferably operable to provide instructions to the plurality of graphics processors 304 similar to the instructions that the control processor 102 of FIG. 3 provides to the graphics processors 104 of the apparatus 100 .
  • the control processor 302 preferably provides reset signals to the plurality of graphics processors 304 (utilized in combination with the synchronization signal) to synchronize the release of frame image data to the merge unit 306 as will be discussed in more detail below.
  • Each of the graphics processors 304 preferably includes a rendering unit 314 operable to render image data into frame image data and to store the frame image data in a respective local frame buffer (not shown).
  • each graphics processor 304 preferably includes at least some of the functional blocks employed in the graphics processors 104 of FIG. 3.
  • each graphics processor 304 preferably includes a synchronization counter 318 operable to produce a local synchronization count indicating when the frame image data should be released from the local frame buffer.
  • the respective synchronization counters 318 preferably increment or decrement their respective synchronization counts based on the synchronization signal from the timing generator 308 .
  • the graphics processors 304 are preferably operable to release their respective frame image data to the merge unit 306 when their respective local synchronization counts reach a threshold.
  • the reset signals issued by the controller 310 of the control processor 302 are preferably utilized to reset the synchronization counters 318 of the respective graphics processors 304 .
  • the control processor 302 is operable to manipulate the timing of the release of the frame image data from the respective rendering units 314 .
  • the apparatus 300 may facilitate the modes of operation discussed hereinabove with respect to FIGS. 4 - 7 , e.g., the timing relationships, area division, averaging, layer blending, z-sorting and layer blending, and flip animation.
  • timing relationships between the synchronization signal issued by the timing generator 308 , the reset signal, and a respective one of the synchronization counters 318 are illustrated in FIG. 13A.
  • the synchronization counter 318 increments (or decrements) on transitions of the synchronization signal.
  • the reset signal resets the synchronization counter 318 to begin counting from a predetermined level, such as zero.
  • as shown in FIG. 13B, an alternative configuration may be employed to achieve a finer timing accuracy.
  • the synchronization counters 318 may be substituted by, or operate in conjunction with, sub-synchronization counters (not shown) and the timing generator 308 may produce a sub-synchronization signal operating at a higher frequency than the synchronization signal.
  • the sub-synchronization counters preferably increment or decrement in response to transitions of the sub-synchronization signal and are reset upon receipt of the reset signal from the control processor 302 .
  • the graphics processors 304 are preferably operable to release their respective frame image data to the merge unit 306 when the respective sub-synchronization counters reach a threshold (which would be higher than the threshold employed using the timing of FIG. 13A).
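By way of illustration only, the following Python sketch models the counter behavior described above: a counter that advances on transitions of the synchronization signal (or, for finer accuracy, a higher-frequency sub-synchronization signal with a proportionally higher threshold) and that triggers release of the frame image data when its threshold is reached. The class name, threshold values, and tick loop are assumptions made for this sketch and do not appear in the patent.

```python
# Illustrative model of the synchronization counters 318 of FIGS. 13A-13B.
# All names and numeric values are assumptions for demonstration only.

class SyncCounter:
    """Counts transitions of a (sub-)synchronization signal; frame image
    data is released from the local frame buffer at a threshold count."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def reset(self):
        # Reset signal issued by the controller 310 of the control processor.
        self.count = 0

    def on_transition(self):
        # One transition of the synchronization signal from the master
        # timing generator 308 (a sub-synchronization counter would be
        # driven the same way, only at a higher frequency).
        self.count += 1
        return self.count >= self.threshold  # True -> release the frame


counter = SyncCounter(threshold=4)
for tick in range(10):
    if tick == 5:
        counter.reset()  # e.g., the control processor re-synchronizes release
    if counter.on_transition():
        print(f"tick {tick}: release frame image data to the merge unit")
        counter.reset()
```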
  • the apparatus 300 may be scaled in a substantially similar way as shown in FIG. 10, i.e., where the plurality of graphics processors 304 are grouped into respective sets, each set being coupled with a local merge unit; and the respective local merge units being coupled to a core merge unit.
  • the core merge unit is preferably operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data thereon.
  • FIG. 14 is a block diagram of an apparatus 350 that operates in accordance with one or more further aspects of the present invention.
  • the apparatus 350 is substantially similar to the apparatus 100 of FIG. 3.
  • the apparatus 350 includes a plurality of graphics processors 104 , one or more of which are preferably operable to render image data into frame image data and to store the frame image data in a respective local frame buffer.
  • the graphics processors 104 of apparatus 350 are preferably substantially similar to the graphics processors 104 of FIG. 3.
  • the apparatus 350 also includes at least one merge unit 106 operatively coupled to the plurality of graphics processors 104 and operable to synchronously receive the frame image data from the respective local frame buffers of the graphics processors 104 and to synchronously produce combined frame image data based thereon.
  • the apparatus 350 also includes a control processor 102 operatively coupled to the graphics processors and the merge unit 106 . It is most preferred that the portions of the apparatus 350 discussed to this point are substantially similar to those of FIG. 3 and, further, that the functionality of the apparatus 350 is substantially similar to the functionality of the apparatus 100 of FIG. 3.
  • the apparatus 350 also preferably includes a video transfer hub (VTH, or simply video hub) 352 operatively coupled to the output of the merge unit 106 , i.e., operable to receive at least one frame of the combined frame image data.
  • the video hub 352 is preferably further operable to receive at least one externally provided frame of frame image data 356 .
  • the video hub 352 is preferably coupled to the plurality of graphics processors 104 and the control processor 102 by way of the bus 126 .
  • the apparatus 350 also preferably includes a capture memory 354 operatively coupled to the video hub 352 and capable of receiving, storing, and subsequently releasing either or both of the at least one frame of combined frame image data and the at least one frame of external frame image data 356 .
  • upon command by one or more of the graphics processors 104 , or by the control processor 102 , the video hub 352 preferably facilitates the transmission of the at least one frame of the combined frame image data and/or the at least one frame of external frame image data 356 to one or more of the graphics processors 104 .
  • the one or more graphics processors 104 may then utilize the frame image data transmitted by the video hub 352 to produce a successive frame or frames of frame image data.
  • the merge unit 106 may receive these frame(s) of frame image data and produce a next frame of the combined frame image data based thereon.
  • the video hub 352 may be utilized to combine frame image data from an external source (e.g., from film that has been digitized, from a separate graphics processing unit, from a source of streaming video, etc.) with one or more frames of frame image data being processed by the apparatus 350 .
  • This functionality is particularly advantageous when images from film, such as 35 mm cinematography, are to be combined with digitally generated images to achieve desirable special effects in the final moving image.
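The video hub's capture-and-redistribute behavior described above can be summarized with a short, purely illustrative Python sketch; the class and method names, and the dictionary stand-ins for frames, are assumptions rather than elements of the disclosed hardware.

```python
# Illustrative sketch of the video transfer hub (VTH) 352 of FIG. 14.
# Frames are modeled as dictionaries; all names are assumptions.

class VideoHub:
    def __init__(self):
        self.capture_memory = []  # stands in for capture memory 354

    def capture(self, frame):
        # Receives a frame of combined frame image data from the merge unit,
        # or an externally provided frame (e.g., digitized 35 mm film).
        self.capture_memory.append(frame)

    def transmit(self, processors):
        # Upon command, forwards captured frames to the graphics processors
        # so they can contribute to a successive frame of frame image data.
        for frame in self.capture_memory:
            for p in processors:
                p.incorporate(frame)
        self.capture_memory.clear()


class GraphicsProcessor:
    def __init__(self, name):
        self.name = name

    def incorporate(self, frame):
        print(f"{self.name}: blends frame {frame['id']} ({frame['source']}) "
              "into its next rendered frame")


hub = VideoHub()
hub.capture({"id": 1, "source": "merge unit output"})
hub.capture({"id": 2, "source": "external film scan"})
hub.transmit([GraphicsProcessor("GP-0"), GraphicsProcessor("GP-1")])
```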
  • as the apparatus 350 is capable of synchronizing the merging function performed by the merge unit 106 in accordance with a display protocol, such as the display protocol of the external frame image data 356 , high quality combined images may be achieved. Furthermore, as the apparatus 350 may be readily scaled, very high data throughput may be achieved to match or exceed the image quality standards of external images, such as film. It is noted that the apparatus 350 of FIG. 14 may be scaled in a substantially similar way as that of the apparatus 100 of FIG. 3 as discussed hereinabove with respect to FIG. 10. Indeed, one or more video hubs 352 (and one or more capture memories 354 ) may be employed in a circuit topology substantially similar to FIG. 10 to achieve higher data throughputs.
  • FIG. 15 illustrates an apparatus 360 suitable for use in accordance with one or more further aspects of the present invention.
  • the apparatus 360 preferably includes several functional and/or circuit blocks that are substantially similar to those of FIG. 3, namely, a plurality of graphics processors 104 coupled to at least one merge unit 106 and to a control processor 102 .
  • the apparatus 360 is preferably substantially similar to the apparatus 100 of FIG. 3 with respect to these functional blocks.
  • the apparatus 360 further includes a memory transfer hub (MTH or simply memory hub) 362 operable to receive (for example, over the bus 126 ) and store common image data.
  • a memory 364 is preferably operatively coupled to the memory hub 362 such that the common image data may be stored and retrieved therefrom.
  • the common image data may take any form, such as texture data used when polygons are rendered into the respective local frame buffers. It is noted that such texture data often occupies a relatively large portion of memory, particularly when it has been decompressed.
  • the common image data (such as texture data) is preferably centrally stored in memory 364 .
  • the memory hub 362 preferably retrieves the common image data from the memory 364 and transfers that data to the one or more graphics processors 104 .
  • the memory hub 362 and associated memory 364 may be utilized in a scaled system, such as that shown in FIG. 10 and/or FIG. 14 , depending on the exigencies of the application.
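A minimal sketch of this central-storage idea follows, assuming a dictionary-backed store and an illustrative texture name; none of these identifiers come from the patent.

```python
# Illustrative sketch of the memory transfer hub (MTH) 362 and memory 364
# of FIG. 15: common image data (e.g., decompressed texture data) is stored
# once and served to all graphics processors. All names are assumptions.

class MemoryHub:
    def __init__(self):
        self._memory = {}  # stands in for memory 364

    def store(self, key, data):
        self._memory[key] = data  # common image data stored centrally

    def fetch(self, key):
        # Retrieved on behalf of any requesting graphics processor rather
        # than each processor holding its own (large) copy.
        return self._memory[key]


hub = MemoryHub()
hub.store("brick_texture", bytes(1024))  # dummy decompressed texel data

for gp in ("GP-0", "GP-1", "GP-2"):
    texels = hub.fetch("brick_texture")
    print(f"{gp}: received {len(texels)} bytes of common texture data")
```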
  • FIG. 16 is a block diagram illustrating an apparatus 400 suitable for use in accordance with one or more further aspects of the present invention.
  • the apparatus 400 is substantially similar to the apparatus 200 of FIG. 10.
  • the apparatus 400 includes a control processor 102 , a plurality of graphics apparatuses 100 A-D, and a core merge unit 106 N.
  • the apparatus 400 does not require a core synchronization unit 108 N as did the apparatus 200 of FIG. 10.
  • the functionality of these functional and/or circuit blocks is substantially similar to that discussed hereinabove with respect to FIGS. 3 and 10 and, therefore, will not be repeated here.
  • the core merge unit 106 N of apparatus 400 preferably includes a bus controller function 402 operatively coupled between the bus 126 and a control data bus 404 , where the control data bus 404 communicates with the local merge units 106 A-D and the core merge unit 106 N.
  • the local merge units 106 A-D also preferably include a bus controller function 402 .
  • although the bus controller functions 402 are shown as being implemented within the core merge unit 106 N and within the local merge units 106 A-D, it is understood that these functions may be separate from the core merge unit 106 N and the local merge units 106 A-D without departing from the scope of the invention.
  • the respective local merge units 106 A-D are operatively coupled to the core merge unit 106 N by way of the separate data lines 127 A-D such that exclusive transmission of the local combined frame image data from the respective local merge units 106 A-D to the core merge unit 106 N is obtained.
  • the apparatus 400 , like the apparatus 200 , preferably operates according to the high level flow diagram of FIG. 9 such that one or more of the modes of operation (see FIGS. 4 - 7 ) may be carried out, preferably on a frame-by-frame basis. More particularly, the control processor 102 is preferably operable to transmit subsequent setup data to the respective apparatuses 100 A-D on a frame-by-frame basis, where the setup data establishes the mode of operation of the graphics processors 104 and the merge units 106 .
  • the control processor 102 preferably communicates at least the instructions concerning the one or more modes of operation (i.e., the setup data) to the respective local merge units 106 A-D and the core merge unit 106 N by way of the bus controller functions 402 and the control data bus 404 .
  • the core merge unit 106 N is preferably operable to produce the merge synchronization signal 132 to facilitate synchronous release of the frame image data, the release of the local combined frame image data, and the production of the combined frame image data.
  • the core merge unit 106 N may transmit the merge synchronization signal 132 to the respective graphics processors 104 by way of the control data bus 404 , and the data bus 126 .
  • the bus controller function 402 is preferably operable to seize control of the data bus 126 upon receipt of the merge synchronization signal 132 over the control data bus 404 and transmit the merge synchronization signal 132 to the respective apparatuses 100 A-D on a priority basis.
  • the bus controller function 402 provides the function of a master bus controller.
  • the core merge unit 106 N may transmit the merge synchronization signal 132 to the respective graphics processors 104 by way of the control data bus 404 and the local merge units 106 A-D.
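The priority behavior of the bus controller function can be illustrated with a toy arbiter: ordinary setup traffic queues normally, while the merge synchronization signal preempts it. The queue discipline, priority values, and names below are assumptions made for this sketch, not a description of the actual circuit.

```python
# Toy model of the bus controller function 402 of FIG. 16 transmitting the
# merge synchronization signal 132 on a priority basis. All details are
# assumptions for illustration.

import heapq

PRIORITY_SYNC, PRIORITY_NORMAL = 0, 1

class BusController:
    def __init__(self):
        self._queue = []
        self._seq = 0  # preserves FIFO order within a priority level

    def post(self, message, priority=PRIORITY_NORMAL):
        heapq.heappush(self._queue, (priority, self._seq, message))
        self._seq += 1

    def drain(self):
        # The highest-priority (lowest-numbered) message is sent first,
        # modeling the controller seizing the data bus 126.
        while self._queue:
            _, _, message = heapq.heappop(self._queue)
            print("bus 126 ->", message)


bus = BusController()
bus.post("setup data for apparatus 100A")
bus.post("setup data for apparatus 100B")
bus.post("merge synchronization signal 132", priority=PRIORITY_SYNC)
bus.drain()  # the synchronization signal is transmitted ahead of setup data
```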
  • FIG. 17 is a block diagram illustrating an apparatus 500 suitable for use in accordance with one or more further aspects of the present invention.
  • the apparatus 500 of FIG. 17 is substantially similar to the apparatuses of FIGS. 10 and 16 as will be readily apparent to one skilled in the art when the common functional and/or circuit blocks are noted, such as the plurality of graphics processing apparatuses 100 A-D, the plurality of graphics processors 104 , the plurality of local merge units 106 A-D, the core merge unit 106 N, and the control processor 102 . Accordingly, details concerning these functional and/or circuit blocks will be omitted for purposes of clarity.
  • the apparatus 500 is preferably operable to function using a “connectionless network” in which the data transmitted between certain functional and/or circuit blocks are routed over a packet-switched network.
  • the data are organized into packets in accordance with a transmission control protocol (TCP) layer of the packet-switched network.
  • Each packet includes an identification number identifying it as being associated with a particular group of packets and an address of the destination functional and/or circuit block on the packet-switched network.
  • the TCP at the destination permits the reassembly of the packets into the original data.
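The packet structure described above (a group identification number plus a destination address, with reassembly at the receiver) can be sketched as follows; this is a toy stand-in for a real TCP layer, and every field and function name is an assumption.

```python
# Illustrative packetization/reassembly in the spirit of the TCP layer
# described above. Field names and packet sizes are assumptions.

from dataclasses import dataclass

@dataclass
class Packet:
    group_id: int   # identifies the group of packets forming one message
    seq: int        # position within the group, for reassembly
    dest: str       # destination functional and/or circuit block
    payload: bytes

def packetize(data, group_id, dest, size=4):
    return [Packet(group_id, i, dest, data[off:off + size])
            for i, off in enumerate(range(0, len(data), size))]

def reassemble(packets):
    # The TCP layer at the destination reorders the packets of a group
    # to recover the original data.
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))


packets = packetize(b"frame image data", group_id=7, dest="local merge unit 106A")
# Packets may arrive out of order on a packet-switched network:
assert reassemble(list(reversed(packets))) == b"frame image data"
print(f"{len(packets)} packets reassembled at {packets[0].dest}")
```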
  • a packet switch 504 is operatively coupled to the control processor 102 , the respective apparatuses 100 A-D, the plurality of graphics processors 104 , and one or more packet switch controller functions 502 within the core merge unit 106 N and the respective local merge units 106 A-D. It is noted that the packet switch controller functions 502 may be implemented separately from the core merge unit 106 N and/or the local merge units 106 A-D without departing from the scope of the invention.
  • as the graphics processors 104 are coupled to the control processor 102 by way of the packet-switched network, the program data and the format data may be transmitted to the graphics processors 104 in packetized form.
  • the one or more software application programs facilitating, for example, polygonal spatial transformation, 3D-2D conversion, magnification, reduction, rotation, lighting, shading, coloring, etc., may be transmitted to the graphics processors through the packet switch 504 .
  • the plurality of graphics processors 104 are coupled to the respective local merge units 106 by way of separate dedicated data lines 127 .
  • the plurality of local merge units 106 are coupled to the core merge unit 106 N by way of dedicated data lines 127 A-D.
  • the transfer of instructions from the control processor 102 to the local merge units 106 A-D and the core merge unit 106 N is at least in part carried out over the common control data bus 404 discussed above with respect to FIG. 16.
  • the control processor 102 is preferably operable to instruct the graphics processors 104 , the local merge units 106 A-D, and the core merge unit 106 N to operate in the one or more modes of operation on a frame-by-frame basis as discussed in detail hereinabove with respect to other configurations of the invention.
  • the control processor 102 preferably communicates instructions concerning the one or more modes of operation to the respective destination function and/or circuit blocks by way of the packet switch 504 , the instructions having been packetized in accordance with the TCP layer.
  • the core merge unit 106 N is preferably operable to produce the merge synchronization signal 132 as discussed in detail hereinabove.
  • the core merge unit 106 N may be operable to transmit the merge synchronization signal 132 to various destination functional and/or circuit blocks by way of the control data bus 404 , the packet switch controller function 502 , and the packet switch 504 .
  • the packet switch controller function 502 may be capable of seizing control of the packet switch 504 when the core merge unit 106 N transmits the merge synchronization signal 132 over the control data bus 404 such that the merge synchronization signal 132 may be promptly transmitted to the destination functional and/or circuit blocks.
  • the packet switch controller 502 need not seize the packet switch 504 . Rather, a design assumption may be made that the merge synchronization signal 132 will be received by the destination functional and/or circuit blocks (such as the graphics processors 104 ) within an acceptable tolerance.
  • the core merge unit 106 N could transmit the merge synchronization signal 132 to the respective graphics processors 104 by way of the control data bus 404 and the local merge units 106 A-D.
  • FIG. 18 is block diagram of an apparatus 600 for processing image data to produce an image on a display in accordance with one or more further aspects of the present invention.
  • the apparatus 600 preferably includes a plurality of processing nodes 602 , at least one of which is a control node 604 , that are coupled together on a packet-switched network, such as a local area network (LAN).
  • the packet-switched network (here a LAN) and an associated TCP layer are assumed to enjoy a data transmission rate (from source node to destination node) that is sufficiently high to support the synchronization schema discussed hereinabove to produce the image.
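To make the "sufficiently high" requirement concrete, a rough throughput estimate may be computed; the resolution, color depth, and frame rate below are illustrative assumptions and not figures taken from the patent.

```python
# Back-of-the-envelope estimate of the LAN throughput needed for one
# uncompressed stream of frame image data. All parameters are assumptions.

width, height = 1920, 1080     # pixels per frame
bytes_per_pixel = 3            # 24-bit color
frames_per_second = 60

bits_per_second = width * height * bytes_per_pixel * 8 * frames_per_second
print(f"~{bits_per_second / 1e9:.2f} Gbit/s per stream")
# -> ~2.99 Gbit/s, which shows why the network and TCP layer must be
#    assumed fast enough to support the synchronization schema.
```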
  • the control node 604 is preferably operable to select a subset of processing nodes from among the plurality of processing nodes 602 to participate in processing the image data to produce the image for display.
  • the subset of processing nodes preferably includes at least one accelerator node 606 , at least one merge node 608 , at least one packet switch node 610 , and at least one configuration node 612 .
  • the apparatus 600 may optionally include at least one core merge node 614 , at least one video hub node 616 , and at least one memory hub node 618 .
  • the at least one accelerator node 606 preferably includes one or more of the graphics processors 104 discussed hereinabove with respect to at least FIGS. 3, 10, and 14 - 17 . More particularly, the at least one accelerator node 606 preferably enjoys the functionality discussed hereinabove with respect to the one or more graphics processors 104 , although a repeated detailed discussion of that functionality is omitted for purposes of simplicity.
  • the at least one merge node 608 preferably includes at least one of the merge units 106 discussed hereinabove with respect to at least FIGS. 3, 10, and 14 - 17 .
  • the at least one merge node 608 enjoys the functionality of the at least one merge unit 106 , although a repeated detailed description of that functionality is omitted for the purposes of simplicity.
  • the control node 604 preferably includes the control processor 102 discussed hereinabove with respect to at least FIGS. 3, 10, and 14 - 17 , although a detailed description of the control processor 102 is omitted for the purposes of simplicity.
  • the packet switch node 610 is preferably operable to route the image data, frame image data, combined frame image data, any command instructions, and/or any other data among the plurality of processing nodes 602 .
  • the plurality of processing nodes 602 represent resources that may be tapped to achieve the production of the image for display. As these resources are distributed over the packet-switched network, it is desirable to engage in a process of selecting a subset of these resources to participate in the production of the image. Indeed, some of the resources may not be available to participate in producing the image, other resources may be available but may not have the requisite capabilities to participate in producing the image, etc.
  • the configuration node 612 is preferably operable to issue one or more node configuration requests to the plurality of processing nodes 602 over the packet-switched network (action 650 ). These node configuration requests preferably prompt the processing nodes 602 to transmit at least some information concerning their data processing capabilities, their availability, their destination address, etc.
  • the information sought in terms of their data processing capability includes at least one of: (i) a rate at which the image data can be processed into frame image data (e.g., a capability of a graphics processor 104 ), (ii) a number of frame buffers available (e.g., a capability of a graphics processor 104 ), (iii) frame image data resolution (e.g., a capability of a graphics processor 104 ), (iv) an indication of respective modes of operation supported by the graphics processors of a given node, (v) a number of parallel paths into which respective frame image data may be input for merging into the combined frame image data (e.g., a capability of a merge node 608 ), (vi) an indication of respective modes of operation supported by a merge unit, (vii) memory size available for storing data (e.g., a capability of a memory hub node 618 ), (viii) memory access speed (e.g., a capability of a memory hub node 618 ), and
  • At action 652 at least some of the processing nodes 602 preferably transmit information concerning their data processing capabilities, their destination addresses and their availability to the configuration node 612 over the packet-switched network.
  • the configuration node 612 is preferably further operable to distribute the received destination addresses to each processing node that responded to the node configuration requests (action 654 ).
  • the configuration node 612 is preferably operable to transmit the information provided by each processing node 602 in response to the node configuration requests to the control node 604 .
  • the control node is preferably operable to select the subset of processing nodes from among the processing nodes 602 that responded to the node configuration requests to participate in processing the image data to produce the image for display. This selection process is preferably based on the responses from the processing nodes 602 to the node configuration requests.
  • the control node 604 is preferably operable to transmit a request to participate in processing the image data to each of the subset of processing nodes 602 , thereby prompting them to accept participation.
  • the control node 604 is preferably further operable to transmit one or more further node configuration requests to one or more of the subset of processing nodes 602 that responded to the request to participate (action 662 ).
  • the further node configuration request includes at least a request for the node to provide information concerning a format in which the node expects to transmit and receive data, such as the image data, the frame image data, combined frame image data, processing instructions, etc.
  • the control node is preferably operable to determine which of the processing nodes 602 are to be retained in the subset of processing nodes to participate in processing the image data. This determination is preferably based on at least one of the data processing capabilities, format information, availability, etc. provided by the processing nodes 602 in response to the node configuration requests.
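A condensed, purely illustrative walk-through of this selection protocol is sketched below. The capability fields, the availability filter, and the retention rule are all assumptions invented for the example; the patent does not prescribe them.

```python
# Illustrative sketch of the node configuration/selection flow of FIG. 19.
# All node records, fields, and selection criteria are assumptions.

nodes = [
    {"addr": "10.0.0.2", "type": "accelerator", "available": True,  "frame_buffers": 4},
    {"addr": "10.0.0.3", "type": "accelerator", "available": False, "frame_buffers": 4},
    {"addr": "10.0.0.4", "type": "merge",       "available": True,  "merge_inputs": 4},
    {"addr": "10.0.0.5", "type": "accelerator", "available": True,  "frame_buffers": 1},
]

def node_configuration_request(all_nodes):
    # Actions 650/652: available nodes report capabilities and addresses.
    return [n for n in all_nodes if n["available"]]

def select_subset(responses, min_frame_buffers=2):
    # The control node retains only nodes adequate for the image to be
    # produced (the threshold here is an arbitrary example).
    return [n for n in responses
            if n["type"] != "accelerator"
            or n["frame_buffers"] >= min_frame_buffers]

for n in select_subset(node_configuration_request(nodes)):
    print(f"request to participate -> {n['addr']} ({n['type']})")
```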
  • the apparatus 600 preferably operates in substantial accordance with the process flow discussed hereinabove with respect to FIG. 19.
  • the apparatus 600 of FIG. 18 is preferably operable to provide the requisite resources to implement any of the previously discussed apparatuses hereinabove, for example, apparatus 100 of FIG. 3, apparatus 200 of FIG. 10, apparatus 350 of FIG. 14, apparatus 360 of FIG. 15, apparatus 400 of FIG. 16, apparatus 500 of FIG. 17, or any combination thereof.
  • the apparatus 600 of FIG. 18 is easily modified to include substantially those resources necessary to achieve the desired quality of the image for display.
  • as accelerator nodes 606 , merge nodes 608 , core merge nodes 614 , video hub nodes 616 , memory hub nodes 618 , etc. may be readily added to or removed from the subset of processing nodes 602 participating in the production of the image, a desirable quantum of processing power may be achieved on a case-by-case basis, no matter what level of processing power is required.
  • FIG. 20 is a block diagram of an apparatus 700 for processing image data to produce an image on a display.
  • the apparatus 700 preferably includes substantially the same general configuration of processing nodes 602 as illustrated in FIG. 18.
  • the apparatus 700 preferably includes the control node 604 , the accelerator node 606 , the merge node 608 , the packet switch node 610 , the configuration node 612 and optionally the core merge node 614 , the video hub node 616 , and the memory hub node 618 as shown in FIG. 18, where only certain of these nodes are shown in FIG. 20 for clarity.
  • the apparatus 700 enjoys the functionality discussed hereinabove with respect to the apparatus 600 of FIG. 18 and the process flow of FIG. 19, although a repeated discussion concerning the details of these functions is omitted for the purposes of clarity.
  • the processing nodes of apparatus 700 are coupled together on an open packet-switched network, such as the Internet.
  • the packet-switched network preferably employs suitable hardware and a TCP layer that enjoys a data transmission rate (e.g., from source node to destination node) that is sufficiently high to support the synchronization schema discussed hereinabove to produce the image.
  • the apparatus 700 contemplates that the processing nodes are arranged in a hierarchy, where the one or more accelerator nodes 606 (only one such accelerator 606 being shown for simplicity) are coupled to the packet switch node 610 on an (n)th level.
  • the control node 604 , the merge nodes 608 , 614 (not shown), the packet switch node 610 , the configuration node 612 (not shown), the video hub node 616 (not shown), and the memory hub node 618 (not shown) are also on the (n)th level.
  • At least one of the (n)th level accelerator nodes 606 preferably includes an (n−1)th level control node 604 and a plurality of (n−1)th level accelerator nodes 606 coupled together by way of an (n−1)th level packet switch node 610 .
  • the (n)th level accelerator node 606 includes an (n−1)th level control node 604 A, an (n−1)th level packet switch node 610 A, and seven (n−1)th level accelerator nodes 606 A 1 - 7 .
  • the (n)th level accelerator node 606 performs substantially the same functions in the aggregate as discussed hereinabove with respect to the at least one accelerator node 606 of FIG. 18.
  • the (n)th level accelerator node 606 would enjoy a higher level of processing power inasmuch as it includes seven accelerator nodes 606 A 1 - 7 , each including one or more graphics processors 104 . Moreover, each of the (n−1)th level accelerator nodes 606 A 1 - 7 preferably enjoys the functionality of the accelerator nodes 606 discussed hereinabove with respect to FIG. 18 inasmuch as each one preferably includes one or more graphics processors 104 , etc. Although seven (n−1)th level accelerator nodes 606 A 1 - 7 are illustrated, any number of such (n−1)th level accelerator nodes may be employed without departing from the scope of the invention.
  • one or more of the (n−1)th level accelerator nodes 606 A 1 - 7 , such as (n−1)th level accelerator node 606 A 6 , includes an (n−2)th level control node 604 A 6 and a plurality of (n−2)th level accelerator nodes 606 A 6 , 1 - m (where m may be any number, such as seven) coupled together by way of an (n−2)th level packet switch node 610 A 6 .
  • the (n−2)th level accelerator nodes 606 A 6 , 1 - m enjoy the functionality discussed hereinabove with respect to the accelerator nodes 606 of FIG. 18.
  • the number of levels, n may be any integer number.
  • one or more of the (n−2)th level accelerator nodes 606 A 6 , 1 - m , such as (n−2)th level accelerator node 606 A 6 , 4 , includes an (n−3)th level control node 604 A 6 , 4 and a plurality of (n−3)th level accelerator nodes 606 A 6 , 4 , 1 - m (where m may be any number, such as seven) coupled together by way of an (n−3)th level packet switch node 610 A 6 , 4 .
  • the (n−3)th level accelerator nodes 606 A 6 , 4 , 1 - m enjoy the functionality discussed hereinabove with respect to the accelerator nodes 606 of FIG. 18.
  • the number of levels, n may be any integer number.
  • control nodes 604 are preferably operable to select a subset of processing nodes from among the plurality of processing nodes 602 to participate in processing the image data to produce the image for display (see discussion of FIG. 19 above).
  • the selection process results in a subset of processing nodes 602 that may include n levels.
  • the (n)th level control node 604 is preferably operable to transmit one or more (n)th level information sets 750 over the packet-switched network to one or more of the subset of processing nodes 602 .
  • the information sets 750 preferably include at least one of instructions and data used in the production of the image.
  • the instructions preferably include information concerning the one or more modes of operation in which the graphics processors 104 , local merge units 106 , core merge unit 108 , etc. are to operate, preferably on a frame-by-frame basis as discussed hereinabove.
  • the data may include texture data, video data (such as externally provided frames of image data to be combined to produce the final image), etc.
  • Each of the (n)th level information sets 750 preferably includes an (n)th level header 752 indicating at least that the given information set was issued from the (n)th level control nodes 604 .
  • the header 752 may also identify the information set as including instruction information, data, or a combination of both.
  • the (n)th level information sets 750 also preferably include a plurality of (n)th level blocks 754 , each block including information and/or data for one or more of the (n)th level nodes, such as an (n)th level accelerator node 756 , an (n)th level merge node 758 , an (n)th level video hub node 760 , an (n)th level memory hub node 762 , etc.
  • the (n)th level switch node 610 is preferably operable to dissect the information sets 750 such that the respective (n)th level information blocks 756 - 762 are routed to the respective (n)th level nodes.
  • one or more of the (n)th level information sets 750 may include an (n)th level information block (e.g., the (n)th level information block 756 A) for an (n)th level accelerator node, such as the (n)th level accelerator node 606 A, that includes a plurality of (n−1)th level information sub-blocks 772 , one for each of the (n−1)th level accelerator nodes 606 A 1 - m .
  • the (n)th level information block 756 A also preferably includes a header 770 that indicates whether the information sub-blocks 772 contain instructions, data, etc.
  • the (n)th level switch node 610 is preferably operable to receive the information set 750 A, dissect the (n)th level information block 756 A for the (n)th level accelerator node 606 therefrom, and transmit the (n)th level information block 756 A to the (n−1)th level control node 604 A.
  • the (n−1)th level control node 604 A is preferably operable to transmit the respective (n−1)th level information sub-blocks 772 of the (n)th level information block 756 A to the (n−1)th level packet switch node 610 A such that the respective (n−1)th level information sub-blocks 772 may be transmitted to the respective (n−1)th level accelerator nodes 606 A 1 - m .
  • This functionality and process is preferably repeated for each level of accelerator nodes.
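The recursive dissection of information sets may be pictured with the following sketch, in which nested dictionaries stand in for the information sets, blocks, and sub-blocks of FIGS. 21A-21B; the dictionary layout and the `route` function are assumptions made for illustration.

```python
# Illustrative recursive routing of (n)th level information sets. The nested
# dictionary layout is an assumption; it mirrors the header/block structure
# of FIGS. 21A-21B.

def route(info_set, level):
    print(f"level {level}: dissecting set issued by {info_set['header']}")
    for dest, block in info_set["blocks"].items():
        if isinstance(block, dict) and "blocks" in block:
            # A block for a compound accelerator node is handed to that
            # node's lower-level control node, which repeats the process.
            print(f"level {level}: forward block for {dest} "
                  f"to level {level - 1} control node")
            route(block, level - 1)
        else:
            print(f"level {level}: deliver '{block}' to {dest}")


info_set = {
    "header": "(n)th level control node 604",
    "blocks": {
        "merge node 608": "mode-of-operation instructions",
        "accelerator node 606A": {
            "header": "(n)th level information block 756A",
            "blocks": {
                "accelerator node 606A1": "rendering instructions",
                "accelerator node 606A2": "texture data",
            },
        },
    },
}
route(info_set, level=2)  # e.g., a two-level hierarchy
```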
  • the apparatus 700 is preferably operable to provide the requisite resources to implement any of the previously discussed apparatuses hereinabove.
  • the apparatus 700 of FIG. 20 is easily modified to include substantially those resources necessary to achieve the desired quality of the image for display. Indeed, as the number and level of processing nodes may be readily increased or decreased, a desirable quantum of processing power may be achieved on a case-by-case basis, no matter what level of processing power is required.
  • a method of processing image data to produce an image for display on a display screen may be achieved utilizing suitable hardware, such as that illustrated in FIGS. 3, 8, 10 - 18 and 20 , and/or utilizing a manual or automatic process.
  • An automatic process may be implemented utilizing any of the known processors that are operable to execute instructions of a software program.
  • the software program preferably causes the processor (and/or any peripheral systems) to execute certain steps in accordance with one or more aspects of the present invention.
  • alternatively, the steps themselves may be performed using manual techniques.
  • the steps and/or actions of the method preferably correspond with the functions discussed hereinabove with respect to at least portions of the hardware of FIGS. 3, 8, 10 - 18 and 20 .

Abstract

An apparatus for processing image data to produce an image for covering an image area of a display includes a plurality of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer; a control processor operable to provide instructions to the plurality of graphics processors; and at least one merge unit operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce combined frame image data based thereon.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Patent Application No. ______, entitled SYSTEM AND METHOD FOR IMAGE PROCESSING, filed Mar. 21, 2001, the entire disclosure of which is hereby incorporated herein by reference. This application further incorporates herein by reference the entire disclosure of U.S. Provisional Patent Application No. ______, entitled IMAGE PROCESSING APPARATUS, filed Mar. 22, 2001.[0001]
  • BACKGROUND OF THE INVENTION
  • The present invention relates to an apparatus for processing image data to produce an image for display on a display screen and, more particularly, to produce a high quality image for display on a relatively large display screen. [0002]
  • Moving pictures, such as those produced for display on “the big screen” at cinemas, are usually provided by way of 35 mm film that is projected onto a relatively large screen. The use of conventional graphics processors (employing computers, microprocessors, etc.) to produce moving pictures is becoming more popular inasmuch as state of the art graphics processors are becoming more sophisticated and capable of producing quality images. As a practical matter, however, conventional graphics processors are designed to produce moving pictures on a computer display screen (such as a 17-inch monitor), a conventional television screen, etc., and not on relatively large screens, such as those found in cinemas. Indeed, conventional graphics processors are not capable of processing image data at a sufficient speed to produce moving pictures of sufficient quality, particularly when the moving pictures are to be displayed on a relatively large display screen. [0003]
  • The processing limitations of conventional graphics processors affect image quality in two basic ways, namely, the resolution (e.g., the total number of pixels available for each frame of image data) is insufficient for relatively large display screens, and the frame rate (e.g., the number of frames of image data produced each second) is insufficient to meet or exceed the frame rate for the 35 mm film cinematography protocol. [0004]
  • With reference to FIG. 1, a conventional graphics processor 10 may include three basic components, namely, a processing unit or software processor 12, a rendering unit 14, and a frame buffer 16. At its core, the conventional graphics processor 10 receives image data, including data concerning primitives (e.g., polygons used to model one or more objects of the image to be displayed) and produces a series of frames of image data (or pixel data), where each frame of image data contains the information necessary to display one frame of the image on the display screen. When these frames of image data are displayed at a sufficient frame rate, the appearance of movement is achieved. [0005]
  • The software processor 12 manipulates the image data to obtain a sequence of instructions (such as polygon instructions) for input to the rendering unit 14. The rendering unit 14 receives the sequence of instructions and produces pixel data, e.g., pixel position, color, intensity, etc. The pixel data is stored in the frame buffer 16 and an entire frame of the pixel data is referred to herein as frame image data. The frame buffer 16 is usually sized to correspond with the resolution of the display screen such that enough pixel data to cover an entire frame of the display screen may be stored. When a full frame of image data are stored in the frame buffer 16, the frame image data are released from the frame buffer 16 to be displayed on the display screen. [0006]
  • Among the processing bottlenecks caused by the conventional graphics processor 10 of FIG. 1 that affect processing speed is the rate at which the software processor 12 and rendering unit 14 can produce the frame image data. In order to ameliorate this bottleneck, it has been proposed to provide a graphics processor 20 having parallel software processing/rendering units 18 as illustrated in FIG. 2. The output from these parallel processing units 18 is input to a single frame buffer 16 from which the frame image data are released for display on the display screen. Although the graphics processor 20 of FIG. 2 ameliorates one of the processing bottlenecks by increasing the rate at which the image data are processed and rendered, just like peeling back the layers of an onion, an additional processing bottleneck emerges. Indeed, the rate at which the frame image data can be stored into the frame buffer 16 and released therefrom is insufficient to adequately meet the data throughput requirements for producing a high quality moving picture on a relatively large display screen. [0007]
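The bottleneck can be made concrete with a toy calculation: however many rendering units operate in parallel, aggregate throughput is capped by the single frame buffer's fill/release rate. The figures used below are arbitrary assumptions chosen only to illustrate the effect.

```python
# Toy illustration of the single-frame-buffer bottleneck of FIG. 2.
# All rates are arbitrary assumptions.

renderers = 4
pixels_per_renderer_per_sec = 50e6   # combined rendering capacity scales...
frame_buffer_rate = 80e6             # ...but the lone frame buffer does not

render_capacity = renderers * pixels_per_renderer_per_sec
effective = min(render_capacity, frame_buffer_rate)
print(f"rendering capacity: {render_capacity / 1e6:.0f} Mpixel/s")
print(f"effective throughput: {effective / 1e6:.0f} Mpixel/s (buffer-bound)")
```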
  • Accordingly, there is a need in the art for a new apparatus and/or method for providing graphics processing that significantly increases processing speed such that high resolution moving pictures may be produced for display on a large display screen. [0008]
  • SUMMARY OF THE INVENTION
  • In accordance with at least one aspect of the present invention, an apparatus for processing image data to produce an image for covering an image area of a display includes a plurality of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer; a control processor operable to provide instructions to the plurality of graphics processors; and one or more merge units operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce combined frame image data based thereon. [0009]
  • Preferably, the plurality of graphics processors are grouped into respective sets of graphics processors; the one or more merge units include a respective local merge unit coupled to each set of graphics processors, and a core merge unit coupled to each local merge unit; the respective local merge units are operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; and the core merge unit is operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon. [0010]
  • In accordance with at least one further aspect of the present invention, the apparatus preferably includes a video hub operable to receive at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data, the video hub being operatively coupled to the plurality of graphics processors such that (i) at least one of the successive frames of frame image data from one or more of the graphics processors may include at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data, and (ii) the merge unit is operable to produce a subsequent frame of the combined frame image data based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data. [0011]
  • Preferably, the apparatus includes a memory hub operable to receive and store common image data, the memory hub being operatively coupled to the plurality of graphics processors such that (i) at least one of the successive frames of frame image data from one or more of the graphics processors may include at least some of the common image data, and (ii) the merge unit is operable to produce a frame of the combined frame image data based on the common image data. [0012]
  • In accordance with at least one further aspect of the present invention, the core merge unit and the respective local merge units are operatively coupled to the control processor by way of separate control data lines such that exclusive communication between the control processor, the core merge unit, and the respective local merge units at least concerning the instruction to operate in the one or more modes is obtained. [0013]
  • In accordance with at least one further aspect of the present invention, an apparatus for processing image data to produce an image for display on a display screen, includes a plurality of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer, and each graphics processor including a synchronization counter operable to produce a local synchronization count indicating when the frame image data should be released from the local frame buffer; a control processor operable to provide instructions to the plurality of graphics processors; and at least one merge unit operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce combined frame image data based thereon. [0014]
  • In accordance with at least one further aspect of the present invention, the apparatus includes a data bus operatively coupled to the control processor and respective sets of the graphics processors; a respective local merge unit coupled to each set of the graphics processors and operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; a core merge unit coupled to each local merge unit and operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon; a control data bus operatively coupled to the core merge unit and the local merge units; a bus controller operatively coupled between the data bus and the control data bus and operable to transmit and receive data over the data bus and the control data bus on a priority basis. [0015]
  • Preferably, the core merge unit transmits the merge synchronization signal to the respective graphics processors by way of the bus controller and the data bus. Further, it is preferred that the control processor communicates at least the instructions concerning the one or more modes of operation to the respective local merge units by way of the bus controller and the control data bus. [0016]
  • In accordance with at least one further aspect of the present invention, the respective local frame buffers of the respective groups of graphics processors are operatively coupled to the respective local merge units by way of separate data lines such that exclusive transmission of the frame image data from the respective local frame buffers to the respective local merge units is obtained; and the respective local merge units are operatively coupled to the core merge unit by way of separate data lines such that exclusive transmission of the local combined frame image data from the respective local merge units to the core merge unit is obtained. [0017]
  • In accordance with at least one further aspect of the present invention, the apparatus includes a packet switch operatively coupled to the control processor and respective sets of the graphics processors; a respective local merge unit coupled to each set of the graphics processors and operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; a core merge unit coupled to each local merge unit and operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon; a control data bus operatively coupled to the core merge unit and the local merge units; a packet switch controller operatively coupled between the packet switch and the control data bus and operable to transmit and receive data over the packet switch and the control data bus on a priority basis. [0018]
  • Preferably, the control processor communicates at least the instructions concerning the one or more modes of operation to the respective local merge units and the core merge unit by way of the packet switch, the packet switch controller, and the control data bus. Further, it is preferred that the core merge unit transmits the merge synchronization signal to the respective graphics processors by way of the packet switch controller and the packet switch. [0019]
  • In accordance with at least one further aspect of the present invention, an apparatus for processing image data to produce an image for display is formed of a subset of a plurality of processing nodes coupled together on a packet-switched network. The apparatus preferably includes: at least one accelerator node including a plurality of sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer; one or more merge nodes including one or more merge units, the one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being associated with each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being associated with each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon; at least one configuration node operable to facilitate selecting the subset of processing nodes; a control node including a control processor operable to provide instructions to the subset of processing nodes over the packet-switched network, and operable to select the subset of processing nodes to participate in processing the image data to produce the image for display; and at least one packet switch node operable to route data packets between the subset of nodes, the data packets forming at least the image data, frame image data, and combined frame image data. [0020]
  • In accordance with at least one further aspect of the invention, the control node is an (n)th level control node, the at least one merge node is an (n)th level merge node, the packet switch node is an (n)th level packet switch node, and at least one accelerator node is an (n)th level accelerator node that includes an (n−1)th level control node and a plurality of (n−1)th level accelerator nodes coupled together by way of an (n−1)th level packet switch node over the packet-switched network. [0021]
  • In accordance with at least one further aspect of the present invention, a method for processing image data to produce an image for covering an image area of a display, includes: rendering the image data into frame image data using a plurality of graphics processors; storing the frame image data in respective local frame buffers; and synchronously merging the frame image data from the respective local frame buffers to synchronously produce combined frame image data based thereon. [0022]
  • In accordance with at least one further aspect of the present invention, a method for processing image data to produce an image for display on a display screen, includes: rendering the image data into frame image data using a plurality of graphics processors; storing the frame image data in respective local frame buffers; producing respective local synchronization counts indicating when the frame image data should be released from the respective local frame buffers; and synchronously producing combined frame image data from the frame image data. [0023]
  • In accordance with at least one further aspect of the present invention, a method for processing image data to produce an image for display is carried out using a subset of a plurality of processing nodes coupled together on a packet-switched network. Preferably, the method includes: selecting from among the plurality of processing nodes at least one accelerator node including a plurality of sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer; selecting from among the plurality of processing nodes one or more merge nodes including one or more merge units, the one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being associated with each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being associated with each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon; establishing a control node including a control processor operable to provide instructions to the subset of processing nodes over the packet-switched network, and operable to select the subset of processing nodes to participate in processing the image data to produce the image for display; and employing at least one packet switch node operable to route data packets between the subset of nodes, the data packets forming at least the image data, frame image data, and combined frame image data. [0024]
  • Other features, aspects, and advantages of the present invention will become apparent to one skilled in the art from the disclosure herein when taken in conjunction with the accompanying drawings.[0025]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For the purpose of illustrating the various aspects of the invention, there are shown in the drawings forms which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown. [0026]
  • FIG. 1 is a block diagram of a graphics processor in accordance with the prior art; [0027]
  • FIG. 2 is a block diagram of an alternative graphics processor in accordance with the prior art; [0028]
  • FIG. 3 is a block diagram illustrating an apparatus for processing image data in accordance with one or more aspects of the present invention; [0029]
  • FIG. 4 is a timing graph illustrating one or more aspects of the invention consistent with the apparatus of FIG. 3; [0030]
  • FIG. 5 is a timing graph illustrating one or more further aspects of the invention consistent with the apparatus of FIG. 3; [0031]
  • FIGS. 6A-B are timing graphs illustrating one or more still further aspects of the invention consistent with the apparatus of FIG. 3; [0032]
  • FIGS. 7A-7E are illustrative examples of various modes of operation consistent with one or more aspects of the present invention suitable for implementation using the apparatus for processing image data of FIG. 3; [0033]
  • FIG. 8 is a block diagram illustrating additional details concerning a merge unit that may be employed in the apparatuses for processing image data shown in FIG. 3; [0034]
  • FIG. 9 is a process flow diagram illustrating actions that may be carried out by the apparatus of FIG. 3; [0035]
  • FIG. 10 is a block diagram illustrating an apparatus for processing image data in accordance with one or more further aspects of the present invention; [0036]
  • FIG. 11 is a block diagram of a preferred graphics processor in accordance with one or more aspects of the present invention that may be employed in the apparatus of FIG. 3; [0037]
  • FIG. 12 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention; [0038]
  • FIGS. 13A-13B are timing graphs that may be employed by the apparatus of FIG. 12; [0039]
  • FIG. 14 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention; [0040]
  • FIG. 15 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention; [0041]
  • FIG. 16 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention; [0042]
  • FIG. 17 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention; [0043]
  • FIG. 18 is a system for processing image data in accordance with one or more further aspects of the present invention; [0044]
  • FIG. 19 is a process flow diagram suitable for use with the system of FIG. 18; [0045]
  • FIG. 20 is a block diagram of an apparatus for processing image data in accordance with one or more further aspects of the present invention; and [0046]
  • FIGS. 21A-21B are block diagrams of instruction formats that may be employed by the apparatus of FIG. 20. [0047]
  • DETAILED DESCRIPTION OF THE INVENTION
  • Referring to FIG. 3, there is shown a block diagram of an apparatus 100 for processing image data for producing an image for display on a display screen (not shown). It is noted that the image data may be obtained from a memory of the apparatus 100 or from a secondary memory, such as a hard-disk drive, CD-ROM, DVD-ROM, a memory associated with a communications network, etc. The image data may represent object models, such as 3D and/or 2D polygonal models of objects used in the image to be displayed. Alternatively, the image data may represent sequences of polygonal image instructions produced in accordance with a software program running on a suitable processor. [0048]
  • The apparatus 100 preferably includes a control processor 102 (and associated memory), a plurality of graphics processors 104, at least one merge unit 106, and a synchronization unit 108. The control processor 102 preferably communicates with the plurality of graphics processors 104, the merge unit 106, and the synchronization unit 108 by way of bus 126. In accordance with at least one aspect of the present invention, separate data lines 127 couple each of the graphics processors 104 to the merge unit 106 such that exclusive communication between the respective graphics processors 104 and the merge unit 106 is obtained. Further, separate synchronization lines 129 preferably couple the synchronization unit 108 to each of the graphics processors 104 and the merge unit 106 such that exclusive communication therebetween is obtained. [0049]
  • [0050] At least some, and preferably each, of the graphics processors 104 is operable to render image data into frame image data (e.g., pixel data) and to store the frame image data in a respective local frame buffer 112. More particularly, at least some of the graphics processors 104 preferably include a rendering unit 110 for performing the rendering function and a local frame buffer 112 for at least temporarily storing the frame image data. In addition, it is preferred that each graphics processor 104 include a processing unit 114 operable to facilitate performing various processes on the image data, such as polygonal spatial transformation (e.g., translation, rotation, scaling, etc.); 3D to 2D transformation (e.g., perspective view transformation, etc.); and generating sequence instructions (e.g., polygon instructions). The graphics processors 104 preferably also each include an input/output interface 116 suitable for facilitating communication over the bus 126. The graphics processors 104 may each include an optional local synchronization circuit 118, which will be discussed in more detail hereinbelow.
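  • The functional partitioning just described might be modeled, in outline, as follows. This is a minimal C sketch for orientation only, not the patent's implementation; every name, size, and type in it (pixel_t, local_frame_buffer, FB_WIDTH, and so on) is an illustrative assumption.

```c
/* Illustrative sketch of the FIG. 3 partitioning; all names, sizes, and
 * types are assumptions for orientation, not the patent's definitions. */
#include <stdint.h>

#define FB_WIDTH  640                 /* assumed frame-buffer dimensions */
#define FB_HEIGHT 480

typedef uint32_t pixel_t;             /* e.g., packed RGBA */

typedef struct {
    pixel_t pixels[FB_HEIGHT][FB_WIDTH];
} local_frame_buffer;                 /* local frame buffer (112) */

typedef struct {
    void (*render)(local_frame_buffer *fb); /* rendering unit (110)     */
    local_frame_buffer fb;                  /* local frame buffer (112) */
    int sync_count;                         /* local sync circuit (118) */
    int frame_ready;                        /* rendering-complete flag  */
} graphics_processor;                       /* one graphics processor (104) */

typedef struct {
    graphics_processor gp[4];    /* plurality of graphics processors  */
    local_frame_buffer combined; /* output side of the merge unit (106) */
} apparatus;

int main(void)
{
    static apparatus a;          /* static: the frame stores are large */
    (void)a;
    return 0;
}
```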
  • [0051] It is understood that the functional blocks illustrated in FIG. 3 are partitioned in a preferred fashion, although it is contemplated that the functional blocks may be partitioned in any of a plurality of other ways without departing from the spirit and scope of the invention as claimed.
  • [0052] The merge unit 106 is preferably operable to synchronously receive frame image data from each of the respective local frame buffers 112 and to synchronously produce combined frame image data based thereon. Preferably, the frame image data are synchronously received by the merge unit 106 in response to a merge synchronization signal produced by one of the merge unit 106 and the synchronization unit 108 and received by the graphics processors 104 over synchronization lines 129. Indeed, the plurality of graphics processors 104 preferably synchronously release the frame image data from the respective local frame buffers 112 to the merge unit 106 in response to the merge synchronization signal.
  • [0053] Preferably, the merge synchronization signal is synchronized in accordance with a display protocol (or standard) defining how respective frames of the combined frame image data are to be displayed. These display protocols may include the well-known NTSC protocol, the HDTV protocol, the 35 mm movie protocol, the PAL protocol, etc. The display protocol defines, among other things, the frame rate at which successive frames of the combined frame image data are to be displayed, and a blanking period defining when the frames of the combined frame image data are to be refreshed. For example, the blanking period may dictate how long a given frame of the combined frame image data should dwell on the display prior to refresh (as is the case with the NTSC protocol). Alternatively, the blanking period may dictate when a given frame of the combined frame image data should be removed from the display prior to refresh (as is the case with the 35 mm movie protocol).
  • [0054] By way of further example, when the display screen is a conventional CRT, the display protocol may be the NTSC protocol and the blanking periods may be vertical blanking periods. The NTSC protocol provides that the vertical blanking periods occur on a periodic basis consistent with when a scanning beam of the CRT is de-energized and returned to an upper left-hand portion of the display screen. When the invention is used in the NTSC context, the combined frame image data from the merge unit 106 are preferably ready for display just prior to the end of each vertical blanking period of the CRT display screen.
  • [0055] Although the above example relates to the use of CRT display screens, it is understood that the phrase “blanking period” may be applied in a broader sense, such as representing the blanking periods utilized in cinematography (e.g., 35 mm moving images), or other display protocols.
  • [0056] Advantageously, producing the combined frame image data from the merge unit 106 in synchronous fashion with the blanking period of desired display protocols permits the integration of frame image data from different display protocols and/or from different sources of the frame image data. For example, frame image data from the 35 mm film display protocol may be combined with frame image data produced by way of computer graphics. In a broad sense, therefore, the merge synchronization signal is either derived from or sets a desired frame rate at which the combined frame image data are provided. When the combined frame image data are to be displayed on a CRT display screen, the merge synchronization signal is preferably synchronized with the vertical blanking period (or frame rate) of the CRT display screen. Alternatively, when the combined frame image data are to be merged with or utilized to produce a moving image consistent with 35 mm film cinematography, the merge synchronization signal is preferably synchronized with the frame rate (or blanking period) consistent with that protocol.
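  • By way of a worked example, the merge period implied by two of the protocols named above can be computed directly from their nominal frame rates; the constants are the usual published values, while the program itself is merely illustrative.

```c
/* The merge period implied by two display protocols' nominal frame rates;
 * the constants are the usual published values, the code is illustrative. */
#include <stdio.h>

int main(void)
{
    const double ntsc_fps = 30000.0 / 1001.0; /* ~29.97 Hz NTSC frame rate */
    const double film_fps = 24.0;             /* 35 mm cinematography      */

    /* the merge synchronization signal either follows or sets this period */
    printf("NTSC merge period: %.3f ms\n", 1000.0 / ntsc_fps); /* ~33.367 */
    printf("film merge period: %.3f ms\n", 1000.0 / film_fps); /* 41.667  */
    return 0;
}
```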
  • [0057] Reference is now made to FIG. 4, which is a timing diagram that illustrates the relationship between the operation of the apparatus 100 of FIG. 3 and certain aspects of a display protocol. For example, the relationship between the blanking period, the merge synchronization signal, the rendering period, and the merge period is shown. As illustrated by the top timing waveform, a frame includes an interval during which an image frame is to be displayed or refreshed (indicated by a low logic state) and a period during which an image frame is to be made ready for display, e.g., the blanking period (indicated by a high logic state).
  • [0058] In accordance with at least one aspect of the present invention, the merge synchronization signal 132 includes transitions, such as one of rising and falling edges, that are proximate to the ends 142, 144 of the blanking periods (i.e., the transitions from high logic levels to low logic levels). For example, the merge synchronization signal 132 may include transitions that lead the ends 142, 144 of the blanking periods. Alternatively, the merge synchronization signal 132 may include transitions that are substantially coincident with the ends 142, 144 of the blanking periods. In any case, the merge synchronization signal 132 is preferably synchronized with the frame rate dictated by the display protocol such that the merge unit 106 initiates the production of, or releases, the combined frame image data for display at the end of at least one of the blanking periods.
  • [0059] In order for the local frame buffers 112 to be ready to release the frame image data for display at the ends 142, 144 of the blanking periods, at least some of the graphics processors 104 preferably initiate rendering the image data into the frame buffers 112 prior to the ends 142, 144 of the blanking periods, such as at 136A-D. As shown, four rendering processes are initiated at respective times 136A-D, although any number of rendering processes may be employed using any number of graphics processors 104 without departing from the scope of the invention. Further, the rendering processes in each of the graphics processors 104 may be initiated at the same time or may be initiated at different times. In accordance with at least one aspect of the present invention, the rendering units 110 of the graphics processors 104 are preferably operable to begin rendering the image data into the respective frame buffers 112 asynchronously with respect to the merge synchronization signal 132. For example, the control processor 102 may issue a “begin rendering” command (or trigger) to the graphics processors 104, where the begin rendering command is asynchronous with respect to the merge synchronization signal 132. The begin rendering command may be given any suitable name, such as DRAWNEXT. It is also contemplated that the processing units 114 may initiate rendering as a result of software program execution without specific triggering from the control processor 102. Each (or a group) of the graphics processors 104 may complete the rendering process of the image data into the respective frame buffers 112 at any time (e.g., at 138A-D). It is preferred that at least one of the graphics processors 104 is operable to issue a rendering complete signal to the control processor 102 when it has completed rendering a frame of the frame image data. The rendering complete signal may be given any suitable name, such as DRAWDONE.
  • [0060] In response to the merge synchronization signal 132, the respective graphics processors 104 preferably release the frame image data from the respective local frame buffers 112 to the merge unit 106 such that the merge unit 106 may synchronously produce the combined frame image data therefrom. The combined frame image data are preferably produced by the merge unit 106 during the interval between the end of one blanking period and the beginning of the next blanking period, as shown by the logic transitions labeled 140.
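  • The lock-step release might be pictured as in the following toy simulation, where each merge-sync edge causes every graphics processor to hand its frame to the merge unit; all names are illustrative, not the patent's.

```c
/* Toy simulation of lock-step release: on each merge-sync edge, every
 * graphics processor hands its frame to the merge unit. Illustrative only. */
#include <stdio.h>

#define NUM_GPS 4

static void release_to_merge_unit(int gp, int frame)
{
    printf("GP%d releases frame %d to the merge unit\n", gp, frame);
}

int main(void)
{
    for (int frame = 0; frame < 3; frame++) {
        /* ... merge synchronization signal 132 transitions here ... */
        for (int gp = 0; gp < NUM_GPS; gp++)
            release_to_merge_unit(gp, frame);  /* synchronous release */
        /* ... merge unit produces combined frame image data (140) ... */
    }
    return 0;
}
```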
  • [0061] With reference to FIGS. 3 and 4, one or more of the graphics processors 104 preferably include a respective local synchronization circuit 118 operable to receive the merge synchronization signal 132 over synchronization lines 129 from the synchronization unit 108 or from the merge unit 106. The respective local synchronization circuits 118 are preferably further operable to deliver the synchronization signal to the various portions of the graphics processors 104 to enable them to synchronously release the respective frame image data from the local frame buffers 112 to the merge unit 106. The local synchronization circuits 118 preferably employ counters to count between respective transitions of the merge synchronization signal 132.
  • [0062] In accordance with one or more further aspects of the present invention, the control processor 102 is preferably operable to instruct the respective graphics processors 104 and the merge unit 106 to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers 112, and when the frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data. As discussed above with respect to FIG. 4, one of these modes provides that at least some of the graphics processors 104 complete rendering the image data into the respective frame buffers 112 prior to the end of each blanking period.
  • [0063] Another of these modes preferably provides that one or more of the graphics processors 104 completes rendering the image data into the respective frame buffers 112 prior to the end of an integral number of blanking periods. This mode is illustrated in FIG. 5, which is a timing diagram showing the relationship between the frame rate, the blanking period, the merge synchronization signal 132, the rendering period, and the merge period carried out by the apparatus 100 of FIG. 3. The plurality of graphics processors 104 may initiate the rendering of the image data into the respective local frame buffers 112 (e.g., at 136) either synchronously, asynchronously, or in response to a begin rendering command (DRAWNEXT) from the control processor 102 or the processing units 114, it being most preferred that at least a group of the graphics processors 104 initiate rendering at substantially the same time and at some point during a blanking period. Each of the graphics processors 104 may complete the rendering process at different times, such as at 138A-D.
  • [0064] According to this mode of operation, however, the rendering processes need not be completed prior to the termination of each blanking period; rather, they may be completed prior to the termination of an integral number of blanking periods (such as two blanking periods as shown). Although at least some of the graphics processors 104 may not complete a given rendering process prior to the end 142 of a first blanking period, they may complete the rendering process prior to the end 144 of a second blanking period. Advantageously, in this mode of operation, each of the graphics processors 104 enjoys the benefit of additional time during which to render the image data into the respective local frame buffers 112. In response to the merge synchronization signal 132, the respective graphics processors 104 preferably release the frame image data from the respective local frame buffers 112 to the merge unit 106 at the end of every second blanking period (such as at 144). New image data may be rendered into the local frame buffers 112 beginning at, for example, 146. Although in the example illustrated in FIG. 5 the integral number of blanking periods is two, any integral number may be employed in this mode of operation without departing from the scope of the invention.
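  • The following toy model sketches this mode: rendering spans an integral number N of blanking periods, and the buffers are released only on every N-th merge-sync edge (N = 2 reproduces the example above). The framework is an assumption for illustration.

```c
/* Rendering over an integral number N of blanking periods: buffers are
 * released (and the next render started) only on every N-th sync edge. */
#include <stdio.h>

int main(void)
{
    const int N = 2;   /* integral number of blanking periods, as in FIG. 5 */

    for (int edge = 1; edge <= 8; edge++) {
        if (edge % N == 0)
            printf("edge %d: release frame buffers, begin next render\n",
                   edge);
        else
            printf("edge %d: continue rendering\n", edge);
    }
    return 0;
}
```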
  • [0065] In accordance with yet another mode of operation of the invention, and with reference to FIG. 6A, one or more of the graphics processors 104 may each include an integral number of local frame buffers 112 (e.g., two local frame buffers used in so-called “double buffering”). In a given frame of the display protocol, only one of the integral number of local frame buffers 112 of a given graphics processor 104 need contain a full frame of frame image data and, therefore, that graphics processor 104 need only complete rendering the image data into the respective integral number of local frame buffers 112 prior to the end of a corresponding integral number of blanking periods. For example, a first graphics processor 104 may include two local frame buffers 112A, 112A′ and a second graphics processor 104 may include two more local frame buffers 112B, 112B′. In accordance with this mode of operation, the first graphics processor 104 should complete rendering the image data into local frame buffer 112A prior to the end 142 of a first blanking period (e.g., at 138A) and preferably before a next trigger to begin rendering is received. It is noted that the first graphics processor 104 need not complete rendering the image data into local frame buffer 112A′ until prior to the end 144 of a second blanking period (e.g., at 138A′) and preferably before yet another trigger to begin rendering is received. Indeed, the first graphics processor 104 need not even begin rendering the image data into local frame buffer 112A′ until after the image data are rendered into the first local frame buffer 112A.
  • [0066] Similarly, the second graphics processor 104 should complete rendering the image data into local frame buffer 112B prior to the end 142 of the first blanking period (e.g., at 138B), while the second graphics processor 104 need not complete rendering the image data into local frame buffer 112B′ until prior to the end 144 of the second blanking period (e.g., at 138B′). As frame image data are available for release from the local frame buffer 112A and the local frame buffer 112B prior to the end 142 of the first blanking period, such frame image data are preferably released to the merge unit 106 in response to the merge synchronization signal 132, as shown in FIG. 6A at 140AB. As frame image data are available in each of local frame buffer 112A′ and local frame buffer 112B′ prior to the end 144 of the second blanking period, such frame image data are preferably released to the merge unit 106 in response to the merge synchronization signal 132, as shown at 140A′B′.
  • [0067] With reference to FIG. 6B, an error condition could occur if one or both of the first and second graphics processors 104 fail to complete rendering into one or both of the local frame buffers 112A′, 112B′, respectively, prior to 144. In this case, release of the frame image data from local frame buffer 112A′ (and/or 112B′) to the merge unit 106 is not permitted (i.e., just after 144) and the contents of local frame buffers 112A and 112B are re-released to the merge unit 106 (at 140AB2). When the graphics processors 104 complete rendering into the respective local frame buffers 112A′, 112B′, the frame image data therein may be released at the end of the next blanking period, e.g., at 140A′B′. This advantageously minimizes the effects of the error condition.
  • [0068] Although in the above examples only two graphics processors 104, each containing two local frame buffers 112, have been discussed, the invention contemplates any number of graphics processors 104, each containing any number of local frame buffers 112. It is noted that this mode may also apply to a group of graphics processors 104 each containing only one local frame buffer 112. In this case, each graphics processor 104 would enjoy a time period during which rendering could be performed that corresponds to the number of graphics processors 104 (or local frame buffers 112) participating in the mode of operation. Each graphics processor 104 would complete rendering into a given local frame buffer 112 prior to the end of an integral number of blanking periods that corresponds with the integral number of graphics processors 104 (or local frame buffers 112) participating in the mode of operation. For example, when four graphics processors 104 participate in the mode of operation and each graphics processor 104 includes a single local frame buffer 112, each graphics processor 104 could enjoy a time period during which rendering may be performed that corresponds to four frames of the display protocol.
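  • A buffer-selection sketch for the double-buffered case, including the FIG. 6B fallback, might look as follows; the types and the select_release() helper are hypothetical, introduced only to make the re-release rule concrete.

```c
/* Double buffering with the FIG. 6B fallback: if the next buffer is not
 * complete at the sync edge, the last complete buffer is re-released so
 * that the display never receives a partial frame. Illustrative names. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    int  id;        /* which local frame buffer (e.g., 112A vs. 112A')   */
    bool complete;  /* rendering finished before the blanking period end */
} frame_buffer;

static int select_release(const frame_buffer *next, int last_good)
{
    if (next->complete)
        return next->id;  /* normal case: release the fresh frame      */
    return last_good;     /* error case: re-release the previous frame */
}

int main(void)
{
    frame_buffer a = { 0, true }, a_prime = { 1, false };
    int last = select_release(&a, -1);
    printf("first edge releases buffer %d\n", last);           /* 0       */
    printf("second edge releases buffer %d\n",
           select_release(&a_prime, last));                    /* 0 again */
    return 0;
}
```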
  • [0069] With reference to FIGS. 7A-E, the modes of operation in accordance with one or more aspects of the present invention preferably include at least one of area division, averaging, layer blending, Z-sorting and layer blending, and flip animation. With reference to FIG. 7A, the area division mode of operation preferably provides that at least two of the local frame buffers 112 (as shown, four local frame buffers 112A-D are contemplated) are partitioned into respective rendering areas 120A-D that correspond with respective areas of the display screen that will be covered by the combined frame image data 122, and non-rendering areas 124A-D that are not utilized to carry frame image data. In accordance with the area division mode, an aggregate of the rendering areas 120A-D results in a total rendering area that corresponds with a total area of the display screen that will be covered by the combined frame image data 122. In this mode, the merge unit 106 is preferably operable to synchronously aggregate the respective frame image data from the respective rendering areas 120A-D of the graphics processors 104, based on the known alpha blending technique, to produce the combined frame image data. The local frame buffers 112A-D may be employed by separate graphics processors 104 or may be employed by, and distributed among, a lesser number of graphics processors 104. The area division mode of operation may take advantage of one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5 and 6A, it being preferred that the one or more graphics processors 104 complete rendering the image data into the respective rendering areas 120A-D of the local frame buffers 112A-D prior to the end of each blanking period (for example, see FIG. 4).
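  • The aggregation step of the area division mode might be sketched as below, with four quadrants as in FIG. 7A; the buffer layout and dimensions are assumptions, and a hardware merge unit would perform this selection on the fly rather than by copying.

```c
/* Area-division aggregation over four quadrants: each local frame buffer
 * contributes only its rendering area; together they cover the screen. */
#define W 8
#define H 8

typedef unsigned int pixel_t;

/* copy the (ox, oy) quadrant of src into the combined frame */
static void merge_quadrant(pixel_t dst[H][W], pixel_t src[H][W],
                           int ox, int oy)
{
    for (int y = oy; y < oy + H / 2; y++)
        for (int x = ox; x < ox + W / 2; x++)
            dst[y][x] = src[y][x];   /* non-rendering areas are ignored */
}

int main(void)
{
    static pixel_t combined[H][W], gp[4][H][W];
    merge_quadrant(combined, gp[0], 0,     0);      /* upper left  (120A) */
    merge_quadrant(combined, gp[1], W / 2, 0);      /* upper right (120B) */
    merge_quadrant(combined, gp[2], 0,     H / 2);  /* lower left  (120C) */
    merge_quadrant(combined, gp[3], W / 2, H / 2);  /* lower right (120D) */
    return 0;
}
```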
  • [0070] With reference to FIG. 7B, the averaging mode of operation preferably provides that the local frame buffers 112 of at least two of the graphics processors 104 (as shown, four local frame buffers 112A-D are contemplated) include rendering areas 120A-D that each correspond with the total area covered by the combined frame image data 122. The averaging mode further provides that the merge unit 106 averages the respective frame image data from the local frame buffers 112 to produce the combined frame image data for display. The averaging process may include at least one of scene anti-aliasing, alpha-blending (e.g., scene-by-scene), weighted averaging, etc., as are well within the knowledge of those skilled in the art. The averaging mode of operation may employ one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5, and 6A, it being preferred that each of the graphics processors participating in this mode of operation complete rendering the image data into the respective rendering areas 120A-D of the local frame buffers 112A-D prior to the end of each blanking period (for example, see FIG. 4).
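  • A uniform per-pixel average, the simplest of the averaging variants named above, might be sketched as follows (non-uniform weights would give the weighted-average case); single-channel intensities are assumed for brevity, whereas real pixel data would be averaged per color channel.

```c
/* Uniform per-pixel averaging of four full-screen sources; real pixel
 * data would be averaged per color channel, and non-uniform weights
 * would give the weighted-average variant. */
#define W 4
#define H 4
#define SOURCES 4

static void merge_average(unsigned dst[H][W], unsigned src[SOURCES][H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            unsigned sum = 0;
            for (int s = 0; s < SOURCES; s++)
                sum += src[s][y][x];
            dst[y][x] = sum / SOURCES;   /* uniform average */
        }
}

int main(void)
{
    static unsigned combined[H][W], sources[SOURCES][H][W];
    merge_average(combined, sources);
    return 0;
}
```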
  • [0071] With reference to FIG. 7C, the layer blending mode preferably provides that at least some of the image data are rendered into the local frame buffers 112 (as shown, four such local frame buffers 112A-D are contemplated) of at least two of the graphics processors 104 such that each of the local frame buffers 112A-D includes frame image data representing at least a portion of the combined frame image data. For example, each portion of the combined frame image data may be an object of the final image. The layer blending mode preferably further provides that each of the portions of the combined frame image data are prioritized, such as by Z-data representing depth. The layer blending mode preferably further provides that the merge unit 106 is operable to synchronously produce the combined frame image data by layering each of the frame image data of the respective local frame buffers 112A-D in an order according to the priority thereof. For example, layering a second frame of frame image data over a first frame of frame image data may result in overwriting some of the first frame image data, depending on the priority of the layers. When the priority of the layers corresponds with the relative depth of the portions of the combined frame image data (e.g., objects of the image), one layer of frame image data may be designated as being nearer to a point of view than another layer of frame image data. Therefore, an illusion of depth may be created by causing the layer of frame image data designated as being nearer to the point of view to overwrite at least portions of a layer of frame image data designated as being farther from the point of view (for example, see FIG. 7C at 122). The layer blending mode of operation may employ one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5, and 6A, it being preferred that the graphics processors 104 participating in the layer blending mode complete rendering the image data into the respective local frame buffers 112A-D prior to the end of each blanking period.
  • [0072] With reference to FIG. 7D, the Z-sorting and layer blending mode preferably provides that at least some of the image data are rendered into the local frame buffers (as shown, four such local frame buffers 112A-D are contemplated) of at least two of the graphics processors 104 such that the local frame buffers 112A-D include frame image data representing at least a portion of the combined frame image data. As discussed above with respect to the layer blending mode, the portions of the combined frame image data may represent objects of the image to be displayed. The Z-sorting and layer blending mode preferably further provides that the frame image data include Z-values representing image depth (e.g., on a pixel-by-pixel basis) and that the merge unit 106 is operable to synchronously produce the combined frame image data by Z-sorting and layering each of the frame image data in accordance with the image depth. Advantageously, as shown in FIG. 7D, some portions of the combined frame image data, such as those stored in the local frame buffer 112A, may overwrite other portions of the frame image data, such as those stored in local frame buffer 112B, assuming that the relative depths dictated by the Z-values provide for such overwriting. The Z-sorting and layer blending mode may employ one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5, and 6A, wherein it is preferred that the graphics processors 104 complete rendering the image data into the respective local frame buffers 112 prior to the end of each blanking period (for example, see FIG. 4).
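  • The Z-sorting and layering rule reduces, per pixel, to letting the source with the smallest Z value (nearest the viewpoint) win, as the following sketch shows; the zpixel layout is an illustrative assumption.

```c
/* Per-pixel Z-sorted layering: the source nearest the viewpoint (smallest
 * Z here) wins, so near layers overwrite far ones. Layout is assumed. */
#include <float.h>

#define W 4
#define H 4
#define SOURCES 4

typedef struct { unsigned color; float z; } zpixel;

static void merge_z(zpixel dst[H][W], zpixel src[SOURCES][H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            zpixel best = { 0, FLT_MAX };   /* background: infinitely far */
            for (int s = 0; s < SOURCES; s++)
                if (src[s][y][x].z < best.z)
                    best = src[s][y][x];    /* nearer layer overwrites */
            dst[y][x] = best;
        }
}

int main(void)
{
    static zpixel combined[H][W], sources[SOURCES][H][W];
    merge_z(combined, sources);  /* real data would carry rendered depths */
    return 0;
}
```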
  • [0073] With reference to FIG. 7E, the flip animation mode preferably provides that the local frame buffers 112 (as shown, four local frame buffers 112A-D are contemplated) of at least two graphics processors 104 include frame image data that are capable of covering the total area covered by the combined frame image data, and that the merge unit 106 is operable to produce the combined frame image data by sequentially releasing the respective frame image data from the local frame buffers 112 of the graphics processors 104. The flip animation mode may employ one or more of the timing relationships discussed hereinabove with respect to FIGS. 4, 5, and 6A, it being preferred that the graphics processors 104 complete rendering the image data into the respective frame buffers 112A-D prior to the ends of an integral number of blanking periods, where the integral number of blanking periods corresponds to the number of graphics processors 104 participating in the flip animation mode (for example, see FIGS. 5 and 6A). Alternatively, the integral number of blanking periods may correspond to the number of local frame buffers 112 participating in the flip animation mode.
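  • Flip animation reduces to forwarding one source's full frame per display frame, cycling through the participating buffers, as this toy loop illustrates.

```c
/* Flip animation in outline: the merge unit forwards one source's full
 * frame per display frame, cycling through the participating buffers. */
#include <stdio.h>

int main(void)
{
    const int sources = 4;   /* local frame buffers 112A-D */

    for (int frame = 0; frame < 8; frame++)
        printf("display frame %d <- local frame buffer %c\n",
               frame, 'A' + frame % sources);
    return 0;
}
```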
  • [0074] With reference to FIG. 8, an example of one circuit or functional configuration suitable for implementing the merge unit 106 employed by the invention is shown. The merge unit 106 preferably includes a scissoring block 202, an alpha-test block 204, a Z-sort block 206, and an alpha-blending block 208. The scissoring block 202 preferably includes a plurality of sub-blocks 202A, 202B, 202C, 202D, etc., one such sub-block for each of the frame image data (or paths) received. The scissoring block 202 in total, and the sub-scissoring blocks 202A-D in particular, are preferably operable to validate certain of the frame image data and to invalidate other portions of the frame image data (e.g., validating a portion of the frame image data corresponding to a given rectangular area for use in producing the combined frame image data). This functionality is particularly useful when operating in the area division mode (see FIG. 7A and the corresponding discussion hereinabove).
  • [0075] The output of the scissoring block 202 is received by the alpha-test block 204, where certain of the frame image data are invalidated based on a comparison between alpha values of the frame image data and a constant. The alpha-test block 204 preferably includes a plurality of sub-blocks 204A, 204B, 204C, 204D, etc., one such sub-block for each of the frame image data (or paths) received. The functionality of the alpha-test block 204 is particularly useful when the frame image data include useful information concerning one or more objects utilized in producing the combined frame image data, but where other portions of the frame image data do not contain useful information. For example, if the frame image data are intended to represent a tree (i.e., an object utilized in the final image displayed), the alpha values associated with the pixels of the tree may be of sufficient magnitude to indicate that the data are valid, but the pixels of the frame image data associated with areas other than the tree may be unimportant and unusable and, therefore, advantageously discarded.
  • [0076] The Z-sort block 206 preferably performs Z-sorting of the frame image data on a pixel-by-pixel basis. Alternatively, the Z-sort block 206 may perform Z-sorting on a frame-image-data basis, e.g., selecting one of the four sources (or paths) of frame image data as representing the nearest image, selecting a next one of the sources of frame image data as representing the next nearest image, etc. The alpha-blend block 208 preferably performs alpha-blending according to known formulas, such as (1 − α) · pixel(n−1) + α · pixel(n), where α is the alpha value of the incoming pixel, pixel(n) is the incoming pixel, and pixel(n−1) is the previously blended result. As shown, alpha-blending is preferably performed among each of the frame image data received.
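  • One pixel's path through this pipeline might be traced as follows: scissoring invalidates pixels outside a rectangle, the alpha test drops pixels whose alpha falls below a constant, and each surviving source is folded in far-to-near with the blend formula above. The frag type and the fixed source ordering are simplifying assumptions (a real Z-sort block 206 would order the sources by depth).

```c
/* A floating-point sketch of one pixel's path through the FIG. 8 pipeline;
 * data types and ordering are assumptions, not the hardware datapath. */
#include <stdio.h>

typedef struct { float color, a, z; int x, y, valid; } frag;

static void scissor(frag *f, int x0, int y0, int x1, int y1)
{
    if (f->x < x0 || f->x >= x1 || f->y < y0 || f->y >= y1)
        f->valid = 0;                              /* block 202 */
}

static void alpha_test(frag *f, float threshold)
{
    if (f->a < threshold)
        f->valid = 0;                              /* block 204 */
}

int main(void)
{
    /* two sources for one screen position; index 0 is the farther one
       (a real Z-sort block 206 would order them by z) */
    frag src[2] = {
        { 0.8f, 1.0f, 5.0f, 2, 2, 1 },  /* far, opaque       */
        { 0.2f, 0.5f, 1.0f, 2, 2, 1 },  /* near, half-opaque */
    };
    float out = 0.0f;

    for (int i = 0; i < 2; i++) {
        scissor(&src[i], 0, 0, 4, 4);
        alpha_test(&src[i], 0.1f);
        if (src[i].valid)               /* block 208: (1-a)*prev + a*cur */
            out = (1.0f - src[i].a) * out + src[i].a * src[i].color;
    }
    printf("combined pixel: %.3f\n", out);  /* (1-0.5)*0.8 + 0.5*0.2 = 0.500 */
    return 0;
}
```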
  • [0077] With reference to FIGS. 3, 4, and 9, the high-level flow diagram of FIG. 9 provides additional details concerning one or more further aspects of the apparatus 100 of the present invention. Actions 170 and 172 relate to initializing the plurality of graphics processors 104, the merge unit 106, and the synchronization unit 108. In particular, at action 170, the control processor 102 preferably transmits program data and format data to the graphics processors 104. In accordance with the invention, the program data and the format data preferably imbue the graphics processors 104 with the necessary capability to carry out one or more of the modes of operation discussed hereinabove with respect to FIGS. 4-7.
  • [0078] The program data, for example, may include at least portions of one or more software application programs used by the graphics processors 104 to process the image data. Such software application programs may facilitate certain processing functions, such as polygonal spatial transformation, 3D-2D conversion, magnification/reduction (i.e., scaling), rotation, lighting, shading, coloring, etc. In general, the program data preferably facilitate the production of the sequence instructions, such as a sequence of polygonal instructions produced from the image data and suitable for rendering into the local frame buffers 112. Preferably, the program data are transmitted to the plurality of graphics processors 104 by way of the bus 126.
  • [0079] The format data preferably relate to at least one of the modes of operation (e.g., the timing relationships, area division, averaging, layer blending, Z-sorting and layer blending, flip animation, etc.) and to display protocol information, such as the NTSC protocol, the HDTV protocol, the 35 mm film cinematography protocol, etc. More particularly, the display protocol information may include information concerning the desired frame rate, the blanking period, the display size, the display aspect ratio (e.g., the ratio of the display height to width), the image resolution, etc. In general, the format data ensure that the graphics processors 104 are configured to execute a particular mode or modes of operation before being called upon to render the image data or release frame image data to the merge unit 106. Preferably, the format data are transmitted to the plurality of graphics processors 104 by way of the bus 126.
  • [0080] At action 172, initial setup data are transmitted to the merge unit 106. The initial setup data preferably include a substantially similar set of data as the format data. In this way, the graphics processors 104 and the merge unit 106 are configured to execute the same mode or modes of operation. Preferably, the initial setup data are transmitted to the merge unit 106 by way of the bus 126.
  • [0081] The graphics processors 104 preferably transmit system ready signals to at least one of the respective local synchronization circuits 118 and the control processor 102 when they have initialized themselves consistent with the program data and the format data (action 174). The system ready signals may be given any suitable name, such as SYSREADY. The system ready signals are preferably transmitted to the control processor 102 by way of the bus 126.
  • [0082] The synchronization unit 108 preferably periodically transmits the merge synchronization signal 132 to the control processor 102 and to the graphics processors 104. Alternatively, the merge unit 106 periodically transmits the merge synchronization signal 132 to the graphics processors 104. In either case, the merge synchronization signal 132 is synchronized to a desired frame rate and blanking period associated with the desired display protocol (action 176). The merge synchronization signal 132 is preferably transmitted to the graphics processors 104 by way of the dedicated synchronization lines 129 (or by way of the merge unit 106) and to the control processor 102 by way of the bus 126. When the graphics processors 104 include a local synchronization circuit 118, they preferably count a predetermined number of clock signals (or cycles) between transitions of the merge synchronization signal 132 to ensure that the frame image data are released at the proper time (action 178).
  • [0083] At action 180, the control processor 102 preferably transmits a begin rendering command (or trigger), DRAWNEXT, indicating that the graphics processors 104 should initiate the rendering of the image data into the respective local frame buffers 112. This trigger may be transmitted to the graphics processors 104 over the bus 126, it being preferred that the trigger is issued through the synchronization unit 108 to the respective local synchronization circuits 118 of the graphics processors 104. As discussed above, however, the graphics processors 104 need not receive an explicit trigger from the control processor 102 if an application software program running locally on the processing units 114 provides a begin rendering command itself. Certain advantages, however, are achieved when the control processor 102 issues a trigger to at least some of the graphics processors 104 indicating when the graphics processors 104 should begin rendering the image data into the respective local frame buffers 112. At action 182, the respective graphics processors 104 perform the rendering function on the image data to produce the frame image data and store the same in the respective local frame buffers 112.
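  • The handshake of actions 170-190 might be summarized as the following toy message sequence; the SYSREADY, DRAWNEXT, and DRAWDONE names come from the text above, while the surrounding framework is illustrative.

```c
/* Toy trace of the FIG. 9 handshake: configuration, per-processor
 * SYSREADY, a DRAWNEXT trigger, rendering with DRAWDONE, then release
 * and merge on the next merge-sync edge. Illustrative framework only. */
#include <stdio.h>

#define NUM_GPS 4

int main(void)
{
    puts("control: send program data + format data");   /* actions 170/172 */
    for (int gp = 0; gp < NUM_GPS; gp++)
        printf("GP%d: SYSREADY\n", gp);                  /* action 174 */
    puts("control: DRAWNEXT");                           /* action 180 */
    for (int gp = 0; gp < NUM_GPS; gp++)
        printf("GP%d: render, then DRAWDONE\n", gp);     /* actions 182/186 */
    puts("merge sync edge: release + merge");            /* actions 188/190 */
    return 0;
}
```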
  • [0084] At action 184, the control processor 102 may transmit subsequent setup data to the graphics processors 104 and the merge unit 106, for example, to change the mode of operation from frame to frame.
  • [0085] At action 186, at least one of the graphics processors 104 preferably issues a signal to the control processor 102 indicating that rendering is complete (e.g., DRAWDONE). The rendering complete signal may be transmitted to the control processor 102 by way of the bus 126, it being preferred that the signal is transmitted through the local synchronization circuit 118 to the synchronization unit 108 prior to being delivered to the control processor 102.
  • [0086] At action 188, the appropriate graphics processors 104 release the frame image data from the respective local frame buffers 112 to the merge unit 106 (e.g., when the merge synchronization signal 132 indicates that the end of an appropriate blanking period has been reached), and the merge unit 106 produces the combined frame image data for display (action 190). At least some of these processes are repeated frame by frame to produce a high quality moving image for display on the display screen.
  • [0087] Advantageously, the apparatus 100 readily lends itself to scalability when additional data throughput is required. Referring now to FIG. 10, an apparatus 200 for processing image data to produce an image for display in accordance with one or more further aspects of the present invention is shown. The apparatus 200 includes a control processor 102, a plurality of graphics apparatuses 100A-D, a core merge unit 106N, and a core synchronization unit 108N. The respective graphics apparatuses 100A, 100B, 100C, etc. of FIG. 10 are preferably substantially similar to the apparatus 100 shown in FIG. 3. The merge unit 106 of FIG. 3 corresponds to the respective local merge units 106A, 106B, 106C, etc. that are coupled to respective sets of graphics processors 104. The local merge units 106A, 106B, 106C, etc. each preferably produce local combined frame image data consistent with the modes of operation discussed hereinabove. Each of the local merge units 106A, 106B, 106C, etc. is preferably coupled to the core merge unit 106N, which is operable to synchronously receive the local combined frame image data and to synchronously produce the combined frame image data to be displayed.
  • [0088] The core merge unit 106N is preferably operable to produce the merge synchronization signal 132, and each apparatus 100A, 100B, 100C, etc. preferably includes a local synchronization unit 108A, 108B, 108C, etc., respectively (corresponding to the synchronization unit 108 of FIG. 3). The respective local synchronization units 108A, 108B, 108C, etc. preferably utilize the merge synchronization signal 132 to ensure that the frame image data are synchronously released to each of the local merge units 106A, 106B, 106C, etc. and ultimately released to the core merge unit 106N.
  • [0089] Preferably, the respective local frame buffers 112 of the respective groups of graphics processors 104A-D are operatively coupled to the respective local merge units 106A-D by way of separate data lines 127 such that exclusive communication between the respective local frame buffers and the respective local merge units is obtained (e.g., such that exclusive transmission of the frame image data to the respective local merge units 106A, 106B, 106C, etc. is obtained). Preferably, separate data lines 127A-D couple the respective local merge units 106A-D to the core merge unit 106N such that exclusive communication therebetween is obtained (e.g., such that exclusive transmission of the local combined frame image data from the respective local merge units 106A-D to the core merge unit 106N is obtained). Further, separate synchronization lines preferably couple the respective local synchronization units 108A-D to the core synchronization unit 108N such that exclusive communication therebetween is obtained. Although not explicitly shown, a bus substantially similar to the bus 126 of FIG. 3 preferably extends from the control processor 102 to each of the graphics processors 104A, 104B, 104C, etc., the core merge unit 106N, and the core synchronization unit 108N.
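  • The two-level merge might be pictured in miniature as below, where each local merge unit combines its processors' frames and the core merge unit combines the four local results; merge4() is a placeholder for whichever merge mode is configured, shown here as a sum over disjoint (area-divided) contributions.

```c
/* Miniature of the FIG. 10 hierarchy: four local merge units feed one
 * core merge unit. merge4() stands in for any configured merge mode. */
#include <stdio.h>

static int merge4(const int in[4])
{
    return in[0] + in[1] + in[2] + in[3]; /* sum of disjoint contributions */
}

int main(void)
{
    int frames[4][4] = { {1,0,0,0}, {0,2,0,0}, {0,0,3,0}, {0,0,0,4} };
    int local[4];

    for (int u = 0; u < 4; u++)
        local[u] = merge4(frames[u]);        /* local merge units 106A-D */
    printf("combined: %d\n", merge4(local)); /* core merge unit 106N: 10 */
    return 0;
}
```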
  • [0090] As was discussed above with respect to FIG. 9, the apparatus 200 preferably operates according to the high-level flow diagram shown such that one or more of the modes of operation (see FIGS. 4-7) may be carried out, preferably on a frame-by-frame basis. More particularly, the control processor 102 is preferably operable to transmit subsequent setup data to the respective apparatuses 100A-D, e.g., to the graphics processors 104, the local merge units 106A-D, and the core merge unit 106N, on a frame-by-frame basis. Preferably, the core merge unit 106N and the respective local merge units 106A-D are operatively coupled to the control processor 102 by way of separate control data lines 131 such that exclusive communication between the control processor 102, the core merge unit 106N, and the respective local merge units 106A-D, at least concerning the instructions to operate in different modes from one frame to the next, is obtained.
  • [0091] With reference to FIG. 11, each of the graphics processors 104 is preferably implemented consistent with the block diagram shown. The salient functions of rendering 110, frame buffering 112, processing 114, etc. discussed above are labeled as such, it being understood that the specific details concerning the operation of this configuration may be found in co-pending U.S. patent application Ser. No. 09/502,671, filed Feb. 11, 2000, entitled GAME MACHINE WITH GRAPHICS PROCESSOR, assigned to the assignee of the instant invention, the entire disclosure of which is hereby incorporated by reference.
  • [0092] Reference is now made to FIG. 12, which illustrates an apparatus 300 for processing image data to produce an image for display in accordance with one or more further aspects of the present invention. The apparatus 300 preferably includes a control processor 302, a plurality of graphics processors 304, and a merge unit 306. The control processor 302 is preferably operable to provide instructions to the plurality of graphics processors 304. The control processor 302 preferably includes a master timing generator 308, a controller 310, and a counter 312. The master timing generator 308 is preferably operable to produce a synchronization signal that is synchronized with blanking periods of a display protocol, such as one or more of the display protocols discussed hereinabove. The controller 310 is preferably operable to provide instructions to the plurality of graphics processors 304 similar to the instructions that the control processor 102 of FIG. 3 provides to the graphics processors 104 of the apparatus 100. In addition, the control processor 302 preferably provides reset signals to the plurality of graphics processors 304 (utilized in combination with the synchronization signal) to synchronize the release of frame image data to the merge unit 306, as will be discussed in more detail below.
  • [0093] Each of the graphics processors 304 preferably includes a rendering unit 314 operable to render image data into frame image data and to store the frame image data in a respective local frame buffer (not shown). Indeed, each graphics processor 304 preferably includes at least some of the functional blocks employed in the graphics processors 104 of FIG. 3. In addition, each graphics processor 304 preferably includes a synchronization counter 318 operable to produce a local synchronization count indicating when the frame image data should be released from the local frame buffer. The respective synchronization counters 318 preferably increment or decrement their respective synchronization counts based on the synchronization signal from the timing generator 308. The graphics processors 304 are preferably operable to release their respective frame image data to the merge unit 306 when their respective local synchronization counts reach a threshold. The reset signals issued by the controller 310 of the control processor 302 are preferably utilized to reset the synchronization counters 318 of the respective graphics processors 304. In this way, the control processor 302 is operable to manipulate the timing of the release of the frame image data from the respective rendering units 314. Thus, the apparatus 300 may facilitate the modes of operation discussed hereinabove with respect to FIGS. 4-7, e.g., the timing relationships, area division, averaging, layer blending, Z-sorting and layer blending, and flip animation.
  • [0094] With reference to FIG. 13A, timing relationships between the synchronization signal issued by the timing generator 308, the reset signal, and a respective one of the synchronization counters 318 are illustrated. In particular, the synchronization counter 318 increments (or decrements) on transitions of the synchronization signal. The reset signal, however, resets the synchronization counter 318 to begin counting from a predetermined level, such as zero. With reference to FIG. 13B, an alternative configuration may be employed to achieve finer timing accuracy. In particular, the synchronization counters 318 may be substituted by, or operate in conjunction with, sub-synchronization counters (not shown), and the timing generator 308 may produce a sub-synchronization signal operating at a higher frequency than the synchronization signal. The sub-synchronization counters preferably increment or decrement in response to transitions of the sub-synchronization signal and are reset upon receipt of the reset signal from the control processor 302. Using this arrangement, the graphics processors 304 are preferably operable to release their respective frame image data to the merge unit 306 when the respective sub-synchronization counters reach a threshold (which would be higher than the threshold employed using the timing of FIG. 13A).
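  • The counter behavior of FIG. 13A might be simulated as follows: the count advances on each synchronization-signal transition, a reset returns it to zero (shifting the release phase), and the frame is released when the count reaches a threshold. All numbers are illustrative; the sub-synchronization variant of FIG. 13B would be the same loop run at the higher sub-synchronization frequency, with a correspondingly higher threshold.

```c
/* Sketch of a synchronization counter 318: it advances on each sync
 * transition, a reset returns it to zero, and the frame is released when
 * it reaches a threshold. Threshold and reset timing are assumed. */
#include <stdio.h>

int main(void)
{
    const int threshold = 4;   /* release every 4th transition (assumed) */
    int count = 0;

    for (int edge = 0; edge < 12; edge++) {
        if (edge == 6)
            count = 0;          /* reset signal from the controller 310 */
        if (++count >= threshold) {
            printf("edge %d: release frame image data\n", edge);
            count = 0;
        }
    }
    return 0;
}
```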
  • [0095] With reference to FIGS. 10 and 12, the apparatus 300 may be scaled in a substantially similar way as shown in FIG. 10, i.e., the plurality of graphics processors 304 are grouped into respective sets, each set being coupled with a local merge unit, and the respective local merge units being coupled to a core merge unit. The core merge unit is preferably operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon.
  • [0096] Reference is now made to FIG. 14, which is a block diagram of an apparatus 350 that operates in accordance with one or more further aspects of the present invention. In some respects, the apparatus 350 is substantially similar to the apparatus 100 of FIG. 3. For example, the apparatus 350 includes a plurality of graphics processors 104, one or more of which are preferably operable to render image data into frame image data and to store the frame image data in a respective local frame buffer. Indeed, the graphics processors 104 of the apparatus 350 are preferably substantially similar to the graphics processors 104 of FIG. 3. The apparatus 350 also includes at least one merge unit 106 operatively coupled to the plurality of graphics processors 104 and operable to synchronously receive the frame image data from the respective local frame buffers of the graphics processors 104 and to synchronously produce combined frame image data based thereon. The apparatus 350 also includes a control processor 102 operatively coupled to the graphics processors 104 and the merge unit 106. It is most preferred that the portions of the apparatus 350 discussed to this point are substantially similar to those of FIG. 3 and, further, that the functionality of the apparatus 350 is substantially similar to the functionality of the apparatus 100 of FIG. 3.
  • [0097] The apparatus 350 also preferably includes a video transfer hub (VTH, or simply video hub) 352 operatively coupled to the output of the merge unit 106, i.e., operable to receive at least one frame of the combined frame image data. The video hub 352 is preferably further operable to receive at least one externally provided frame of frame image data 356. The video hub 352 is preferably coupled to the plurality of graphics processors 104 and the control processor 102 by way of the bus 126. The apparatus 350 also preferably includes a capture memory 354 operatively coupled to the video hub 352 and capable of receiving, storing, and subsequently releasing either or both of the at least one frame of combined frame image data and the at least one frame of external frame image data 356.
  • [0098] Upon command by one or more of the graphics processors 104, or by the control processor 102, the video hub 352 preferably facilitates the transmission of the at least one frame of the combined frame image data and/or the at least one frame of external frame image data 356 to one or more of the graphics processors 104. The one or more graphics processors 104 may then utilize the frame image data transmitted by the video hub 352 to produce a successive frame or frames of frame image data. Moreover, the merge unit 106 may receive these frame(s) of frame image data and produce a next frame of the combined frame image data based thereon.
  • [0099] By way of example, the video hub 352 may be utilized to combine frame image data from an external source (e.g., from film that has been digitized, from a separate graphics processing unit, from a source of streaming video, etc.) with one or more frames of frame image data being processed by the apparatus 350. This functionality is particularly advantageous when images from film, such as 35 mm cinematography, are to be combined with digitally generated images to achieve desirable special effects in the final moving image.
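  • The feedback path through the video hub might be outlined as follows; capture() and feed_back() are hypothetical stand-ins for the hub's store and release operations against the capture memory 354.

```c
/* Outline of the video hub's feedback path: a combined or external frame
 * is captured, then handed back as input for a later frame. Hypothetical
 * interface names; the frame size is an assumption. */
#include <string.h>

#define FRAME_PIXELS (640 * 480)

static unsigned capture_memory[FRAME_PIXELS];  /* capture memory 354 */

static void capture(const unsigned *frame)     /* video hub 352 stores...  */
{
    memcpy(capture_memory, frame, sizeof capture_memory);
}

static const unsigned *feed_back(void)         /* ...and releases later    */
{
    return capture_memory;
}

int main(void)
{
    static unsigned combined[FRAME_PIXELS];    /* e.g., from merge unit 106 */
    capture(combined);                         /* store one frame           */
    const unsigned *next_input = feed_back();  /* reuse in a later render   */
    (void)next_input;
    return 0;
}
```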
  • [0100] As the apparatus 350 is capable of synchronizing the merging function performed by the merge unit 106 in accordance with a display protocol, such as the display protocol of the external frame image data 356, high quality combined images may be achieved. Furthermore, as the apparatus 350 may be readily scaled, very high data throughput may be achieved to match or exceed the image quality standards of external images, such as film. It is noted that the apparatus 350 of FIG. 14 may be scaled in a substantially similar way as the apparatus 100 of FIG. 3, as discussed hereinabove with respect to FIG. 10. Indeed, one or more video hubs 352 (and one or more capture memories 354) may be employed in a circuit topology substantially similar to that of FIG. 10 to achieve higher data throughput.
  • [0101] Reference is now made to FIG. 15, which illustrates an apparatus 360 suitable for use in accordance with one or more further aspects of the present invention. The apparatus 360 preferably includes several functional and/or circuit blocks that are substantially similar to those of FIG. 3, namely, a plurality of graphics processors 104 coupled to at least one merge unit 106 and to a control processor 102. Indeed, the apparatus 360 is preferably substantially similar to the apparatus 100 of FIG. 3 with respect to these functional blocks. The apparatus 360, however, further includes a memory transfer hub (MTH, or simply memory hub) 362 operable to receive (for example, over the bus 126) and store common image data. A memory 364 is preferably operatively coupled to the memory hub 362 such that the common image data may be stored in, and retrieved from, the memory 364. The common image data may take any form, such as texture data used when polygons are rendered into the respective local frame buffers. It is noted that such texture data often occupy a relatively large portion of memory, particularly when they have been decompressed. In order to reduce the amount of memory required at each of the plurality of graphics processors 104, the common image data (such as texture data) are preferably centrally stored in the memory 364. When one or more of the graphics processors 104 require the common image data, the memory hub 362 preferably retrieves the common image data from the memory 364 and transfers those data to the one or more graphics processors 104.
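  • Central storage of common image data might be outlined as below, with one copy of a texture served on demand from the memory 364 instead of being replicated at every processor; the fetch interface and the texture size are assumptions.

```c
/* Outline of central texture storage: one copy lives behind the memory
 * hub and is copied out on demand. Interface and sizes are assumed. */
#include <stdio.h>
#include <string.h>

#define TEX_BYTES 4096                            /* assumed texture size */

static unsigned char shared_texture[TEX_BYTES];   /* memory 364 */

static void memory_hub_fetch(unsigned char *dst)  /* memory hub 362 */
{
    memcpy(dst, shared_texture, TEX_BYTES);
}

int main(void)
{
    unsigned char local_copy[TEX_BYTES];  /* requesting processor's buffer */
    memory_hub_fetch(local_copy);
    printf("fetched %d texture bytes from the hub\n", TEX_BYTES);
    return 0;
}
```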
  • [0102] It is noted that the memory hub 362 and the associated memory 364 may be utilized in a scaled system, such as that shown in FIG. 10 and/or FIG. 14, depending on the exigencies of the application.
  • [0103] Reference is now made to FIG. 16, which is a block diagram illustrating an apparatus 400 suitable for use in accordance with one or more further aspects of the present invention. In many respects, the apparatus 400 is substantially similar to the apparatus 200 of FIG. 10. In particular, the apparatus 400 includes a control processor 102, a plurality of graphics apparatuses 100A-D, and a core merge unit 106N. Notably, the apparatus 400 does not require a core synchronization unit 108N as did the apparatus 200 of FIG. 10. The functionality of these functional and/or circuit blocks is substantially similar to that discussed hereinabove with respect to FIGS. 3 and 10 and, therefore, will not be repeated here.
  • [0104] The core merge unit 106N of the apparatus 400 preferably includes a bus controller function 402 operatively coupled between the bus 126 and a control data bus 404, where the control data bus 404 communicates with the local merge units 106A-D and the core merge unit 106N. The local merge units 106A-D also preferably include a bus controller function 402. Although the bus controller functions 402 are shown as being implemented within the core merge unit 106N and within the local merge units 106A-D, it is understood that these functions may be separate from the core merge unit 106N and the local merge units 106A-D without departing from the scope of the invention. It is noted that the respective local merge units 106A-D are operatively coupled to the core merge unit 106N by way of the separate data lines 127A-D such that exclusive transmission of the local combined frame image data from the respective local merge units 106A-D to the core merge unit 106N is obtained.
  • [0105] As discussed hereinabove with respect to FIG. 9, the apparatus 400, like the apparatus 200, preferably operates according to the high-level flow diagram of FIG. 9 such that one or more of the modes of operation (see FIGS. 4-7) may be carried out, preferably on a frame-by-frame basis. More particularly, the control processor 102 is preferably operable to transmit subsequent setup data to the respective apparatuses 100A-D on a frame-by-frame basis, where the setup data establish the mode of operation of the graphics processors 104 and the merge units 106. The control processor 102 preferably communicates at least the instructions concerning the one or more modes of operation (i.e., the setup data) to the respective local merge units 106A-D and the core merge unit 106N by way of the bus controller functions 402 and the control data bus 404.
  • [0106] As discussed above with respect to the apparatus 200 of FIG. 10 and the apparatus 100 of FIG. 3, the core merge unit 106N is preferably operable to produce the merge synchronization signal 132 to facilitate the synchronous release of the frame image data, the release of the local combined frame image data, and the production of the combined frame image data. The core merge unit 106N may transmit the merge synchronization signal 132 to the respective graphics processors 104 by way of the control data bus 404 and the data bus 126. In order to ensure that the merge synchronization signal 132 is transmitted quickly and received promptly by the apparatuses 100A-D, the bus controller function 402 is preferably operable to seize control of the data bus 126 upon receipt of the merge synchronization signal 132 over the control data bus 404 and to transmit the merge synchronization signal 132 to the respective apparatuses 100A-D on a priority basis. Thus, the bus controller function 402 provides the function of a master bus controller. Alternatively, the core merge unit 106N may transmit the merge synchronization signal 132 to the respective graphics processors 104 by way of the control data bus 404 and the local merge units 106A-D.
  • [0107] In accordance with the one or more aspects of the invention embodied in the apparatus 400 of FIG. 16, the advantageous elimination of separate control data lines between the control processor 102, the local merge units 106A-D, and the core merge unit 106N in favor of a common control data bus 404 is achieved. Thus, scalability of the overall apparatus is simplified and freedom to partition the functional blocks of the apparatus in different ways is obtained.
  • [0108] Reference is now made to FIG. 17, which is a block diagram illustrating an apparatus 500 suitable for use in accordance with one or more further aspects of the present invention. In many ways, the apparatus 500 of FIG. 17 is substantially similar to the apparatuses of FIGS. 10 and 16, as will be readily apparent to one skilled in the art when the common functional and/or circuit blocks are noted, such as the plurality of graphics processing apparatuses 100A-D, the plurality of graphics processors 104, the plurality of local merge units 106A-D, the core merge unit 106N, and the control processor 102. Accordingly, details concerning these functional and/or circuit blocks will be omitted for purposes of clarity.
  • [0109] The apparatus 500 is preferably operable to function using a “connectionless network” in which the data transmitted between certain functional and/or circuit blocks are routed over a packet-switched network. The data are organized into packets in accordance with a transmission control protocol (TCP) layer of the packet-switched network. Each packet includes an identification number identifying it as being associated with a particular group of packets and an address of the destination functional and/or circuit block on the packet-switched network. The TCP layer at the destination permits the reassembly of the packets into the original data.
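  • A minimal packet shape matching this description, a group identification number plus a destination address ahead of a payload fragment, might look as follows; the patent does not specify a format, so all field widths here are assumptions.

```c
/* Assumed packet layout: group id + destination block address + payload.
 * The patent specifies no format; every field width here is illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t group_id;      /* identifies the group the packet belongs to */
    uint16_t dest_addr;     /* destination functional/circuit block       */
    uint16_t length;        /* payload bytes carried in this packet       */
    uint8_t  payload[256];
} packet;

int main(void)
{
    packet p = { .group_id = 42, .dest_addr = 0x0104, .length = 3,
                 .payload = { 'a', 'b', 'c' } };
    printf("packet for block 0x%04x, group %u, %u bytes\n",
           (unsigned)p.dest_addr, (unsigned)p.group_id, (unsigned)p.length);
    return 0;
}
```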
  • [0110] Turning to the particular features of the apparatus 500, a packet switch 504 is operatively coupled to the control processor 102, the respective apparatuses 100A-D, the plurality of graphics processors 104, and one or more packet switch controller functions 502 within the core merge unit 106N and the respective local merge units 106A-D. It is noted that the packet switch controller functions 502 may be implemented separately from the core merge unit 106N and/or the local merge units 106A-D without departing from the scope of the invention. As the graphics processors 104 are coupled to the control processor 102 by way of the packet-switched network, the program data and the format data may be transmitted to the graphics processors 104 in packetized form. Thus, the one or more software application programs facilitating, for example, polygonal spatial transformation, 3D-2D conversion, magnification, reduction, rotation, lighting, shading, coloring, etc., may be transmitted to the graphics processors 104 through the packet switch 504.
  • [0111] It is noted that certain interconnections between functional and/or circuit blocks of the apparatus 500 are not made by way of the packet-switched network. For example, the plurality of graphics processors 104 are coupled to the respective local merge units 106A-D by way of separate dedicated data lines 127. Similarly, the plurality of local merge units 106A-D are coupled to the core merge unit 106N by way of dedicated data lines 127A-D. Still further, the instructions from the control processor 102 to the local merge units 106A-D and the core merge unit 106N are at least in part transmitted over the common control data bus 404 discussed above with respect to FIG. 16.
[0112] The control processor 102 is preferably operable to instruct the graphics processors 104, the local merge units 106A-D, and the core merge unit 106N to operate in the one or more modes of operation on a frame-by-frame basis, as discussed in detail hereinabove with respect to other configurations of the invention. In particular, the control processor 102 preferably communicates instructions concerning the one or more modes of operation to the respective destination functional and/or circuit blocks by way of the packet switch 504, the instructions having been packetized in accordance with the TCP layer. Further, the core merge unit 106N is preferably operable to produce the merge synchronization signal 132 as discussed in detail hereinabove. The core merge unit 106N may be operable to transmit the merge synchronization signal 132 to the various destination functional and/or circuit blocks by way of the control data bus 404, the packet switch controller function 502, and the packet switch 504.
[0113] The packet switch controller function 502 may be capable of seizing control of the packet switch 504 when the core merge unit 106N transmits the merge synchronization signal 132 over the control data bus 404, such that the merge synchronization signal 132 may be promptly transmitted to the destination functional and/or circuit blocks. When the TCP layer is capable of transmitting very quickly and guarantees the routing of all packets of an instruction to any of the destination functional and/or circuit blocks within a given latency, the packet switch controller 502 need not seize the packet switch 504. Rather, a design assumption may be made that the merge synchronization signal 132 will be received by the destination functional and/or circuit blocks (such as the graphics processors 104) within an acceptable tolerance. Alternatively, the core merge unit 106N could transmit the merge synchronization signal 132 to the respective graphics processors 104 by way of the control data bus 404 and the local merge units 106A-D.
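The two delivery strategies described above may be pictured with the following sketch; the PacketSwitch class, its outbound queue, and its method names are illustrative assumptions rather than structures recited herein. Seizing the switch is modeled as placing the merge synchronization signal ahead of all pending traffic, while the bounded-latency alternative simply queues it like any other message.

```python
import collections

class PacketSwitch:
    """Hypothetical packet switch with a FIFO of outbound messages."""
    def __init__(self, guaranteed_latency_ok=False):
        self.queue = collections.deque()
        self.guaranteed_latency_ok = guaranteed_latency_ok

    def send(self, dest, message):
        self.queue.append((dest, message))

    def send_merge_sync(self, destinations, sync_signal):
        if self.guaranteed_latency_ok:
            # The TCP layer guarantees delivery within an acceptable
            # tolerance, so the sync signal is queued like any message.
            for dest in destinations:
                self.send(dest, sync_signal)
        else:
            # Seize the switch: place the sync signal ahead of all
            # pending traffic so it is transmitted promptly.
            for dest in reversed(destinations):
                self.queue.appendleft((dest, sync_signal))

switch = PacketSwitch(guaranteed_latency_ok=False)
switch.send("graphics_processor_104A", "frame image data")
switch.send_merge_sync(["graphics_processor_104A"], "MERGE_SYNC_132")
assert switch.queue[0] == ("graphics_processor_104A", "MERGE_SYNC_132")
```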
[0114] Reference is now made to FIG. 18, which is a block diagram of an apparatus 600 for processing image data to produce an image on a display in accordance with one or more further aspects of the present invention. The apparatus 600 preferably includes a plurality of processing nodes 602, at least one of which is a control node 604, coupled together on a packet-switched network, such as a local area network (LAN). The packet-switched network (here a LAN) and the associated TCP layer are assumed to enjoy a data transmission rate (from source node to destination node) that is sufficiently high to support the synchronization schema discussed hereinabove to produce the image.
[0115] The control node 604 is preferably operable to select a subset of processing nodes from among the plurality of processing nodes 602 to participate in processing the image data to produce the image for display. The subset of processing nodes preferably includes at least one accelerator node 606, at least one merge node 608, at least one packet switch node 610, and at least one configuration node 612. The apparatus 600 may optionally include at least one core merge node 614, at least one video hub node 616, and at least one memory hub node 618.
[0116] The at least one accelerator node 606 preferably includes one or more of the graphics processors 104 discussed hereinabove with respect to at least FIGS. 3, 10, and 14-17. More particularly, the at least one accelerator node 606 preferably enjoys the functionality discussed hereinabove with respect to the one or more graphics processors 104, although a repeated detailed discussion of that functionality is omitted for purposes of simplicity. The at least one merge node 608 preferably includes at least one of the merge units 106 discussed hereinabove with respect to at least FIGS. 3, 10, and 14-17. Preferably, the at least one merge node 608 enjoys the functionality of the at least one merge unit 106, although a repeated detailed description of that functionality is omitted for purposes of simplicity. The control node 604 preferably includes the control processor 102 discussed hereinabove with respect to at least FIGS. 3, 10, and 14-17, although a detailed description of the control processor 102 is omitted for purposes of simplicity. The packet switch node 610 is preferably operable to route the image data, frame image data, combined frame image data, any command instructions, and/or any other data among the plurality of processing nodes 602.
[0117] The plurality of processing nodes 602 represent resources that may be tapped to achieve the production of the image for display. As these resources are distributed over the packet-switched network, it is desirable to engage in a process of selecting a subset of these resources to participate in the production of the image. Indeed, some of the resources may not be available to participate in producing the image, other resources may be available but may not have the requisite capabilities to participate in producing the image, etc.
[0118] With reference to FIGS. 18 and 19, the configuration node 612 is preferably operable to issue one or more node configuration requests to the plurality of processing nodes 602 over the packet-switched network (action 650). These node configuration requests preferably prompt the processing nodes 602 to transmit at least some information concerning their data processing capabilities, their availability, their destination address, etc. More particularly, the information sought in terms of data processing capability includes at least one of: (i) a rate at which the image data can be processed into frame image data (e.g., a capability of a graphics processor 104); (ii) a number of frame buffers available (e.g., a capability of a graphics processor 104); (iii) frame image data resolution (e.g., a capability of a graphics processor 104); (iv) an indication of the respective modes of operation supported by the graphics processors of a given node; (v) a number of parallel paths into which respective frame image data may be input for merging into the combined frame image data (e.g., a capability of a merge node 608); (vi) an indication of the respective modes of operation supported by a merge unit; (vii) memory size available for storing data (e.g., a capability of a memory hub node 618); (viii) memory access speed (e.g., a capability of a memory hub node 618); and (ix) memory throughput (e.g., a capability of a memory hub node 618). It is understood that the above list is given by way of example only and may not be exhaustive.
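A response to such a node configuration request might be represented by a structure along the following lines, keyed to items (i) through (ix) above; the class and field names are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeCapabilities:
    """Hypothetical response to a node configuration request."""
    address: str                               # destination address on the network
    available: bool                            # whether the node may participate
    render_rate_fps: Optional[float] = None    # (i) rate image data -> frame image data
    frame_buffers: Optional[int] = None        # (ii) number of frame buffers available
    resolution: Optional[tuple] = None         # (iii) frame image data resolution
    gp_modes: Optional[list] = None            # (iv) graphics-processor modes supported
    merge_inputs: Optional[int] = None         # (v) parallel paths into the merge
    merge_modes: Optional[list] = None         # (vi) merge-unit modes supported
    memory_bytes: Optional[int] = None         # (vii) memory size available
    memory_access_ns: Optional[float] = None   # (viii) memory access speed
    memory_throughput: Optional[float] = None  # (ix) memory throughput

# Example report from a graphics-processor (accelerator) node.
report = NodeCapabilities(address="10.0.0.5", available=True,
                          render_rate_fps=60.0, frame_buffers=2,
                          resolution=(1920, 1080),
                          gp_modes=["area division", "averaging"])
```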
[0119] At action 652, at least some of the processing nodes 602 preferably transmit information concerning their data processing capabilities, their destination addresses, and their availability to the configuration node 612 over the packet-switched network. The configuration node 612 is preferably further operable to distribute the received destination addresses to each processing node that responded to the node configuration requests (action 654).
[0120] At action 656, the configuration node 612 is preferably operable to transmit the information provided by each processing node 602 in response to the node configuration requests to the control node 604. At action 658, the control node is preferably operable to select the subset of processing nodes from among the processing nodes 602 that responded to the node configuration requests to participate in processing the image data to produce the image for display. This selection process is preferably based on the responses from the processing nodes 602 to the node configuration requests. At action 660, the control node 604 is preferably operable to transmit a request to participate in processing the image data to each of the subset of processing nodes 602, thereby prompting them to accept participation. The control node 604 is preferably further operable to transmit one or more further node configuration requests to one or more of the subset of processing nodes 602 that responded to the request to participate (action 662). Preferably, the further node configuration request includes at least a request for the node to provide information concerning a format in which the node expects to transmit and receive data, such as the image data, the frame image data, combined frame image data, processing instructions, etc.
[0121] At action 664, the control node is preferably operable to determine which of the processing nodes 602 are to be retained in the subset of processing nodes to participate in processing the image data. This determination is preferably based on at least one of the data processing capabilities, format information, availability, etc. provided by the processing nodes 602 in response to the node configuration requests. Thereafter, the apparatus 600 preferably operates in substantial accordance with the process flow discussed hereinabove with respect to FIG. 9, particularly in terms of the use of the merge synchronization signal to synchronize the release of frame image data and combined frame image data to the one or more merge units 106 of the at least one merge node 608, it being understood that the flow of data between the graphics processors 104, merge units 106, control processor 102, synchronization units 108, etc. is facilitated over the packet-switched network. Moreover, the apparatus 600 of FIG. 18 is preferably operable to provide the requisite resources to implement any of the previously discussed apparatuses hereinabove, for example, apparatus 100 of FIG. 3, apparatus 200 of FIG. 10, apparatus 350 of FIG. 14, apparatus 360 of FIG. 15, apparatus 400 of FIG. 16, apparatus 500 of FIG. 17, or any combination thereof.
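Under the stated assumptions, actions 650 through 664 might be orchestrated roughly as follows; the Node class, its method names, and the selection criterion by node kind are hypothetical and serve only to make the sequence concrete.

```python
class Node:
    """Hypothetical processing node participating in the handshake."""
    def __init__(self, address, kind, capable=True):
        self.address, self.kind, self.capable = address, kind, capable
        self.known_addresses = []

    def describe(self):                     # response to a configuration request
        return {"address": self.address, "kind": self.kind,
                "capable": self.capable}

    def learn_addresses(self, addrs):       # action 654: receive all addresses
        self.known_addresses = list(addrs)

    def accepts(self):                      # action 660: accept participation
        return self.capable

    def report_format(self):                # action 662: expected data format
        return {"address": self.address, "format": "RGBA8888"}

def select_participants(nodes, needed_kinds=("accelerator", "merge")):
    """Sketch of actions 650-664 for selecting the subset of nodes."""
    responses = [n.describe() for n in nodes]                  # actions 650/652
    addresses = [r["address"] for r in responses]
    for n in nodes:                                            # action 654
        n.learn_addresses(addresses)
    # Actions 656/658: the control node keeps capable nodes of needed kinds.
    subset = [n for n in nodes if n.capable and n.kind in needed_kinds]
    # Actions 660/662: request participation and format information.
    formats = {n.address: n.report_format() for n in subset if n.accepts()}
    # Action 664: retain the nodes whose format information was provided.
    return [n for n in subset if n.address in formats]

nodes = [Node("10.0.0.1", "accelerator"),
         Node("10.0.0.2", "merge"),
         Node("10.0.0.3", "accelerator", capable=False)]
assert [n.address for n in select_participants(nodes)] == ["10.0.0.1", "10.0.0.2"]
```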
[0122] Advantageously, the apparatus 600 of FIG. 18 is easily modified to include substantially those resources necessary to achieve the desired quality of the image for display. Indeed, as accelerator nodes 606, merge nodes 608, core merge nodes 614, video hub nodes 616, memory hub nodes 618, etc. may be readily added to or removed from the subset of processing nodes 602 participating in the production of the image, a desirable quantum of processing power may be achieved on a case-by-case basis, no matter what level of processing power is required.
[0123] Reference is now made to FIG. 20, which is a block diagram of an apparatus 700 for processing image data to produce an image on a display. The apparatus 700 preferably includes substantially the same general configuration of processing nodes 602 as illustrated in FIG. 18. In particular, the apparatus 700 preferably includes the control node 604, the accelerator node 606, the merge node 608, the packet switch node 610, the configuration node 612, and optionally the core merge node 614, the video hub node 616, and the memory hub node 618 as shown in FIG. 18, where only certain of these nodes are shown in FIG. 20 for clarity. It is preferred that the apparatus 700 enjoys the functionality discussed hereinabove with respect to the apparatus 600 of FIG. 18 and the process flow of FIG. 19, although a repeated discussion concerning the details of these functions is omitted for purposes of clarity.
[0124] It is preferred that the processing nodes of the apparatus 700 are coupled together on an open packet-switched network, such as the Internet. The packet-switched network preferably employs suitable hardware and a TCP layer that enjoys a data transmission rate (e.g., from source node to destination node) that is sufficiently high to support the synchronization schema discussed hereinabove to produce the image.
[0125] The apparatus 700 contemplates that the processing nodes are arranged in a hierarchy, where the one or more accelerator nodes 606 (only one such accelerator node 606 being shown for simplicity) are coupled to the packet switch node 610 on an (n)th level. By extension, the control node 604, the merge nodes 608, 614 (not shown), the packet switch node 610, the configuration node 612 (not shown), the video hub node 616 (not shown), and the memory hub node 618 (not shown) are also on the (n)th level. At least one of the (n)th level accelerator nodes 606 preferably includes an (n−1)th level control node 604 and a plurality of (n−1)th level accelerator nodes 606 coupled together by way of an (n−1)th level packet switch node 610. As shown in FIG. 20, the (n)th level accelerator node 606 includes an (n−1)th level control node 604A, an (n−1)th level packet switch node 610A, and seven (n−1)th level accelerator nodes 606A1-7. Preferably, the (n)th level accelerator node 606 performs substantially the same functions in the aggregate as discussed hereinabove with respect to the at least one accelerator node 606 of FIG. 18, although it is contemplated that the (n)th level accelerator node 606 would enjoy a higher level of processing power inasmuch as it includes seven accelerator nodes 606A1-7, each including one or more graphics processors 104. Moreover, each of the (n−1)th level accelerator nodes 606A1-7 preferably enjoys the functionality of the accelerator nodes 606 discussed hereinabove with respect to FIG. 18 inasmuch as each one preferably includes one or more graphics processors 104, etc. Although seven (n−1)th level accelerator nodes 606A1-7 are illustrated, any number of such (n−1)th level accelerator nodes may be employed without departing from the scope of the invention.
[0126] Preferably, one or more of the (n−1)th level accelerator nodes 606A1-7, such as (n−1)th level accelerator node 606A6, includes an (n−2)th level control node 604A6 and a plurality of (n−2)th level accelerator nodes 606A6,1-m (where m may be any number, such as seven) coupled together by way of an (n−2)th level packet switch node 610A6. The (n−2)th level accelerator nodes 606A6,1-m enjoy the functionality discussed hereinabove with respect to the accelerator nodes 606 of FIG. 18. In accordance with the invention, the number of levels, n, may be any integer number.
[0127] Preferably, one or more of the (n−2)th level accelerator nodes 606A6,1-m, such as (n−2)th level accelerator node 606A6,4, includes an (n−3)th level control node 604A6,4 and a plurality of (n−3)th level accelerator nodes 606A6,4,1-m (where m may be any number, such as seven) coupled together by way of an (n−3)th level packet switch node 610A6,4. The (n−3)th level accelerator nodes 606A6,4,1-m enjoy the functionality discussed hereinabove with respect to the accelerator nodes 606 of FIG. 18. In accordance with the invention, the number of levels, n, may be any integer number.
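The n-level hierarchy described above may be pictured as a recursive tree, in which the aggregate processing power of a composite accelerator node is the sum of that of its lower-level children. The following sketch illustrates the idea; the AcceleratorNode class and its method are hypothetical.

```python
class AcceleratorNode:
    """Hypothetical node: either a leaf with its own graphics processors,
    or a composite containing a lower-level control node, packet switch,
    and child accelerator nodes (only the children are modeled here)."""
    def __init__(self, graphics_processors=1, children=None):
        self.graphics_processors = graphics_processors
        self.children = children or []

    def total_processors(self):
        # A composite (n)th level node aggregates its (n-1)th level
        # children; a leaf contributes its own graphics processors.
        if self.children:
            return sum(c.total_processors() for c in self.children)
        return self.graphics_processors

# An (n)th level accelerator node containing seven (n-1)th level nodes,
# one of which itself contains seven (n-2)th level nodes.
composite_child = AcceleratorNode(children=[AcceleratorNode() for _ in range(7)])
top = AcceleratorNode(children=[AcceleratorNode() for _ in range(6)] + [composite_child])
assert top.total_processors() == 13
```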
[0128] As was the case with the apparatus 600 of FIG. 18, the control nodes 604 are preferably operable to select a subset of processing nodes from among the plurality of processing nodes 602 to participate in processing the image data to produce the image for display (see the discussion of FIG. 19 above). With the apparatus 700 of FIG. 20, the selection process results in a subset of processing nodes 602 that may include n levels.
[0129] With reference to FIGS. 20 and 21A, once the selection of the subset of processing nodes 602 has been completed, the (n)th level control node 604 is preferably operable to transmit one or more (n)th level information sets 750 over the packet-switched network to one or more of the subset of processing nodes 602. The information sets 750 preferably include at least one of instructions and data used in the production of the image. For example, the instructions preferably include information concerning the one or more modes of operation in which the graphics processors 104, the local merge units 106, the core merge unit 106N, etc. are to operate, preferably on a frame-by-frame basis as discussed hereinabove. The data may include texture data, video data (such as externally provided frames of image data to be combined to produce the final image), etc.
[0130] Each of the (n)th level information sets 750 preferably includes an (n)th level header 752 indicating at least that the given information set was issued from the (n)th level control node 604. The header 752 may also identify the information set as including instruction information, data, or a combination of both. The (n)th level information sets 750 also preferably include a plurality of (n)th level blocks 754, each block including information and/or data for one or more of the (n)th level nodes, such as an (n)th level accelerator node block 756, an (n)th level merge node block 758, an (n)th level video hub node block 760, an (n)th level memory hub node block 762, etc. The (n)th level switch node 610 is preferably operable to dissect the information set 750 such that the respective (n)th level information blocks 756-762 are routed to the respective (n)th level nodes.
[0131] With reference to FIG. 21B, one or more of the (n)th level information sets 750, such as information set 750A, may include an (n)th level information block for an (n)th level accelerator node, such as (n)th level information block 756A, which includes a plurality of (n−1)th level information sub-blocks 772. The (n)th level information block 756A also preferably includes a header 770 that indicates whether the information sub-blocks 772 contain instructions, data, etc.
[0132] Referring also to FIG. 20, the (n)th level switch node 610 is preferably operable to receive the information set 750A, dissect the (n)th level information block 756A for the (n)th level accelerator node 606 therefrom, and transmit the (n)th level information block 756A to the (n−1)th level control node 604A. The (n−1)th level control node 604A is preferably operable to transmit the respective (n−1)th level information sub-blocks 772 of the (n)th level information block 756A to the (n−1)th level packet switch node 610A such that the respective (n−1)th level information sub-blocks 772 may be transmitted to the respective (n−1)th level accelerator nodes 606A1-m. This functionality and process is preferably repeated for each level of accelerator nodes.
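The dissection and routing of information sets down the hierarchy might be sketched as follows; the dictionary layout standing in for the information sets, headers, and blocks is an illustrative assumption, not a format recited herein. A block addressed to a composite accelerator node carries lower-level sub-blocks and is recursively dissected at that level.

```python
def route_information_set(info_set, nodes):
    """Deliver each per-node block of an information set; a block that
    itself contains blocks is handed down to the lower-level hierarchy."""
    assert "header" in info_set  # identifies the issuing control node
    for dest, block in info_set["blocks"].items():
        node = nodes[dest]
        if isinstance(block, dict) and "blocks" in block:
            # An accelerator node's block carries (n-1)th level
            # sub-blocks: recurse into that node's own switch/nodes.
            route_information_set(block, node)
        else:
            node["inbox"] = block

# An (n)th level set with a block for a merge node and a block for a
# composite accelerator node containing (n-1)th level sub-blocks.
lower_nodes = {"acc_1": {"inbox": None}, "acc_2": {"inbox": None}}
nodes = {"merge_608": {"inbox": None}, "accel_606": lower_nodes}
info_set = {
    "header": {"from": "control_604", "kind": "instructions"},
    "blocks": {
        "merge_608": "merge mode: layer blending",
        "accel_606": {
            "header": {"from": "control_604A", "kind": "instructions"},
            "blocks": {"acc_1": "render area A", "acc_2": "render area B"},
        },
    },
}
route_information_set(info_set, nodes)
assert lower_nodes["acc_1"]["inbox"] == "render area A"
assert nodes["merge_608"]["inbox"] == "merge mode: layer blending"
```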
[0133] The apparatus 700 is preferably operable to provide the requisite resources to implement any of the previously discussed apparatuses hereinabove, for example, apparatus 100 of FIG. 3, apparatus 200 of FIG. 10, apparatus 350 of FIG. 14, apparatus 360 of FIG. 15, apparatus 400 of FIG. 16, apparatus 500 of FIG. 17, apparatus 600 of FIG. 18, or any combination thereof.
[0134] Advantageously, the apparatus 700 of FIG. 20 is easily modified to include substantially those resources necessary to achieve the desired quality of the image for display. Indeed, as the number and level of processing nodes may be readily increased or decreased, a desirable quantum of processing power may be achieved on a case-by-case basis, no matter what level of processing power is required.
[0135] In accordance with at least one further aspect of the present invention, a method of processing image data to produce an image for display on a display screen may be achieved utilizing suitable hardware, such as that illustrated in FIGS. 3, 8, 10-18 and 20, and/or utilizing a manual or automatic process. An automatic process may be implemented utilizing any of the known processors that are operable to execute instructions of a software program. The software program preferably causes the processor (and/or any peripheral systems) to execute certain steps in accordance with one or more aspects of the present invention. In a manual process, the steps themselves would be performed using manual techniques. In either case, the steps and/or actions of the method preferably correspond with the functions discussed hereinabove with respect to at least portions of the hardware of FIGS. 3, 8, 10-18 and 20.
[0136] Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (176)

1. An apparatus for processing image data to produce an image for covering an image area of a display, comprising:
a plurality of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer;
a control processor operable to provide instructions to the plurality of graphics processors; and
one or more merge units operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce combined frame image data based thereon.
2. The apparatus of claim 1, wherein at least one of the one or more merge units is operable to produce a merge synchronization signal used by the graphics processors to release the frame image data from the respective local frame buffers to the one or more merge units.
3. The apparatus of claim 2, further comprising:
at least one synchronization unit operable to receive the merge synchronization signal from the at least one merge unit; and
respective local synchronization units coupled to the graphics processors and operable to receive the merge synchronization signal and to cause the release of the frame image data from the local frame buffer to the at least one merge unit.
4. The apparatus of claim 2, wherein the merge synchronization signal is synchronized in accordance with a display protocol defining how respective frames of the combined frame image data are to be displayed.
5. The apparatus of claim 4, wherein the display protocol defines at least one of a frame rate at which successive frames of the combined frame image data are displayed, and a blanking period that dictates when the combined frame image data is to be refreshed.
6. The apparatus of claim 5, wherein the merge synchronization signal includes transitions that are proximate to ends of the blanking periods such that the at least one merge unit initiates producing the combined frame image data for display at the end of at least one of the blanking periods.
7. The apparatus of claim 6, wherein the transitions of the merge synchronization signal are substantially coincident with the ends of the blanking periods.
8. The apparatus of claim 6, wherein the merge synchronization signal includes transitions that lead the ends of the blanking periods.
9. The apparatus of claim 2, wherein:
the plurality of graphics processors are grouped into respective sets of graphics processors;
the one or more merge units include a respective local merge unit coupled to each set of graphics processors, and a core merge unit coupled to each local merge unit;
the respective local merge units are operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; and
the core merge unit is operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon.
10. The apparatus of claim 9, further comprising:
a respective local synchronization unit coupled to each set of graphics processors and to each local merge unit; and
a core synchronization unit coupled to each local synchronization unit and to the core merge unit,
wherein the core merge unit is operable to produce the merge synchronization signal used by the core synchronization unit and at least some of the local synchronization units to permit the respective sets of graphics processors to synchronously release the frame image data from the respective local frame buffers to the respective local merge units, and to permit the respective local merge units to synchronously release the local combined frame image data to the core merge unit.
11. The apparatus of claim 9, wherein the control processor is operable to instruct the graphics processors and the one or more merge units to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
12. The apparatus of claim 11, wherein the core merge unit and the respective local merge units are operatively coupled to the control processor by way of separate control data lines such that exclusive communication between the control processor, the core merge unit, and the respective local merge units at least concerning the instruction to operate in the one or more modes is obtained.
13. An apparatus for processing image data to produce an image for display on a display screen, comprising:
a plurality of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer, and each graphics processor including a synchronization counter operable to produce a local synchronization count indicating when the frame image data should be released from the local frame buffer;
a control processor operable to provide instructions to the plurality of graphics processors; and
at least one merge unit operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce combined frame image data based thereon.
14. The apparatus of claim 13, wherein the control processor includes a timing generator operable to produce a synchronization signal that is synchronized in accordance with a display protocol defining how respective frames of the combined frame image data are to be displayed.
15. The apparatus of claim 14, wherein the display protocol defines at least one of a frame rate at which successive frames of the combined frame image data are displayed, and a blanking period that dictates when the combined frame image data is to be refreshed.
16. The apparatus of claim 14, wherein the respective synchronization counters one of increment and decrement their synchronization count based on the synchronization signal from the timing generator.
17. The apparatus of claim 16, wherein the graphics processors are further operable to release their respective frame image data to the at least one merge unit when their respective local synchronization counts reach a threshold.
18. The apparatus of claim 17, wherein the instructions from the control processor include respective reset signals indicative of when the respective synchronization counters should reset their respective synchronization count.
19. The apparatus of claim 13, wherein the control processor is operable to instruct the graphics processors to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
20. The apparatus of claim 19, wherein at least one of the modes provides that:
one or more of the graphics processors completes rendering the image data into the respective frame buffers prior to the end of each blanking period;
one or more of the graphics processors completes rendering the image data into the respective frame buffers prior to the end of an integral number of blanking periods; and
one or more of the graphics processors includes an integral number of local frame buffers and that the one or more graphics processors completes rendering the image data into the respective integral number of frame buffers prior to the ends of a corresponding integral number of blanking periods.
21. The apparatus of claim 19, wherein the modes include at least one of area division, averaging, layer blending, Z-sorting and layer blending, and flip animation.
22. The apparatus of claim 13, wherein:
the plurality of graphics processors are grouped into respective sets of graphics processors;
the at least one merge unit includes a respective local merge unit coupled to each set of graphics processors, and a core merge unit coupled to each local merge unit;
the respective local merge units are operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; and
the core merge unit is operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon.
23. An apparatus for processing image data to produce an image for covering an image area of a display, comprising:
one or more sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer;
a control processor operable to provide instructions to the graphics processors;
a data bus operatively coupled to the control processor and the graphics processors;
one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being coupled to each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being coupled to each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon;
a control data bus operatively coupled to the core merge unit and the local merge units; and
a bus controller operatively coupled between the data bus and the control data bus and operable to transmit and receive data over the data bus and the control data bus on a priority basis.
24. The apparatus of claim 23, wherein the bus controller is an integral part of the core merge unit.
25. The apparatus of claim 23, wherein the control processor is operable to instruct the graphics processors, the local merge unit, and the core merge unit to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
26. The apparatus of claim 25, wherein the control processor communicates at least the instructions concerning the one or more modes of operation to the respective local merge units and the core merge unit by way of the bus controller and the control data bus.
27. The apparatus of claim 23, wherein the core merge unit is operable to produce a merge synchronization signal used by at least some of the graphics processors to synchronously release the frame image data from the respective local frame buffers to the respective local merge units, and to permit the respective local merge units to synchronously release the local combined frame image data to the core merge unit.
28. The apparatus of claim 27, wherein the core merge unit transmits the merge synchronization signal to the respective graphics processors by way of the control data bus, the bus controller, and the data bus.
29. The apparatus of claim 28, wherein the bus controller seizes the data bus when the core merge unit produces the merge synchronization signal such that the merge synchronization signal can be transmitted to the respective graphics processors by way of the data bus on a priority basis.
30. An apparatus for processing image data to produce an image for covering an image area of a display, comprising:
a plurality of sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer;
a control processor operable to provide instructions to the plurality of graphics processors;
a packet switch operatively coupled to the control processor and the respective sets of graphics processors;
one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being coupled to each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being coupled to each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon;
a control data bus operatively coupled to the core merge unit and the local merge units; and
a packet switch controller operatively coupled between the packet switch and the control data bus and operable to transmit and receive data over the packet switch and the control data bus on a priority basis.
31. The apparatus of claim 30, wherein the packet switch controller is coupled to the core merge unit by way of a dedicated data line.
32. The apparatus of claim 30, wherein the packet switch controller is an integral part of the core merge unit.
33. The apparatus of claim 32, wherein the control processor is operable to instruct the graphics processors, the local merge unit, and the core merge unit to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
34. The apparatus of claim 33, wherein the control processor communicates at least the instructions concerning the one or more modes of operation to the respective local merge units and the core merge unit by way of the packet switch, the packet switch controller and the control data bus.
35. The apparatus of claim 32, wherein the core merge unit is operable to produce a merge synchronization signal used by at least some of the graphics processors to synchronously release the frame image data from the respective local frame buffers to the respective local merge units, and to permit the respective local merge units to synchronously release the local combined frame image data to the core merge unit.
36. The apparatus of claim 35, wherein the core merge unit transmits the merge synchronization signal to the respective graphics processors by way of the control data bus, the packet switch controller, and the packet switch.
37. The apparatus of claim 36, wherein the packet switch controller seizes the packet switch when the core merge unit produces the merge synchronization signal such that the merge synchronization signal can be transmitted to the respective graphics processors by way of the packet switch on a priority basis.
38. An apparatus for processing image data to produce an image for covering an image area of a display, the apparatus being formed of a subset of a plurality of processing nodes coupled together on a packet-switched network, the apparatus comprising:
at least one accelerator node including a plurality of sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer;
one or more merge nodes including one or more merge units, the one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being associated with each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being associated with each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon;
at least one configuration node operable to facilitate selecting the subset of processing nodes;
a control node including a control processor operable to provide instructions to the subset of processing nodes over the packet-switched network, and operable to select the subset of processing nodes to participate in processing the image data to produce the image for display; and
at least one packet switch node operable to route data packets between the subset of nodes, the data packets forming at least the image data, frame image data, and combined frame image data.
39. The apparatus of claim 38, wherein the configuration node is operable to issue one or more node configuration requests to the plurality of processing nodes on the packet-switched network, the one or more node configuration requests prompting the processing nodes to transmit at least some information concerning their data processing capabilities and their destination address to the configuration node.
40. The apparatus of claim 39, wherein the at least some information concerning data processing capabilities includes at least one of a rate at which the image data can be processed into frame image data, a number of frame buffers available, frame image data resolution, an indication of respective modes of operation supported by the graphics processors, a number of parallel paths into which respective frame image data may be input for merging into the combined frame image data, an indication of respective modes of operation supported by the merge units, memory size available for storing data, memory access speed, and memory throughput.
41. The apparatus of claim 39, wherein the configuration node is further operable to transmit the destination addresses of substantially all of the processing nodes that responded to the one or more node configuration requests to each of those processing nodes.
42. The apparatus of claim 39, wherein the configuration node is further operable to transmit the information provided by each processing node in response to the one or more node configuration requests and the destination addresses thereof to the control node.
43. The apparatus of claim 42, wherein the control node is operable to select the subset of processing nodes from among the processing nodes that responded to the one or more node configuration requests to participate in processing the image data to produce the image for display based on their responses to the one or more node configuration requests.
44. The apparatus of claim 43, wherein the control node is further operable to transmit a request to participate in processing the image data to produce the image for display to each of the subset of processing nodes prompting them to accept such participation.
45. The apparatus of claim 44, wherein the control node is further operable to transmit one or more further node configuration requests to one or more of the subset of processing nodes that responded to the request to participate.
46. The apparatus of claim 45, wherein the one or more further node configuration requests includes at least a request for the node to provide information concerning a format in which the node expects to transmit and receive at least one of the image data, the frame image data, the combined frame image data, and processing instructions.
47. The apparatus of claim 46, wherein the control node is further operable to determine which of the processing nodes are to be retained in the subset of processing nodes to participate in processing the image data to produce the image for display based on at least one of the data processing capabilities and format information provided in response to the one or more node configuration requests.
48. The apparatus of claim 38, wherein the packet-switched network is formed on a local area network.
49. The apparatus of claim 38, further comprising at least one video hub node operable to (i) receive at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data, (ii) transmit at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data to at least one of the graphics processors such that one or more of the successive frames of frame image data and a next frame of the combined frame image data may be based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data.
50. The apparatus of claim 38, wherein the core merge unit is operable to produce a merge synchronization signal used by the graphics processors to release the frame image data from the respective local frame buffers to the local merge units, wherein the merge synchronization signal is synchronized in accordance with a display protocol defining how respective frames of the combined frame image data are to be displayed.
51. The apparatus of claim 50, wherein the display protocol defines at least one of a frame rate at which successive frames of the combined frame image data are displayed, and a blanking period that dictates when the combined frame image data is to be refreshed.
52. The apparatus of claim 50, wherein the at least one externally provided frame of frame image data adheres to one of the display protocols and the merge synchronization signal is synchronized in accordance with that display protocol such that the core merge unit is operable to produce a subsequent frame of the combined frame image data based on the at least one externally provided frame of frame image data.
53. The apparatus of claim 38, further comprising at least one memory hub node operable to (i) receive and store common image data, (ii) transmit the common image data to at least one of the graphics processors such that at least one of the successive frames of frame image data and a successive frame of the combined frame image data may include at least some of the common image data.
54. The apparatus of claim 53, wherein the common image data include texture data.
55. The apparatus of claim 38, wherein the packet-switched network is formed on an open network.
56. The apparatus of claim 55, wherein the open network is the Internet.
57. The apparatus of claim 38, wherein the control node is an (n)th level control node, the at least one merge node is an (n)th level merge node, the packet switch node is an (n)th level packet switch node, and at least one accelerator node is an (n)th level accelerator node that includes an (n−1)th level control node and a plurality of (n−1)th level accelerator nodes coupled together by way of an (n−1)th level packet switch node over the packet-switched network.
58. The apparatus of claim 57, wherein at least one of the (n−i)th accelerator nodes includes an (n−i−1)th level control node and a plurality of (n−i−1)th level accelerator nodes coupled together by way of an (n−i−1)th level packet switch node over the packet-switched network.
59. The apparatus of claim 58, wherein:
at least some of the accelerator nodes are grouped to form respective sets of the graphics processors;
at least some of the merge nodes include respective local merge units associated with the respective sets of the graphics processors such that the respective local merge units are operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon; and
at least one of the processing nodes of the subset includes a core merge unit operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon.
60. The apparatus of claim 59, wherein the (n)th level control node is operable to transmit (n)th level information sets over the packet-switched network to one or more of the subset of processing nodes that at least one of (i) contain instructions, and (ii) contain various data used in the production of the image.
61. The apparatus of claim 60, wherein the instructions include information that causes the graphics processors, the local merge units, and the core merge unit to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
62. The apparatus of claim 60, wherein the various data used in the production of the image includes at least one of texture data, video data, and an externally provided frame of frame image data.
63. The apparatus of claim 60, wherein each of the (n)th level information sets includes (i) an (n)th level header indicating that the given information set was issued from the (n)th level control node; (ii) a plurality of (n)th level information blocks, each including information for one or more of the (n)th level nodes.
64. The apparatus of claim 63, wherein the (n)th level packet switch node is operable to route the plurality of (n)th level information blocks to the respective (n)th level nodes.
65. The apparatus of claim 63, wherein at least one of the plurality of (n)th level information blocks includes information for the (n)th level accelerator node.
66. The apparatus of claim 65, wherein the (n)th level information block for the (n)th level accelerator node includes a plurality of respective (n−1)th information sub-blocks for the respective (n−1)th nodes.
67. The apparatus of claim 66, wherein the (n−1)th level packet switch node is operable to route the plurality of (n−1)th level information sub-blocks to the respective (n−1)th level nodes.
68. The apparatus of claim 66, wherein at least one of the (n−i)th level information sub-blocks for an (n−i)th level accelerator node includes a plurality of respective information sub-blocks for respective (n−i−1)th nodes.
69. The apparatus of claim 65, wherein the subset of processing nodes includes at least one (n)th level video hub node operable to (i) receive at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data, (ii) transmit at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data to at least one of the graphics processors such that one or more of the successive frames of frame image data and a next frame of the combined frame image data may be based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data.
70. The apparatus of claim 69, wherein at least one of the plurality of (n)th level information blocks includes information for the at least one (n)th level video hub node.
71. The apparatus of claim 65, wherein the subset of processing nodes includes at least one (n)th level memory hub node operable to (i) receive and store common image data, (ii) transmit the common image data to at least one of the graphics processors such that at least one of the successive frames of frame image data and a successive frame of the combined frame image data may include at least some of the common image data.
72. The apparatus of claim 71, wherein at least one of the plurality of (n)th level information blocks includes information for the at least one (n)th level memory hub node.
73. The apparatus of any of claims 1, 23, 30, 38, or 57, wherein the respective local frame buffers of the respective graphics processors are operatively coupled to the one or more merge units by way of separate data lines such that exclusive transmission of the frame image data from the respective local frame buffers to the respective one or more merge units is obtained.
74. The apparatus of any of claims 23, 30, 38, or 57, wherein:
the respective local frame buffers of the respective groups of graphics processors are operatively coupled to the respective local merge units by way of separate data lines such that exclusive transmission of the frame image data from the respective local frame buffers to the respective local merge units is obtained; and
the respective local merge units are operatively coupled to the core merge unit by way of separate data lines such that exclusive transmission of the local combined frame image data from the respective local merge units to the core merge unit is obtained.
75. The apparatus of any of claims 1, 23, 30, 38, or 57, wherein the core merge unit is operable to produce a merge synchronization signal used by at least some of the respective sets of graphics processors to synchronously release the frame image data from the respective local frame buffers to the respective local merge units, and to permit the respective local merge units to synchronously release the local combined frame image data to the core merge unit.
76. The apparatus of claim 75, wherein the merge synchronization signal is synchronized in accordance with a display protocol defining how respective frames of the combined frame image data are to be displayed.
77. The apparatus of claim 76, wherein the display protocol defines at least one of a frame rate at which successive frames of the combined frame image data are displayed, and a blanking period that dictates when the combined frame image data is to be refreshed.
78. The apparatus of claim 77, wherein the merge synchronization signal includes transitions that are proximate to ends of the blanking periods such that the at least one merge unit initiates producing the combined frame image data for display at the end of at least one of the blanking periods.
79. The apparatus of claim 78, wherein the transitions of the merge synchronization signal are substantially coincident with the ends of the blanking periods.
80. The apparatus of claim 78, wherein the merge synchronization signal includes transitions that lead the ends of the blanking periods.
81. The apparatus of claim 75, wherein each of the graphics processors is further operable to render the image data into a respective one of the frame buffers asynchronously with respect to the merge synchronization signal.
82. The apparatus of claim 81, wherein each of the graphics processors is further operable to begin rendering the image data into the respective frame buffers in response to one or more rendering instructions from the control processor.
83. The apparatus of claim 82, wherein at least one of the graphics processors is further operable to issue a rendering complete signal to the control processor when it has completed rendering a frame of the frame image data and storing it in a respective one of the frame buffers.
84. The apparatus of any of claims 1, 23, 30, 38, or 57, wherein the control processor is operable to instruct the graphics processors and the one or more merge units to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
85. The apparatus of claim 84, wherein the control processor is operable to instruct the graphics processors and the one or more merge units to operate in the one or more modes on a frame-by-frame basis.
86. The apparatus of claim 84, wherein at least one of the modes provides that:
one or more of the graphics processors completes rendering the image data into the respective frame buffers prior to the end of each blanking period;
one or more of the graphics processors completes rendering the image data into the respective frame buffers prior to the end of an integral number of blanking periods; and
one or more of the graphics processors includes an integral number of local frame buffers and that the one or more graphics processors completes rendering the image data into the respective integral number of frame buffers prior to the ends of a corresponding integral number of blanking periods.
87. The apparatus of claim 84, wherein the modes include at least one of area division, averaging, layer blending, Z-sorting and layer blending, and flip animation.
88. The apparatus of claim 84, wherein at least one of the modes is an area division mode providing that at least two of the local frame buffers are partitioned into respective rendering areas that correspond with respective portions of the image area and non-rendering areas, and an aggregate of the rendering areas results in a total rendering area that corresponds with all portions of the image area.
89. The apparatus of claim 88, wherein the area division mode further provides that the at least two graphics processors complete rendering the image data into the respective rendering areas of the frame buffers prior to the end of each blanking period.
90. The apparatus of claim 88, wherein the one or more merge units are further operable to synchronously aggregate the frame image data from the respective rendering areas of the at least two graphics processors based on alpha blending values to produce the combined frame image data, and the combined frame image data are capable of covering the image area.
91. The apparatus of claim 84, wherein at least one of the modes is an averaging mode providing that the local frame buffers of at least two of the graphics processors include rendering areas that each correspond with all portions of the image area and that the respective frame image data from the at least two local frame buffers are averaged to produce the combined frame image data.
92. The apparatus of claim 91, wherein the averaging mode further provides that the at least two graphics processors complete rendering the image data into the respective rendering areas of the frame buffers prior to the end of each blanking period.
93. The apparatus of claim 91, wherein the one or more merge units are further operable to synchronously average the frame image data from the respective rendering areas of the at least two graphics processors based on alpha blending values to produce the combined frame image data, and the combined frame image data are capable of covering the image area.
94. The apparatus of claim 84, wherein at least one of the modes is a layer blending mode providing that: (i) at least some of the image data are rendered into the local frame buffers of at least two of the graphics processors such that each of these local frame buffers includes frame image data representing a portion of the combined frame image data; (ii) each of the portions of the combined frame image data are prioritized; and (iii) the one or more merge units are further operable to synchronously produce the combined frame image data by layering each of the frame image data in an order according to the priority thereof.
95. The apparatus of claim 94, wherein the one or more merge units are further operable to synchronously layer the frame image data such that one of the layers of frame image data may overwrite another of the layers of frame image data depending on the priority of the layers.
96. The apparatus of claim 94, wherein the layer blending mode further provides that the at least two graphics processors complete rendering the image data into the respective frame buffers prior to the end of each blanking period.
97. The apparatus of claim 84, wherein at least one of the modes is a Z-sorting and layer blending mode providing that: (i) at least some of the image data are rendered into the local frame buffers of at least two of the graphics processors such that each of the at least two local frame buffers includes frame image data representing a portion of the combined frame image data; (ii) the frame image data include Z-values representing image depth; and (iii) the one or more merge units are further operable to synchronously produce the combined frame image data by Z-sorting and layering each of the frame image data in accordance with the image depth.
98. The apparatus of claim 97, wherein the one or more merge units are further operable to synchronously layer the frame image data such that at least a portion of one of the frame image data may overwrite a portion of another of the frame image data depending on the Z-values thereof.
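Claims 97-99 describe a per-pixel depth comparison. A minimal Python sketch, assuming that smaller Z-values are nearer the viewer (the claims only require sorting by the Z-values carried with the frame image data):

    def merge_z_sorted(buffers, far=float("inf")):
        # buffers: list of (pixels, z_values); the nearest fragment wins
        size = len(buffers[0][0])
        combined, depth = [0] * size, [far] * size
        for pixels, zs in buffers:
            for i in range(size):
                if zs[i] < depth[i]:      # nearer fragment overwrites farther one
                    combined[i], depth[i] = pixels[i], zs[i]
        return combined

    near_object = ([5, 5], [1.0, 8.0])
    far_object  = ([3, 3], [4.0, 2.0])
    print(merge_z_sorted([near_object, far_object]))  # [5, 3]
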
99. The apparatus of claim 97, wherein the Z-sorting and layer blending mode further provides that the at least two graphics processors complete rendering the image data into the respective frame buffers prior to the end of each blanking period.
100. The apparatus of claim 84, wherein at least one of the modes is a flip animation mode providing that: (i) the local frame buffers of at least two graphics processors include frame image data that are capable of covering the image area; and (ii) the one or more merge units are further operable to produce the combined frame image data by sequentially releasing the respective frame image data from the at least two graphics processors.
101. The apparatus of claim 100, wherein the flip animation mode further provides that the at least two graphics processors complete rendering the image data into the respective frame buffers prior to the ends of an integral number of blanking periods.
102. The apparatus of claim 101, wherein the integral number of blanking periods corresponds to the number of graphics processors participating in the flip animation mode.
103. The apparatus of claim 101, wherein the integral number of blanking periods corresponds to the number of local frame buffers participating in the flip animation mode.
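In the flip animation mode of claims 100-103, the merge unit simply releases whole frames from the participating buffers in rotation, which is why each processor is allowed an integral number of blanking periods to finish rendering. A minimal Python sketch; the buffer contents are placeholders:

    from itertools import cycle

    def flip_animation(buffers):
        # Release each local frame buffer in turn, one whole frame per
        # display period; with N buffers a processor has N periods to render.
        for frame in cycle(buffers):
            yield frame

    frames = flip_animation(["frame-from-GPU-0", "frame-from-GPU-1"])
    for _ in range(4):
        print(next(frames))   # alternates between the two buffers
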
104. The apparatus of any of claims 38 or 57, further comprising at least one video hub operable to receive at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data, the at least one video hub being operatively coupled to the plurality of graphics processors such that (i) at least one of the successive frames of frame image data from one or more of the graphics processors may include at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data, and (ii) one or more merge units are operable to produce a successive frame of the combined frame image data based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data.
105. The apparatus of claim 104, wherein the merge synchronization signal is synchronized in accordance with a display protocol defining how respective frames of the combined frame image data are to be displayed.
106. The apparatus of claim 105, wherein the display protocol defines at least one of a frame rate at which successive frames of the combined frame image data are displayed, and a blanking period that dictates when the combined frame image data is to be refreshed.
107. The apparatus of claim 105, wherein the at least one externally provided frame of frame image data adheres to one of the display protocols and the merge synchronization signal is synchronized in accordance with that display protocol such that the merge unit is operable to produce a next frame of the combined frame image data based on the at least one externally provided frame of frame image data.
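The video hub of claims 104-107 closes a feedback loop: a produced combined frame, and/or an externally supplied frame, can re-enter the pipeline as input to the next frame. A minimal Python sketch, where the per-frame merge function and the string placeholders are assumptions for exposition:

    def video_hub_loop(render, external_frames):
        # Each new combined frame may incorporate the previous combined
        # frame and/or an externally provided frame of frame image data.
        combined = None
        for ext in external_frames:
            combined = render(previous=combined, external=ext)
            yield combined

    step = lambda previous, external: f"merge({previous}, {external})"
    for frame in video_hub_loop(step, ["ext-0", "ext-1", "ext-2"]):
        print(frame)   # each output embeds the previous combined frame
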
108. The apparatus of any of claims 38 or 57, further comprising at least one memory hub operable to receive and store common image data, the at least one memory hub being operatively coupled to the plurality of graphics processors such that (i) at least one of the successive frames of frame image data from one or more of the graphics processors may include at least some of the common image data, and (ii) the one or more merge units are operable to produce a successive frame of the combined frame image data based on the common image data.
109. The apparatus of claim 108, wherein the common image data include texture data.
110. A method for processing image data to produce an image for covering an image area of a display, comprising:
rendering the image data into frame image data using a plurality of graphics processors;
storing the frame image data in respective local frame buffers; and
synchronously merging the frame image data from the respective local frame buffers to synchronously produce combined frame image data based thereon.
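The three steps of claim 110 can be sketched in Python as follows: render in parallel, store per-processor results in local buffers, and merge only after every buffer is complete. The thread pool join stands in for the synchronization, and the render and merge functions are placeholders rather than the claimed method:

    from concurrent.futures import ThreadPoolExecutor

    def render(gpu_id, pixel_count):
        # Stand-in for one graphics processor rendering its frame image data.
        return [f"px({gpu_id},{i})" for i in range(pixel_count)]

    def produce_combined_frame(pixel_count, gpu_count):
        # Steps 1-2: render in parallel into per-processor local buffers.
        with ThreadPoolExecutor(max_workers=gpu_count) as pool:
            buffers = list(pool.map(lambda g: render(g, pixel_count),
                                    range(gpu_count)))
        # Step 3: merge synchronously -- every buffer is complete by now.
        # Tupling per pixel is a placeholder for one of the merge modes.
        return [tuple(px) for px in zip(*buffers)]

    print(produce_combined_frame(pixel_count=2, gpu_count=2))
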
111. The method of claim 110, further comprising producing a merge synchronization signal used by at least some of the plurality of graphics processors to synchronously release the frame image data from the respective local frame buffers for merging.
112. The method of claim 111, wherein the merge synchronization signal is synchronized in accordance with a display protocol defining how respective frames of the combined frame image data are to be displayed.
113. The method of claim 112, wherein the display protocol defines at least one of a frame rate at which successive frames of the combined frame image data are displayed, and a blanking period that dictates when the combined frame image data is to be refreshed.
114. The method of claim 113, wherein the merge synchronization signal includes transitions that are proximate to ends of the blanking periods such that the combined frame image data are available for display at the end of at least one of the blanking periods.
115. The method of claim 114, wherein the transitions of the merge synchronization signal are substantially coincident with the ends of the blanking periods.
116. The method of claim 114, wherein the merge synchronization signal includes transitions that lead the ends of the blanking periods.
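Claims 112-116 tie the merge synchronization signal to the display protocol's blanking periods. The arithmetic below illustrates transitions that lead the end of each blanking period by a small margin; the 60 Hz numbers are illustrative assumptions, and blanking is modeled as opening each frame period.

    def merge_sync_transitions(frame_period, blanking, lead, frames):
        # Transition times (ms): each leads the end of a blanking period by
        # 'lead' ms so the combined frame is ready when display resumes.
        return [f * frame_period + blanking - lead for f in range(frames)]

    # 60 Hz display: ~16.67 ms frame period, 1.0 ms blanking, 0.1 ms lead.
    print(merge_sync_transitions(16.67, 1.0, 0.1, 3))
    # approximately [0.9, 17.57, 34.24]
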
117. The method of claim 111, wherein the image data are rendered into the respective frame buffers asynchronously with respect to the merge synchronization signal.
118. The method of claim 110, wherein the graphics processors can operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
119. The method of claim 118, further comprising instructing the graphics processors to operate in the one or more modes on a frame-by-frame basis.
120. The method of claim 118, wherein at least one of the modes provides that:
one or more of the graphics processors completes rendering the image data into the respective frame buffers prior to the end of each blanking period;
one or more of the graphics processors completes rendering the image data into the respective frame buffers prior to the end of an integral number of blanking periods; and
one or more of the graphics processors includes an integral number of local frame buffers and that the one or more graphics processors completes rendering the image data into the respective integral number of frame buffers prior to the ends of a corresponding integral number of blanking periods.
121. The method of claim 118, wherein the modes include at least one of area division, averaging, layer blending, Z-sorting and layer blending, and flip animation.
122. The method of claim 118, wherein at least one of the modes is an area division mode providing that at least two of the local frame buffers are partitioned into respective rendering areas that correspond with respective portions of the image area and non-rendering areas, and an aggregate of the rendering areas results in a total rendering area that corresponds with all portions of the image area.
123. The method of claim 122, wherein the area division mode further provides that the at least two graphics processors complete rendering the image data into the respective rendering areas of the frame buffers prior to the end of each blanking period.
124. The method of claim 122, further comprising synchronously aggregating the frame image data from the respective rendering areas of the at least two graphics processors based on alpha blending values to produce the combined frame image data, and the combined frame image data are capable of covering the image area.
125. The method of claim 118, wherein at least one of the modes is an averaging mode providing that the local frame buffers of at least two of the graphics processors include rendering areas that each correspond with all portions of the image area and that the respective frame image data from the at least two local frame buffers are averaged to produce the combined frame image data.
126. The method of claim 125, wherein the averaging mode further provides that the at least two graphics processors complete rendering the image data into the respective rendering areas of the frame buffers prior to the end of each blanking period.
127. The method of claim 125, further comprising synchronously averaging the frame image data from the respective rendering areas of the at least two graphics processors based on alpha blending values to produce the combined frame image data, and the combined frame image data are capable of covering the image area.
128. The method of claim 118, wherein at least one of the modes is a layer blending mode providing that: (i) at least some of the image data are rendered into the local frame buffers of at least two of the graphics processors such that each of these local frame buffers includes frame image data representing a portion of the combined frame image data; (ii) each of the portions of the combined frame image data is prioritized; and (iii) the method further comprises synchronously producing the combined frame image data by layering each of the frame image data in an order according to the priority thereof.
129. The method of claim 128, further comprising synchronously layering the frame image data such that one of the layers of frame image data may overwrite another of the layers of frame image data depending on the priority of the layers.
130. The method of claim 128, wherein the layer blending mode further provides that the at least two graphics processors complete rendering the image data into the respective frame buffers prior to the end of each blanking period.
131. The method of claim 118, wherein at least one of the modes is a Z-sorting and layer blending mode providing that: (i) at least some of the image data are rendered into the local frame buffers of at least two of the graphics processors such that each of the at least two local frame buffers includes frame image data representing a portion of the combined frame image data; (ii) the frame image data include Z-values representing image depth; and (iii) the method further comprises synchronously producing the combined frame image data by Z-sorting and layering each of the frame image data in accordance with the image depth.
132. The method of claim 131, further comprising synchronously layering the frame image data such that at least a portion of one of the frame image data may overwrite a portion of another of the frame image data depending on the Z-values thereof.
133. The method of claim 131, wherein the Z-sorting and layer blending mode further provides that the at least two graphics processors complete rendering the image data into the respective frame buffers prior to the end of each blanking period.
134. The method of claim 118, wherein at least one of the modes is a flip animation mode providing that: (i) the local frame buffers of at least two graphics processors include frame image data that are capable of covering the image area; and (ii) the method further comprises producing the combined frame image data by sequentially releasing the respective frame image data from the at least two graphics processors.
135. The method of claim 134, wherein the flip animation mode further provides that the at least two graphics processors complete rendering the image data into the respective frame buffers prior to the ends of an integral number of blanking periods.
136. The method of claim 135, wherein the integral number of blanking periods corresponds to the number of graphics processors participating in the flip animation mode.
137. The method of claim 135, wherein the integral number of blanking periods corresponds to the number of local frame buffers participating in the flip animation mode.
138. The method of claim 110, further comprising:
receiving at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data;
transmitting the at least one of a frame of the combined frame image data and the at least one externally provided frame of frame image data to at least one of the plurality of graphics processors such that at least one of the successive frames of frame image data from one or more of the graphics processors may include at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data; and
producing a successive frame of the combined frame image data based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data.
139. The method of claim 110, further comprising:
receiving and storing common image data in a memory;
transmitting at least some of the common image data to at least some of the plurality of graphics processors such that at least one of the successive frames of frame image data from one or more of the graphics processors may include at least some of the common image data; and
producing a successive frame of the combined frame image data based on the common image data.
140. The method of claim 139, wherein the common image data include texture data.
141. A method for processing image data to produce an image for display on a display screen, comprising:
rendering the image data into frame image data using a plurality of graphics processors;
storing the frame image data in respective local frame buffers;
producing respective local synchronization counts indicating when the frame image data should be released from the respective local frame buffers; and
synchronously producing combined frame image data from the frame image data.
142. The method of claim 141, further comprising producing a synchronization signal that is synchronized in accordance with a display protocol defining how respective frames of the combined frame image data are to be displayed.
143. The method of claim 142, wherein the display protocol defines at least one of a frame rate at which successive frames of the combined frame image data are displayed, and a blanking period that dictates when the combined frame image data is to be refreshed.
144. The method of claim 142, further comprising one of incrementing and decrementing the respective synchronization counts based on the synchronization signal.
145. The method of claim 144, further comprising releasing the respective frame image data for merging when the respective local synchronization counts reach a threshold.
146. The method of claim 145, further comprising providing respective reset signals indicative of when the respective synchronization counts should be reset.
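Claims 141-146 can be modeled with a small counter per graphics processor: the shared synchronization signal drives the count, the frame image data are released for merging when the count reaches a threshold, and a reset signal clears it. Counting up rather than down is one of the two options claim 144 allows; the class and method names are illustrative.

    class LocalSyncCounter:
        def __init__(self, threshold):
            self.threshold = threshold
            self.count = 0

        def on_sync(self):
            # Called on each transition of the synchronization signal.
            self.count += 1
            return self.count >= self.threshold   # True: release for merging

        def reset(self):
            self.count = 0    # driven by the respective reset signal

    counter = LocalSyncCounter(threshold=2)
    print([counter.on_sync() for _ in range(3)])  # [False, True, True]
    counter.reset()
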
147. A method for processing image data to produce an image for covering an image area of a display, the method being carried out using a subset of a plurality of processing nodes coupled together on a packet-switched network, the method comprising:
selecting from among the plurality of processing nodes at least one accelerator node including a plurality of sets of graphics processors, each graphics processor being operable to render the image data into frame image data and to store the frame image data in a respective local frame buffer;
selecting from among the plurality of processing nodes one or more merge nodes including one or more merge units, the one or more merge units including one or more local merge units and a core merge unit, a respective one of the local merge units being associated with each set of graphics processors and being operable to synchronously receive the frame image data from the respective local frame buffers and to synchronously produce local combined frame image data based thereon, and the core merge unit being associated with each local merge unit and being operable to synchronously receive the local combined frame image data from the respective local merge units and to synchronously produce the combined frame image data based thereon;
establishing a control node including a control processor operable to provide instructions to the subset of processing nodes over the packet-switched network, and operable to select the subset of processing nodes to participate in processing the image data to produce the image for display; and
employing at least one packet switch node operable to route data packets between the subset of nodes, the data packets forming at least the image data, frame image data, and combined frame image data.
148. The method of claim 147, further comprising issuing one or more node configuration requests to the plurality of processing nodes on the packet-switched network, the one or more node configuration requests prompting the processing nodes to transmit at least some information concerning their data processing capabilities and their destination address to the control node.
149. The method of claim 148, wherein the at least some information concerning data processing capabilities includes at least one of a rate at which the image data can be processed into frame image data, a number of frame buffers available, frame image data resolution, an indication of respective modes of operation supported by the graphics processors, a number of parallel paths into which respective frame image data may be input for merging into the combined frame image data, an indication of respective modes of operation supported by the merge units, memory size available for storing data, memory access speed, and memory throughput.
150. The method of claim 148, further comprising transmitting the destination addresses of substantially all of the processing nodes that responded to the one or more node configuration requests to each of those processing nodes.
151. The method of claim 148, further comprising transmitting the information provided by each processing node in response to the one or more node configuration requests and the destination addresses thereof to the control node.
152. The method of claim 151, further comprising selecting the subset of processing nodes from among the processing nodes that responded to the one or more node configuration requests to participate in processing the image data to produce the image for display based on their responses to the one or more node configuration requests.
153. The method of claim 152, further comprising transmitting a request to participate in processing the image data to produce the image for display to each of the subset of processing nodes, prompting them to accept such participation.
154. The method of claim 153, further comprising transmitting one or more further node configuration requests to one or more of the subset of processing nodes that responded to the request to participate.
155. The method of claim 154, wherein the one or more further node configuration requests include at least a request for the node to provide information concerning a format in which the node expects to transmit and receive at least one of the image data, the frame image data, the combined frame image data, and processing instructions.
156. The method of claim 155, further comprising determining which of the processing nodes are to be retained in the subset of processing nodes to participate in processing the image data to produce the image for display based on at least one of the data processing capabilities and format information provided in response to the one or more node configuration requests.
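The configuration exchange of claims 148-156 is, in effect, a capability negotiation: the control node broadcasts requests, collects each node's capabilities and destination address, and retains only nodes that satisfy the job's requirements. A minimal Python sketch; the capability fields, threshold, and addresses are all illustrative assumptions.

    def select_nodes(replies, min_fill_rate, required_mode):
        # replies: destination address -> reported capabilities
        chosen = []
        for address, caps in replies.items():
            if caps["fill_rate"] >= min_fill_rate and required_mode in caps["modes"]:
                chosen.append(address)
        return chosen

    replies = {
        "10.0.0.2": {"fill_rate": 250, "modes": {"area_division", "averaging"}},
        "10.0.0.3": {"fill_rate": 120, "modes": {"area_division"}},
        "10.0.0.4": {"fill_rate": 300, "modes": {"flip_animation"}},
    }
    print(select_nodes(replies, min_fill_rate=200, required_mode="area_division"))
    # ['10.0.0.2']
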
157. The method of claim 147, wherein the packet-switched network is formed on a local area network.
158. The method of claim 147, further comprising selecting from among the plurality of processing nodes at least one video hub node operable to (i) receive at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data, and (ii) transmit at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data to at least one of the graphics processors such that one or more of the successive frames of frame image data and a next frame of the combined frame image data may be based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data.
159. The method of claim 147, further comprising selecting from among the plurality of processing nodes at least one memory hub node operable to (i) receive and store common image data, and (ii) transmit the common image data to at least one of the graphics processors such that at least one of the successive frames of frame image data and a successive frame of the combined frame image data may include at least some of the common image data.
160. The method of claim 147, wherein the packet-switched network is formed on an open network.
161. The method of claim 160, wherein the open network is the Internet.
162. The method of claim 147, further comprising establishing a hierarchy among the subset of processing nodes such that the control node is an (n)th level control node, the at least one merge node is an (n)th level merge node, the packet switch node is an (n)th level packet switch node, and at least one accelerator node is an (n)th level accelerator node that includes an (n−1)th level control node and a plurality of (n−1)th level accelerator nodes coupled together by way of an (n−1)th level packet switch node over the packet-switched network.
163. The method of claim 162, wherein at least one of the (n−i)th accelerator nodes includes an (n−i−1)th level control node and a plurality of (n−i−1)th level accelerator nodes coupled together by way of an (n−i−1)th level packet switch node over the packet-switched network.
164. The method of claim 163, further comprising transmitting, over the packet-switched network to one or more of the subset of processing nodes, (n)th level information sets that at least one of (i) contain instructions and (ii) contain various data used in the production of the image.
165. The method of claim 164, wherein the instructions include information that causes the graphics processors, the local merge units, and the core merge unit to operate in one or more modes that affect at least one of (i) timing relationships between when image data are rendered, when frame image data are released from respective local frame buffers, and when frame image data are merged; and (ii) how the frame image data are merged to synchronously produce the combined frame image data.
166. The method of claim 164, wherein the various data used in the production of the image includes at least one of texture data, video data, and an externally provided frame of frame image data.
167. The method of claim 164, wherein each of the (n)th level information sets includes (i) an (n)th level header indicating that the given information set was issued from the (n)th level control node; and (ii) a plurality of (n)th level information blocks, each including information for one or more of the (n)th level nodes.
168. The method of claim 167, further comprising routing the plurality of (n)th level information blocks to the respective (n)th level nodes.
169. The method of claim 167, wherein at least one of the plurality of (n)th level information blocks includes information for the (n)th level accelerator node.
170. The method of claim 169, wherein the (n)th level information block for the (n)th level accelerator node includes a plurality of respective (n−1)th information sub-blocks for the respective (n−1)th nodes.
171. The method of claim 170, further comprising routing the plurality of (n−1)th level information sub-blocks to the respective (n−1)th level nodes.
172. The method of claim 170, wherein at least one of the (n−i)th level information sub-blocks for an (n−i)th level accelerator node includes a plurality of respective information sub-blocks for respective (n−i−1)th nodes.
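Claims 167-172 describe information sets that nest one level per tier of the node hierarchy: an (n)th level set carries blocks for the (n)th level nodes, and a block addressed to an accelerator node that is itself a cluster carries (n-1)th level sub-blocks, and so on downward. A minimal recursive Python sketch, with the dictionary layout as an illustrative assumption:

    def route_information_set(info_set, deliver):
        # Deliver each block to its node at this level; recurse one level
        # down for blocks that carry sub-blocks for a nested cluster.
        for block in info_set["blocks"]:
            if "sub_blocks" in block:
                route_information_set({"level": info_set["level"] - 1,
                                       "blocks": block["sub_blocks"]}, deliver)
            else:
                deliver(block["node"], block["payload"])

    info_set = {"level": 2, "blocks": [
        {"node": "merge-node", "payload": "mode=area_division"},
        {"sub_blocks": [{"node": "gpu-0", "payload": "render-left"},
                        {"node": "gpu-1", "payload": "render-right"}]},
    ]}
    route_information_set(info_set, lambda node, p: print(node, "<-", p))
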
173. The method of claim 169, further comprising selecting from among the plurality of processing nodes at least one (n)th level video hub node operable to (i) receive at least one of a frame of the combined frame image data and at least one externally provided frame of frame image data, and (ii) transmit at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data to at least one of the graphics processors such that one or more of the successive frames of frame image data and a next frame of the combined frame image data may be based on the at least one of the at least one externally provided frame of frame image data and the frame of the combined frame image data.
174. The method of claim 173, wherein at least one of the plurality of (n)th level information blocks includes information for the at least one (n)th level video hub node.
175. The method of claim 169, further comprising selecting from among the plurality of processing nodes at least one (n)th level memory hub node operable to (i) receive and store common image data, and (ii) transmit the common image data to at least one of the graphics processors such that at least one of the successive frames of frame image data and a successive frame of the combined frame image data may include at least some of the common image data.
176. The method of claim 175, wherein at least one of the plurality of (n)th level information blocks includes information for the at least one (n)th level memory hub node.
US09/815,781 2000-03-23 2001-03-23 Image processing apparatus and method Expired - Lifetime US6924807B2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2000-82686 2000-03-23
JP2000082686 2000-03-23
JP2000396191 2000-12-26
JP2000-396191 2000-12-26
US27769501P 2001-03-21 2001-03-21
US27785001P 2001-03-22 2001-03-22

Publications (2)

Publication Number Publication Date
US20020030694A1 true US20020030694A1 (en) 2002-03-14
US6924807B2 US6924807B2 (en) 2005-08-02

Family

ID=34812159

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/815,781 Expired - Lifetime US6924807B2 (en) 2000-03-23 2001-03-23 Image processing apparatus and method

Country Status (1)

Country Link
US (1) US6924807B2 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6933943B2 (en) * 2002-02-27 2005-08-23 Hewlett-Packard Development Company, L.P. Distributed resource architecture and system
US7477205B1 (en) 2002-11-05 2009-01-13 Nvidia Corporation Method and apparatus for displaying data from multiple frame buffers on one or more display devices
US20040179007A1 (en) * 2003-03-14 2004-09-16 Bower K. Scott Method, node, and network for transmitting viewable and non-viewable data in a compositing system
US7629979B2 (en) * 2003-08-20 2009-12-08 Hewlett-Packard Development Company, L.P. System and method for communicating information from a single-threaded application over multiple I/O busses
US7034834B2 (en) * 2003-10-24 2006-04-25 Microsoft Corporation Communication protocol for synchronizing animation systems
US7460126B2 (en) * 2004-08-24 2008-12-02 Silicon Graphics, Inc. Scalable method and system for streaming high-resolution media
US7542042B1 (en) * 2004-11-10 2009-06-02 Nvidia Corporation Subpicture overlay using fragment shader
US7522167B1 (en) * 2004-12-16 2009-04-21 Nvidia Corporation Coherence of displayed images for split-frame rendering in multi-processor graphics system
JP4893621B2 (en) * 2005-05-20 2012-03-07 Sony Corporation Signal processing device
US20070040839A1 (en) * 2005-08-17 2007-02-22 Tzu-Jen Kuo Motherboard and computer system with multiple integrated graphics processors and related method
US20090020611A1 (en) * 2005-09-30 2009-01-22 Symbol Technologies, Inc. Bi-optic imaging scanner with preprocessor for processing image data from multiple sources
US7430682B2 (en) * 2005-09-30 2008-09-30 Symbol Technologies, Inc. Processing image data from multiple sources
US7728841B1 (en) * 2005-12-19 2010-06-01 Nvidia Corporation Coherent shader output for multiple targets
US8505824B2 (en) * 2007-06-28 2013-08-13 Symbol Technologies, Inc. Bar code readers having multifold mirrors
US20090020612A1 (en) * 2007-06-28 2009-01-22 Symbol Technologies, Inc. Imaging dual window scanner with presentation scanning
US8033472B2 (en) * 2007-06-28 2011-10-11 Symbol Technologies, Inc. Electro-optical imaging reader having plural solid-state imagers with shutters to prevent concurrent exposure
US7780086B2 (en) * 2007-06-28 2010-08-24 Symbol Technologies, Inc. Imaging reader with plural solid-state imagers for electro-optically reading indicia
US8662397B2 (en) * 2007-09-27 2014-03-04 Symbol Technologies, Inc. Multiple camera imaging-based bar code reader
US20100001075A1 (en) * 2008-07-07 2010-01-07 Symbol Technologies, Inc. Multi-imaging scanner for reading images
US20100102129A1 (en) * 2008-10-29 2010-04-29 Symbol Technologies, Inc. Bar code reader with split field of view
US8479996B2 (en) * 2008-11-07 2013-07-09 Symbol Technologies, Inc. Identification of non-barcoded products
US8079523B2 (en) * 2008-12-15 2011-12-20 Symbol Technologies, Inc. Imaging of non-barcoded documents
US8146821B2 (en) 2009-04-02 2012-04-03 Symbol Technologies, Inc. Auto-exposure for multi-imager barcode reader
US8907941B2 (en) * 2009-06-23 2014-12-09 Disney Enterprises, Inc. System and method for integrating multiple virtual rendering systems to provide an augmented reality
US9190012B2 (en) * 2009-12-23 2015-11-17 Ati Technologies Ulc Method and system for improving display underflow using variable HBLANK
KR20110110433A (en) * 2010-04-01 2011-10-07 Samsung Electronics Co., Ltd. Image display device and method for displaying image
US8939371B2 (en) 2011-06-30 2015-01-27 Symbol Technologies, Inc. Individual exposure control over individually illuminated subfields of view split from an imager in a point-of-transaction workstation
JP6325886B2 (en) * 2014-05-14 2018-05-16 Olympus Corporation Display processing apparatus and imaging apparatus
US10362241B2 (en) * 2016-12-30 2019-07-23 Microsoft Technology Licensing, Llc Video stream delimiter for combined frame

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2026527A1 (en) 1989-10-11 1991-04-12 Douglas A. Fischer Parallel polygon/pixel rendering engine
US5557711A (en) 1990-10-17 1996-09-17 Hewlett-Packard Company Apparatus and method for volume rendering
JPH05189549A (en) 1991-09-10 1993-07-30 Kubota Corp Image data processor by multiprocessor
US5392393A (en) 1993-06-04 1995-02-21 Sun Microsystems, Inc. Architecture for a high performance three dimensional graphics accelerator
KR100269106B1 (en) 1996-03-21 2000-11-01 Yun Jong-yong Multiprocessor graphics system
CN1139254C (en) 1998-06-26 2004-02-18 General Instrument Corporation Terminal for composing and presenting MPEG-4 video programs

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4949280A (en) * 1988-05-10 1990-08-14 Battelle Memorial Institute Parallel processor-based raster graphics system architecture
US5546530A (en) * 1990-11-30 1996-08-13 Vpl Research, Inc. Method and apparatus for rendering graphical images using parallel processing
US5727192A (en) * 1995-03-24 1998-03-10 3Dlabs Inc. Ltd. Serial rendering system with auto-synchronization on frame blanking
US5657478A (en) * 1995-08-22 1997-08-12 Rendition, Inc. Method and apparatus for batchable frame switch and synchronization operations
US6157395A (en) * 1997-05-19 2000-12-05 Hewlett-Packard Company Synchronization of frame buffer swapping in multi-pipeline computer graphics display systems
US5969728A (en) * 1997-07-14 1999-10-19 Cirrus Logic, Inc. System and method of synchronizing multiple buffers for display
US6128025A (en) * 1997-10-10 2000-10-03 International Business Machines Corporation Embedded frame buffer system and synchronization method
US6157393A (en) * 1998-07-17 2000-12-05 Intergraph Corporation Apparatus and method of directing graphical data to a display device
US6292200B1 (en) * 1998-10-23 2001-09-18 Silicon Graphics, Inc. Apparatus and method for utilizing multiple rendering pipes for a single 3-D display
US6411302B1 (en) * 1999-01-06 2002-06-25 Concise Multimedia And Communications Inc. Method and apparatus for addressing multiple frame buffers
US6329996B1 (en) * 1999-01-08 2001-12-11 Silicon Graphics, Inc. Method and apparatus for synchronizing graphics pipelines
US6753878B1 (en) * 1999-03-08 2004-06-22 Hewlett-Packard Development Company, L.P. Parallel pipelined merge engines
US6573905B1 (en) * 1999-11-09 2003-06-03 Broadcom Corporation Video and graphics system with parallel processing of graphics windows
US6473086B1 (en) * 1999-12-09 2002-10-29 Ati International Srl Method and apparatus for graphics processing using parallel graphics processors
US6664968B2 (en) * 2000-01-06 2003-12-16 International Business Machines Corporation Display device and image displaying method of display device

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6924821B2 (en) * 2000-04-01 2005-08-02 Autodesk Canada Inc. Processing pipeline responsive to input and output frame rates
US20020051005A1 (en) * 2000-04-01 2002-05-02 Discreet Logic Inc. Processing pipeline responsive to input and output frame rates
US7110007B2 (en) * 2001-06-28 2006-09-19 Canon Kabushiki Kaisha Image processing apparatus and method
US20030016235A1 (en) * 2001-06-28 2003-01-23 Masayuki Odagawa Image processing apparatus and method
US20050017986A1 (en) * 2001-10-13 2005-01-27 Majid Anwar Systems and methods for generating visual representations of graphical data and digital document processing
US7271814B2 (en) * 2001-10-13 2007-09-18 Picsel Technologies, Ltd. Systems and methods for generating visual representations of graphical data and digital document processing
US20030164842A1 (en) * 2002-03-04 2003-09-04 Oberoi Ranjit S. Slice blend extension for accumulation buffering
US20030184548A1 (en) * 2002-03-29 2003-10-02 Emmot Darel N. System and method for passing messages among processing nodes in a distributed system
US6873331B2 (en) * 2002-03-29 2005-03-29 Hewlett-Packard Development Company, L.P. System and method for passing messages among processing nodes in a distributed system
US20040160451A1 (en) * 2003-02-08 2004-08-19 Walls Jeffrey Joel Apparatus and method for buffering data
US6914607B2 (en) * 2003-02-08 2005-07-05 Hewlett-Packard Development Company, L.P. Apparatus and method for buffering data
EP1649357A1 (en) * 2003-07-15 2006-04-26 Alienware Labs Corporation Multiple parallel processor computer graphics system
US20060290700A1 (en) * 2003-07-15 2006-12-28 Alienware Labs. Corp. Multiple parallel processor computer graphics system
US20080211816A1 (en) * 2003-07-15 2008-09-04 Alienware Labs. Corp. Multiple parallel processor computer graphics system
US7782327B2 (en) 2003-07-15 2010-08-24 Alienware Labs. Corp. Multiple parallel processor computer graphics system
US20050088445A1 (en) * 2003-10-22 2005-04-28 Alienware Labs Corporation Motherboard for supporting multiple graphics cards
US7782325B2 (en) 2003-10-22 2010-08-24 Alienware Labs Corporation Motherboard for supporting multiple graphics cards
US7616207B1 (en) * 2005-04-25 2009-11-10 Nvidia Corporation Graphics processing system including at least three bus devices
US8035645B2 (en) 2005-04-25 2011-10-11 Nvidia Corporation Graphics processing system including at least three bus devices
US20100053177A1 (en) * 2005-04-25 2010-03-04 Nvidia Corporation Graphics processing system including at least three bus devices
US20060244758A1 (en) * 2005-04-29 2006-11-02 Modviz, Inc. Transparency-conserving method to generate and blend images
US7978204B2 (en) * 2005-04-29 2011-07-12 Nvidia Corporation Transparency-conserving system, method and computer program product to generate and blend images
US20100214303A1 (en) * 2006-03-27 2010-08-26 Seiko Epson Corporation Device Control Using Data Communication
US8203565B2 (en) 2006-03-27 2012-06-19 Seiko Epson Corporation Device control using data communication
US7742051B2 (en) * 2006-03-27 2010-06-22 Seiko Epson Corporation Device control using data communication
US20070223471A1 (en) * 2006-03-27 2007-09-27 Seiko Epson Corporation Device control using data communication
US8661186B2 (en) * 2006-07-26 2014-02-25 Panasonic Corporation Nonvolatile memory device, access device, and nonvolatile memory system
US20100005226A1 (en) * 2006-07-26 2010-01-07 Panasonic Corporation Nonvolatile memory device, access device, and nonvolatile memory system
WO2008036231A2 (en) * 2006-09-18 2008-03-27 Alienware Labs. Corporation Multiple parallel processor computer graphics system
WO2008036231A3 (en) * 2006-09-18 2008-11-27 Alienware Labs Corp Multiple parallel processor computer graphics system
GB2455249B (en) * 2006-09-18 2011-09-21 Alienware Labs Corp Multiple parallel processor computer graphics system
US20110093611A1 (en) * 2007-06-29 2011-04-21 Mikael Lind Network unit, a central distribution control unit and a computer program product
US8077242B2 (en) * 2007-09-17 2011-12-13 Qualcomm Incorporated Clock management of bus during viewfinder mode in digital camera device
US20090073300A1 (en) * 2007-09-17 2009-03-19 Qualcomm Incorporated Clock management of bus during viewfinder mode in digital camera device
EP2081384A3 (en) * 2008-01-15 2012-06-27 Verint Systems Inc. Video tiling using multiple digital signal processors
EP2081384A2 (en) * 2008-01-15 2009-07-22 Verint Systems Inc. Video tiling using multiple digital signal processors
US20090240859A1 (en) * 2008-03-21 2009-09-24 Hon Hai Precision Industry Co., Ltd. Automatic address setting system
US20100156894A1 (en) * 2008-10-26 2010-06-24 Zebra Imaging, Inc. Rendering 3D Data to Hogel Data
US20130055072A1 (en) * 2011-08-24 2013-02-28 Robert Douglas Arnold Multi-Threaded Graphical Display System
US10237559B2 (en) * 2014-11-20 2019-03-19 Getgo, Inc. Layer-based video decoding
US20170278485A1 (en) * 2014-12-11 2017-09-28 Huawei Technologies Co., Ltd. Processing method and device for multi-screen splicing display
US10403237B2 (en) * 2014-12-11 2019-09-03 Huawei Technologies Co., Ltd. Processing method and device for multi-screen splicing display
US20190279602A1 (en) * 2016-10-25 2019-09-12 Sony Semiconductor Solutions Corporation Display control apparatus, electronic equipment, control method of display control apparatus, and program
US10867587B2 (en) * 2016-10-25 2020-12-15 Sony Semiconductor Solutions Corporation Display control apparatus, electronic equipment, and control method of display control apparatus
KR20190073350A (en) * 2016-10-31 2019-06-26 ATI Technologies ULC A method for dynamically reducing time from application rendering to on-screen in a desktop environment.
CN110073324A (en) * 2016-10-31 2019-07-30 Method and device for dynamically reducing the time from application rendering to on-screen in a desktop environment
JP2020502553A (en) * 2016-10-31 2020-01-23 ATI Technologies ULC Method and apparatus for dynamically reducing the time between rendering an application and going on-screen in a desktop environment
KR102426012B1 (en) * 2016-10-31 2022-07-27 ATI Technologies ULC Method and device for dynamically reducing the time from rendering an application to on-screen in a desktop environment
JP7198202B2 (en) 2016-10-31 2022-12-28 ATI Technologies ULC Method and Apparatus for Dynamically Reducing Application Rendering to On-Screen Time in Desktop Environment

Also Published As

Publication number Publication date
US6924807B2 (en) 2005-08-02

Similar Documents

Publication Publication Date Title
US6924807B2 (en) Image processing apparatus and method
US7173635B2 (en) Remote graphical user interface support using a graphics processing unit
US9272220B2 (en) System and method for improving the graphics performance of hosted applications
US9682318B2 (en) System and method for improving the graphics performance of hosted applications
WO2022048097A1 (en) Single-frame picture real-time rendering method based on multiple graphics cards
US20080211816A1 (en) Multiple parallel processor computer graphics system
US8961316B2 (en) System and method for improving the graphics performance of hosted applications
US10623609B1 (en) Virtual video environment display systems
EP1266295B1 (en) Image processing apparatus and method
CN108389241A (en) The methods, devices and systems of textures are generated in scene of game
US20050253872A1 (en) Method and system for culling view dependent visual data streams for a virtual environment
US20120108331A1 (en) System and method for improving the graphics performance of hosted applications
US20120115601A1 (en) System amd method for improving the graphics performance of hosted applications
JP2023080128A (en) System and method for efficient multi-gpu rendering of geometry by geometry analysis while rendering
CN110677600B (en) Multi-group display method and system of ultra-wide picture, on-demand equipment and on-demand system
Pomi et al. Sttreaming Video Textures for Mixed Reality Applications in Interactive Ray Tracing Environments.
CN107391068B (en) Multi-channel three-dimensional scene playing synchronization method
Pomi et al. Interactive ray tracing for virtual TV studio applications
Yang et al. Research on network architecture and communication protocol of network virtual reality based on image rendering
JP4929848B2 (en) Video data transmission system and method, transmission processing apparatus and method
CN117014655A (en) Video rendering method, device, equipment and storage medium
Lorenz et al. Driving tiled displays with an extended chromium system based on stream cached multicast communication
WO2022190398A1 (en) 3d object streaming method, device, and program
Hadic et al. A Simple Desktop Compression and Streaming System
Chai et al. Hybrid Tiled Display System Combining Distributed Rendering and Streaming

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EBIHARA, HITOSHI;SATO, KAZUMI;MOKUNO, MASAKAZU;AND OTHERS;REEL/FRAME:012409/0709;SIGNING DATES FROM 20010529 TO 20010611

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: SONY NETWORK ENTERTAINMENT PLATFORM INC., JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT INC.;REEL/FRAME:027445/0549

Effective date: 20100401

AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONY NETWORK ENTERTAINMENT PLATFORM INC.;REEL/FRAME:027449/0303

Effective date: 20100401

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12