US20100020069A1 - Partitioning-based performance analysis for graphics imaging - Google Patents

Partitioning-based performance analysis for graphics imaging

Info

Publication number
US20100020069A1
US20100020069A1
Authority
US
United States
Prior art keywords
graphics
partitions
displaying
display
processors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/507,767
Inventor
Baback Elmieh
James P. Ritts
Angus Dorbie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to US12/507,767 (US20100020069A1)
Priority to PCT/US2009/051772 (WO2010011980A1)
Priority to CN2009801274650A (CN102089784A)
Priority to JP2011520245A (JP5242788B2)
Priority to KR1020117004633A (KR101286938B1)
Priority to CA2730298A (CA2730298A1)
Priority to EP09790829A (EP2319015A1)
Priority to TW098125262A (TW201015483A)
Assigned to QUALCOMM INCORPORATED (assignment of assignors interest; see document for details). Assignors: DORBIE, ANGUS; ELMIEH, BABACK; RITTS, JAMES P.
Publication of US20100020069A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/363: Graphics controllers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003: Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, using specific devices not provided for in groups G09G3/02 - G09G3/36, to produce spatial visual effects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454: Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00: Aspects of the architecture of display systems
    • G09G2360/12: Frame memory handling
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00: Aspects of the architecture of display systems
    • G09G2360/12: Frame memory handling
    • G09G2360/128: Frame memory using a Synchronous Dynamic RAM [SDRAM]

Definitions

  • This disclosure relates to display of graphics images.
  • Graphics processors are widely used to render two-dimensional (2D) and three-dimensional (3D) images for various applications, such as video games, graphics programs, computer-aided design (CAD) applications, simulation and visualization tools, and imaging.
  • Display processors may be used to display the rendered output of the graphics processor for presentation to a user via a display device.
  • OpenGL® (Open Graphics Library) is a standardized graphics Application Programming Interface (API). Other languages, such as Java, may define bindings to the OpenGL APIs through their own standard processes.
  • the interface includes multiple function calls, or instructions, that can be used to draw scenes from simple primitives. Graphics processors, multi-media processors, and even general-purpose CPUs can then execute applications that are written using OpenGL function calls.
  • OpenGL ES (embedded systems) is a variant of OpenGL intended for embedded devices, such as mobile wireless phones, digital multimedia players, personal digital assistants (PDAs), or video game consoles.
  • Graphics applications, such as 3D graphics applications, may describe or define contents of a scene by invoking APIs, or instructions, that in turn use the underlying graphics hardware, such as one or more processors in a graphics device, to generate an image.
  • the graphics hardware may undergo a series of state transitions that are exercised through these API's.
  • a full set of states for each API call, such as a draw call or instruction, may describe the process with which the image is rendered from one or more graphics primitives, such as one or more triangles, by the hardware.
  • binning-based, or partitioning-based, graphics hardware may often be implemented using a process in which the individual graphics primitives destined for rendering may be clustered into binning partitions, or bins, in order to divide up a scene of images displayed on a screen of a display device.
  • the hardware may do so due to screen-size or resolution constraints, or due to memory limitations associated with rendering operations.
  • Graphics primitives that may span across multiple binning partitions may be divided into multiple fragments by the hardware along the edges of the partitions before the primitive fragments are rendered.
  • the hardware may render all primitive fragments in each partition separately.
  • an individual primitive that may span, for example, across two binning partitions may be divided into two fragments, and each of these two fragments may then be independently rendered.
  • the graphics images generated by each of these fragments may then need to be re-combined within a frame of image data before being displayed on the screen of the display device.
  • primitive graphics data spanning across different partitions may be separately processed and rendered, and then the rendered image data may be recombined to form the final image.
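  • As a rough illustration of the binning step described above, the following C++ sketch computes which screen-space bins a triangle's bounding box overlaps; a primitive that touches more than one bin is a candidate for being split and rendered multiple times. The bin grid, the Triangle type, and the bounding-box test are assumptions made for this example, not structures defined by this disclosure.

```cpp
// Hypothetical sketch: deciding which screen-space bins a triangle touches.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };
struct Triangle { Vec2 v[3]; };

struct BinGrid {
    int cols, rows;      // e.g. a 2 x 2 partitioning of the screen
    float binW, binH;    // bin size in pixels
};

// Returns the indices of all bins whose rectangle overlaps the triangle's
// axis-aligned bounding box (a conservative test many binners use).
std::vector<int> binsTouched(const Triangle& t, const BinGrid& g) {
    float minX = std::min({t.v[0].x, t.v[1].x, t.v[2].x});
    float maxX = std::max({t.v[0].x, t.v[1].x, t.v[2].x});
    float minY = std::min({t.v[0].y, t.v[1].y, t.v[2].y});
    float maxY = std::max({t.v[0].y, t.v[1].y, t.v[2].y});

    int c0 = std::clamp(int(minX / g.binW), 0, g.cols - 1);
    int c1 = std::clamp(int(maxX / g.binW), 0, g.cols - 1);
    int r0 = std::clamp(int(minY / g.binH), 0, g.rows - 1);
    int r1 = std::clamp(int(maxY / g.binH), 0, g.rows - 1);

    std::vector<int> bins;
    for (int r = r0; r <= r1; ++r)
        for (int c = c0; c <= c1; ++c)
            bins.push_back(r * g.cols + c);
    return bins;
}

int main() {
    BinGrid grid{2, 2, 160.0f, 120.0f};                 // 320x240 screen, four bins
    Triangle tri{{{150, 60}, {170, 60}, {160, 100}}};   // straddles the vertical boundary
    auto bins = binsTouched(tri, grid);
    std::printf("triangle touches %zu bin(s)\n", bins.size()); // 2 -> must be split
}
```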
  • this disclosure relates to techniques for providing a visual representation of a graphical scene that includes a number of different graphical partitions, which may allow a user to identify portions of the graphics scene that exhibit reduced performance due to costs that may be associated with screen partitioning.
  • a graphics device, such as a mobile device, may provide partitioning, or binning, information to an external computing device (e.g., personal computer) based upon the number and type of partitions that have been created by a graphics driver.
  • the graphics device may also provide graphics instructions and state information to the computing device.
  • the computing device may display one or more graphics images in a graphical scene based upon the graphics instructions and the state information.
  • the computing device may also display a graphical representation of partitions that overlay the scene based upon the received partitioning information, and may also provide analysis information regarding potential partitioning costs or performance overhead. An application developer may use this information to investigate alternate compositions of the scene to help reduce these costs and/or performance overhead.
  • a method comprises displaying one or more graphics images in a graphical scene, displaying a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene, and analyzing graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • a device comprises a display device and one or more processors.
  • the one or more processors are configured to display one or more graphics images in a graphical scene on the display device, display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene on the display device, and analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • a computer-readable medium comprising computer-executable instructions for causing one or more processors to display one or more graphics images in a graphical scene, display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene, and analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • the techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in a processor, which may refer to one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP), or other equivalent integrated or discrete logic circuitry.
  • this disclosure also contemplates computer-readable media comprising instructions to cause a processor to perform any of a variety of techniques as described in this disclosure.
  • the computer-readable medium may form part of a computer program product, which may be sold to manufacturers and/or used in a device.
  • the computer program product may include the computer-readable medium, and in some cases, may also include packaging materials.
  • FIG. 1 is a block diagram illustrating a graphics device that may provide graphics instructions, state and/or performance information, and partitioning information, to an application computing device, according to one aspect of the disclosure.
  • FIG. 2 is a block diagram illustrating certain details of the graphics processing system, graphics driver, and application computing device shown in FIG. 1 , according to one aspect of the disclosure.
  • FIG. 3 is a flow diagram illustrating additional details of operations that may be performed by the control processor, graphics processor, vertex processor, and display processor shown in FIG. 1 , according to one aspect of the disclosure.
  • FIG. 4 is a block diagram illustrating additional details of the graphics driver shown in FIG. 2 , according to one aspect of the disclosure.
  • FIG. 5A is a conceptual diagram illustrating a first example of graphics data that may span across four partitions of a screen area provided by a display device, according to one aspect of the disclosure.
  • FIG. 5B is a conceptual diagram illustrating graphics data of the first example of FIG. 5A that is split along partition boundaries.
  • FIG. 6 is a conceptual diagram illustrating a second example of graphics data that may span across eight partitions of a screen area provided by a display device, according to one aspect of the disclosure.
  • FIG. 7 is a flow diagram of a first method that may be performed by the application computing device shown in FIG. 1 , according to one aspect of the disclosure.
  • FIG. 8 is a flow diagram of a second method that may be performed by the application computing device shown in FIG. 1 , according to one aspect of the disclosure.
  • FIG. 9 is a conceptual diagram illustrating an example of a graphics device that is coupled to a display device for displaying information in a graphic window, according to one aspect of the disclosure.
  • FIG. 10 is a conceptual diagram illustrating another example of a graphics device coupled to a display device that displays information within a graphical window, according to one aspect of the disclosure.
  • FIG. 1 is a block diagram illustrating a graphics device 2 that may provide graphics instructions 30 , state and/or performance information 32 , and partitioning information 33 , to an application computing device 20 , according to one aspect of the disclosure.
  • Graphics device 2 may be a stand-alone device or may be part of a larger system.
  • Graphics device 2 may form part of a wireless communication device (such as a wireless mobile handset), or may be part of a digital camera, video camera, digital multimedia player, personal digital assistant (PDA), video game console, other video device, or a dedicated viewing station (such as a television).
  • Graphics device 2 may also comprise a personal computer or a laptop device.
  • Graphics device 2 may also be included in one or more integrated circuits, chips, or chipsets, which may be used in some or all of the devices described above.
  • graphics device 2 may be capable of executing various applications, such as graphics applications, video applications, audio applications, and/or other multi-media applications.
  • graphics device 2 may be used for graphics applications, video game applications, video playback applications, digital camera applications, instant messaging applications, video teleconferencing applications, mobile applications, or video streaming applications.
  • Graphics device 2 may be capable of processing a variety of different data types and formats. For example, graphics device 2 may process still image data, moving image (video) data, or other multi-media data, as will be described in more detail below.
  • the image data may include computer-generated graphics data.
  • graphics device 2 includes a graphics processing system 4 , a storage medium 8 (which may comprise memory), and a display device 6 .
  • Programmable processors 10 , 12 , 14 , and 16 may be included within graphics processing system 4 .
  • Programmable processor 10 is a control, or general-purpose, processor.
  • Programmable processor 12 is a graphics processor, programmable processor 14 is a vertex processor, and programmable processor 16 is a display processor.
  • Control processor 10 may be capable of controlling graphics processor 12 , vertex processor 14 , and/or display processor 16 .
  • graphics processing system 4 may include other forms of multi-media processors.
  • graphics processing system 4 is coupled both to storage medium 8 and to display device 6 .
  • Storage medium 8 may include any permanent or volatile memory that is capable of storing instructions and/or data, such as, for example, synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), embedded dynamic random access memory (eDRAM), static random access memory (SRAM), or flash memory.
  • Display device 6 may be any device capable of displaying image data for display purposes, such as an LCD (liquid crystal display), plasma display device, or other television (TV) display device.
  • Vertex processor 14 is capable of managing vertex information and processing vertex transformations.
  • vertex processor 14 may comprise a digital signal processor (DSP).
  • Graphics processor 12 may be a dedicated graphics rendering device utilized to render, manipulate, and display computerized graphics.
  • Graphics processor 12 may implement various complex graphics-related algorithms.
  • the complex algorithms may correspond to representations of two-dimensional or three-dimensional computerized graphics.
  • Graphics processor 12 may implement a number of so-called “primitive” graphics operations, such as forming points, lines, and triangles or other polygon surfaces, to create complex, three-dimensional images on a display, such as display device 6 .
  • Graphics processor 12 may carry out instructions that are stored in storage medium 8 .
  • Storage medium 8 is capable of storing application instructions 21 for an application (such as a graphics or video application), as well as one or more graphics drivers 18 .
  • Application instructions 21 may be loaded from storage medium 8 into graphics processing system 4 for execution.
  • control processor 10 , graphics processor 12 , and display processor 16 may execute instructions 21 .
  • application instructions 21 may comprise one or more downloadable modules that are downloaded dynamically, over the air, into storage medium 8 .
  • application instructions 21 may comprise a call stream of binary instructions that are generated or compiled from application programming interface (API) instructions created by an application developer.
  • Graphics drivers 18 may also be loaded from storage medium 8 into graphics processing system 4 for execution.
  • control processor 10 and/or graphics processor 12 may execute certain instructions from graphics drivers 18 .
  • graphics drivers 18 are loaded and executed by graphics processor 12 . Graphics drivers 18 will be described in further detail below.
  • Storage medium 8 also includes graphics data mapping information 23 .
  • Graphics data mapping information 23 includes information to map one or more of application instructions 21 to graphics data that may be rendered during execution of application instructions 21 .
  • the graphics data which may be stored in storage medium 8 and/or buffers 15 , may include one or more primitives (e.g., polygons).
  • Graphics data mapping information 23 may maintain a mapping of individual primitives that are to be rendered to individual instructions. After the primitives have been rendered, mapping information 23 allows a mapping from individual instructions back to the original graphics data that was used to create one or more images that are ultimately displayed on display device 6 . Mapping information 23 may, in some cases, be useful for debugging and/or performance analysis.
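  • A minimal sketch of the kind of two-way lookup that graphics data mapping information 23 could maintain is shown below; the container choice and the record() helper are assumptions for illustration only.

```cpp
// Illustrative two-way mapping between draw instructions and primitives.
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

struct GraphicsDataMapping {
    // instruction index -> primitives that the draw call produced
    std::unordered_map<uint32_t, std::vector<uint32_t>> instrToPrims;
    // primitive id -> instruction that generated it (reverse lookup for debugging)
    std::unordered_map<uint32_t, uint32_t> primToInstr;

    void record(uint32_t instr, uint32_t prim) {
        instrToPrims[instr].push_back(prim);
        primToInstr[prim] = instr;
    }
};

int main() {
    GraphicsDataMapping map;
    map.record(/*instr=*/7, /*prim=*/100);   // draw call 7 emitted primitives 100 and 101
    map.record(7, 101);
    std::printf("primitive 101 came from instruction %u\n", map.primToInstr.at(101));
}
```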
  • graphics processing system 4 includes one or more buffers 15 .
  • Control processor 10 , graphics processor 12 , vertex processor 14 , and/or display processor 16 each have access to buffers 15 , and may store data in or retrieve data from buffers 15 .
  • Buffers 15 may comprise cache memory, and may be capable of storing both data and instructions.
  • buffers 15 may include one or more of application instructions 21 or one or more instructions from graphics drivers 18 that have been loaded into graphics processing system 4 from storage medium 8 .
  • Buffers 15 and/or storage medium 8 may also contain graphics data used during instruction execution.
  • Application instructions 21 may, in certain cases, include instructions for a graphics application, such as a 3D graphics application.
  • Application instructions 21 may comprise instructions that describe or define contents of a graphics scene that includes one or more graphics images.
  • graphics processing system 4 may undergo a series of state transitions.
  • One or more instructions within graphics drivers 18 may also be executed to render or display graphics images on display device 6 during execution of application instructions 21 .
  • a full set of states for an instruction may describe a process with which an image is rendered by graphics processing system 4 .
  • an application developer who has written application instructions 21 may often have limited ability to interactively view or modify these states for purposes of debugging or experimenting with alternate methods of describing or rendering images in a defined scene.
  • different hardware platforms may have different hardware designs and implementations of these states and/or state transitions.
  • binning-based graphics hardware, such as one or more of processors 10 , 12 , 14 , and 16 , may similarly cluster graphics primitives into binning partitions that divide up a scene.
  • the hardware may do so based on screen size or resolution constraints of display device 6 , or based on memory limitations of storage medium 8 associated with rendering operations.
  • Primitives that may span across multiple binning partitions may be divided into multiple fragments by one or more of processors 10 , 12 , 14 , or 16 along the edges of the partitions before the primitive fragments are rendered. The primitive fragments in each partition may then be rendered separately.
  • Binning partitions, in general, may vary in number, depending on the hardware architecture, and may have various sizes and shapes.
  • the binning partitions may include multiple (e.g., four, eight) rectangular-shaped partitions.
  • an individual primitive that may span, for example, across two binning partitions may be divided into two fragments, and each of these two fragments may then be independently rendered.
  • the graphics images generated by each of these fragments may then need to be re-combined within a frame of image data before being displayed on the screen of display device 6 .
  • dividing individual primitives that span across multiple binning partitions can have potential processing overhead, and cause overall performance degradation.
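  • The number and size of binning partitions may be driven by how much of a frame fits in fast memory. The C++ sketch below picks a rectangular bin grid from a screen size and an assumed per-bin memory budget; the budget value and the near-square grid strategy are illustrative assumptions, not a description of any particular hardware.

```cpp
// Hedged sketch: choosing a rectangular bin layout from a per-bin memory budget.
#include <cmath>
#include <cstdio>

struct BinLayout { int cols, rows, binW, binH; };

BinLayout chooseBins(int screenW, int screenH, int bytesPerPixel, int binBudgetBytes) {
    long frameBytes = long(screenW) * screenH * bytesPerPixel;
    int bins = int(std::ceil(double(frameBytes) / binBudgetBytes));
    // Grow a near-square grid until it has at least the required bin count.
    int cols = 1, rows = 1;
    while (cols * rows < bins) (cols <= rows ? cols : rows) += 1;
    return {cols, rows,
            (screenW + cols - 1) / cols,
            (screenH + rows - 1) / rows};
}

int main() {
    BinLayout b = chooseBins(320, 240, 4, 80 * 1024);   // 300 KiB frame, 80 KiB bin buffer
    std::printf("%d x %d bins of %d x %d pixels\n", b.cols, b.rows, b.binW, b.binH);
}
```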
  • an application developer may use application computing device 20 , shown in FIG. 1 , to assist in the process of debugging and experimenting with alternate methods for describing or rendering images in a scene.
  • Application computing device 20 may be capable of displaying a scene, and overlaying a graphical representation of binning partitions that may be implemented by graphics device 2 .
  • Application computing device 20 is coupled to graphics device 2 .
  • application computing device 20 is coupled to graphics device 2 via a Universal Serial Bus (USB) connection.
  • other types of connections, such as wireless or other forms of wired connections, may be used.
  • Application computing device 20 includes one or more processors 22 , a display device 24 , and a storage medium 26 (which may comprise memory).
  • Processors 22 may include one or more of a control processor, a graphics processor, a vertex processor, and a display processor, according to one aspect.
  • Storage medium 26 may include any permanent or volatile memory that is capable of storing instructions and/or data, such as, for example, synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), static random access memory (SRAM), or flash memory.
  • Display device 24 may be any device capable of displaying image data for display purposes, such as an LCD (liquid crystal display), plasma display device, or other television (TV) display device.
  • Application computing device 20 is capable of capturing and analyzing graphics instructions 30 , along with state and/or performance information 32 , which is sent from graphics device 2 .
  • graphics drivers 18 are configured to send graphics instructions 30 and state/performance information 32 to application computing device 20 .
  • Graphics instructions 30 may include one or more of application instructions 21 , and state/performance information 32 may be generated or captured during execution of graphics instructions 30 within graphics processing system 4 .
  • State/performance information 32 includes information about the state and/or performance of graphics processing system 4 during instruction execution, and will be described in more detail below.
  • State/performance information 32 may include graphics data (e.g., primitive and/or rasterized graphics data) that may be used, or is otherwise associated, with graphics instructions 30 .
  • Graphics processing system 4 may execute graphics instructions 30 to display an image, or a scene of images, on display device 6 .
  • Application computing device 20 is capable of using graphics instructions 30 , along with state/performance information 32 , to re-create the graphics image or scene that is also shown on display device 6 of graphics device 2 .
  • Graphics device 2 may also send mapping and/or partitioning information 33 to application computing device 20 .
  • graphics drivers 18 are configured to send mapping/partitioning information 33 to application computing device 20 .
  • Mapping/partitioning information 33 may include one or more portions of graphics data mapping information 23 , which includes information to map graphics data to individual instructions within graphics instructions 30 .
  • mapping/partitioning information 33 may include information to map one or more primitives (e.g., polygons) to individual instructions within graphics instructions 30 .
  • Mapping/partitioning information 33 may also include partitioning information that is generated and provided by graphics device 2 . This partitioning information, in some cases, may be generated and provided by one or more of processors 10 , 12 , 14 , and 16 , such as control processor 10 . Partitioning information may include information that identifies the number, type, size, and/or shape of binning partitions, or bins, that may be used within graphics processing system 4 to render graphics data into one or more graphics images, and display such images on display device 6 . As described previously, graphics device 2 may partition a screen space, or size, of display device 6 into partitions, based upon, for example, memory-size limitations of buffers 15 and/or storage medium 8 during rendering operations. The partitioning information provides information about the partitions that are created and used. FIGS. 5 and 6 show examples of such partitions.
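  • The partitioning information itself might be carried as a simple per-bin record. The sketch below shows one plausible layout identifying the number, size, shape, and type of each bin; the field names and flat structure are assumptions, not the format actually used by graphics drivers 18 .

```cpp
// Illustrative record for the partitioning portion of mapping/partitioning information 33.
#include <cstdint>
#include <cstdio>
#include <vector>

// One entry per binning partition created by the graphics device.
struct BinDescriptor {
    uint16_t x, y;           // top-left corner of the bin in screen pixels
    uint16_t width, height;  // bin extent (size/shape)
    uint8_t  kind;           // bin type, e.g. 0 = regular rectangular bin
};

struct PartitioningInfo {
    uint16_t screenWidth, screenHeight;
    std::vector<BinDescriptor> bins;   // bins.size() = number of partitions
};

int main() {
    PartitioningInfo info{320, 240, {{0, 0, 160, 120, 0}, {160, 0, 160, 120, 0},
                                     {0, 120, 160, 120, 0}, {160, 120, 160, 120, 0}}};
    std::printf("screen %dx%d uses %zu partitions\n",
                info.screenWidth, info.screenHeight, info.bins.size());
}
```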
  • Simulation application 28 may be executed by one or more processors 22 of application computing device 20 to re-create the graphics image or scene upon receipt of graphics instructions 30 and state/performance information 32 , and display the image, or scene of images, on display device 24 .
  • Simulation application 28 may comprise a software module that contains a number of application instructions.
  • Simulation application 28 is stored in storage medium 26 , and may be loaded and executed by processors 22 .
  • Simulation application 28 may be pre-loaded into storage medium 26 , and may be customized to operate with graphics device 2 .
  • simulation application 28 simulates the hardware operation of graphics device 2 .
  • Different versions of simulation application 28 may be stored in storage medium 26 and executed by processors 22 for different graphics devices having different hardware designs.
  • software libraries may also be stored within storage medium 26 , which are used in conjunction with simulation application 28 .
  • simulation application 28 may be a generic application, and specific hardware or graphics device simulation functionality may be included within each separate library that may be linked with simulation application 28 during execution.
  • a visual representation of state/performance information 32 may be displayed to application developers on display device 24 .
  • a visual representation of graphics instructions 30 may also be displayed. Because, in many cases, graphics instructions 30 may comprise binary instructions, application computing device 20 may use instruction mapping information 31 to generate the visual representation of graphics instructions 30 on display device 24 . Instruction mapping information 31 is stored within storage medium 26 and may be loaded into processors 22 in order to display a visual representation of graphics instructions 30 .
  • instruction mapping information 31 may include mapping information, such as within a lookup table, to map graphics instructions 30 to corresponding API instructions that may have been previously compiled when generating graphics instructions 30 .
  • Application developers may write programs that use API instructions, but these API instructions are typically compiled into binary instructions, such as graphics instructions 30 (which are included within application instructions 21 ), for execution on graphics device 2 .
  • One or more instructions within graphics instructions 30 may be mapped to an individual API instruction. The mapped API instructions may then be displayed to an application developer on display device 24 to provide a visual representation of the graphics instructions 30 that are actually being executed.
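  • One simple way to picture instruction mapping information 31 is as a lookup table from captured binary opcodes back to readable API call names, as in the sketch below; the opcode values are invented for illustration, and only the glDrawArrays, glDrawElements, and glBindTexture names refer to real OpenGL ES calls.

```cpp
// Sketch of a lookup-table approach to mapping captured binary instructions
// back to human-readable API call names for display.
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>

int main() {
    const std::unordered_map<uint32_t, std::string> apiName = {
        {0x0101, "glDrawArrays"},
        {0x0102, "glDrawElements"},
        {0x0201, "glBindTexture"},
    };
    uint32_t captured = 0x0102;   // opcode taken from the captured call stream
    auto it = apiName.find(captured);
    std::printf("opcode 0x%04x -> %s\n", captured,
                it != apiName.end() ? it->second.c_str() : "<unknown>");
}
```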
  • a user such as an application developer, may wish to change one or more of the graphics instructions 30 to determine, for example, the effects of such changes on performance.
  • the user may change the visual representation of graphics instructions 30 .
  • Mapping information 31 may then be used to map these changes within the visual representation of graphics instructions 30 to binary instructions that can then be provided back to graphics device 2 within requested modifications 34 , as will be described in more detail below.
  • the graphics image that is displayed on display device 24 of application computing device 20 may be a representation of an image that is displayed on graphics device 2 .
  • because simulation application 28 may use graphics instructions 30 and state/performance information 32 to re-create an image or scene exactly as it is presented on graphics device 2 , application developers that use application computing device 20 may be able to quickly identify potential performance issues or bottlenecks during execution of graphics instructions 30 , and even prototype modifications to improve the overall performance of graphics instructions 30 .
  • an application developer may choose to make one or more requested modifications 34 to graphics instructions 30 and/or state/performance information 32 during execution of simulation application 28 on application computing device 20 and display of the re-created image on display device 24 .
  • Any such requested modifications 34 may be based upon observed performance issues, or bottlenecks, during execution of graphics instructions 30 or analysis of state/performance information 32 .
  • These requested modifications 34 may then be sent from application computing device 20 to graphics device 2 , where they are processed by graphics processing system 4 .
  • one or more of graphics drivers 18 are executed within graphics processing system 4 to process requested modifications 34 .
  • Requested modifications 34 in some cases, may include modified instructions. In some cases, requested modifications may include modified state and/or performance information.
  • Updated instructions and/or information 35 is sent back to application computing device 20 , such as by one or more of graphics drivers 18 .
  • Updated instructions/information 35 may include updated graphics instructions for execution based upon requested modifications 34 that were processed by graphics device 2 .
  • Updated instructions/information 35 may also include updated state and/or performance information based upon the requested modifications 34 that were processed by graphics device 2 .
  • the updated instructions/information 35 is processed by simulation application 28 to update the display of the re-created image information on display device 24 , and also to provide a visual representation of updated instructions/information 35 to the application developer (which may include again using instruction mapping information 31 ).
  • the application developer may then view the updated image information on display device 24 , as well as the visual representation of updated instructions/information 35 , to determine if the performance issues have been resolved or mitigated.
  • the application developer may use an iterative process to debug graphics instructions 30 or prototype modifications to improve the overall performance of graphics instructions 30 .
  • application computing device 20 uses mapping/partitioning information 33 to display a visual, graphical representation of partitions that overlay the graphics images displayed on display device 24 . These partitions graphically divide the scene comprising these images on display device 24 .
  • simulation application 28 may use partitioning module 27 to process mapping/partitioning information 33 to create the graphics representation of these partitions (e.g., multiple rectangular-shaped partitions) on a screen of display device 24 .
  • Partitioning module 27 may be loaded from storage medium 26 and executed by processors 22 . When executed, partitioning module 27 may also analyze graphics data, which may be included within state/performance information 32 , for one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions. For example, partitioning module 27 may analyze one or more polygons that are used to create graphics images for display on display device 24 , and determine which ones of these polygons may span across multiple partitions, as will be described in more detail below.
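  • A hedged sketch of the analysis partitioning module 27 might perform is shown below: given, for each polygon, the set of partitions it touches (for example, computed as in the earlier binning sketch), it reports which polygons are associated with multiple partitions and how many extra fragments that implies. The data layout and the overhead measure are assumptions for illustration.

```cpp
// Illustrative partition-spanning analysis over a scene's polygons.
#include <cstdio>
#include <vector>

// For each polygon, the indices of the partitions it touches.
struct PolygonBins {
    unsigned polygonId;
    std::vector<int> partitions;
};

int main() {
    std::vector<PolygonBins> scene = {
        {1, {0}},          // fully inside one partition
        {2, {0, 1}},       // straddles two partitions -> split into 2 fragments
        {3, {0, 1, 2, 3}}, // straddles four partitions -> split into 4 fragments
    };

    unsigned spanning = 0, extraFragments = 0;
    for (const auto& p : scene) {
        if (p.partitions.size() > 1) {
            ++spanning;
            extraFragments += unsigned(p.partitions.size()) - 1;
            std::printf("polygon %u spans %zu partitions\n",
                        p.polygonId, p.partitions.size());
        }
    }
    std::printf("%u polygon(s) span multiple partitions, %u extra fragment(s)\n",
                spanning, extraFragments);
}
```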
  • Storage medium 26 further includes a navigation module 29 , which may also be executed by processors 22 .
  • Simulation application 28 , during execution, may use navigation module 29 to display a navigation controller on display device 24 .
  • a user such as an application developer, may interact with this navigation controller to view a modified perspective view of graphics images within a scene that is displayed on display device 24 .
  • Partitioning module 27 may then display a graphical representation of partitions that overlay the modified perspective view of the graphics images to graphically divide the modified scene.
  • Partitioning module 27 may also then analyze one or more polygons that are used to create the graphics images in the modified perspective view to determine which ones of the polygons may span across multiple partitions.
  • FIG. 2 is a block diagram illustrating certain details of graphics processing system 4 , graphics driver 18 , and application computing device 20 shown in FIG. 1 , according to one aspect.
  • application computing device 20 is coupled to graphics processing system 4 of device 2 .
  • this is shown for illustration purposes only. In other scenarios, application computing device 20 may be coupled to many other forms of graphics processing systems and devices.
  • graphics processing system 4 includes four programmable processors: control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 , which are also shown in FIG. 1 .
  • Control processor 10 may control any of vertex processor 14 , graphics processor 12 , or display processor 16 . In many cases, these processors 10 , 12 , 14 , and 16 may be part of a graphics processing pipeline within system 4 .
  • Control processor 10 may control one or more aspects of the flow of data or instruction execution through the pipeline, and may also provide geometry information for a graphics image to vertex processor 14 .
  • Vertex processor 14 may manage vertex transformation or geometry processing of the graphics image, which may be described or defined according to multiple vertices in primitive geometry form.
  • Vertex processor 14 may provide its output to graphics processor 12 , which may perform rendering or rasterization operations on the graphics image.
  • Graphics processor 12 may provide its output to display processor 16 , which prepares the graphics image, in pixel form, for display. Graphics processor 12 may also perform various operations on the pixel data, such as shading or scaling.
  • graphics image data may be processed in this processing pipeline during execution of graphics instructions 30 , which may be part of application instructions 21 ( FIG. 1 ).
  • graphics instructions 30 may be executed by one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 .
  • Application developers may typically not have much knowledge or control of which particular processors within graphics processing system 4 execute which ones of graphics instructions 30 .
  • one or more of these processors, such as control processor 10 , may have performance issues, or serve as potential bottlenecks within the processing pipeline, during the execution of graphics instructions 30 .
  • overall performance within graphics processing system 4 may deteriorate, and the application developer may wish to make changes to graphics instructions 30 to improve performance.
  • the developer may not necessarily know which ones of processors 10 , 12 , 14 , or 16 may be the ones that have performance issues.
  • These performance issues may include, for example, issues related to processor usage or utilization for one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 .
  • binning-based operations in which primitive graphics data is divided up across multiple binning partitions prior to rendering, may often create certain performance issues. For example, if a polygon (such as triangle 146 shown in the example of FIG. 5A ) spans across two different partitions (e.g., partitions 136 and 138 shown in FIG. 5A ), the polygon may be divided into two constituent fragments, one for each partition, and then these two constituent fragments (e.g., fragments 146 A and 146 B shown in FIG. 5B ) may be independently rendered into separate graphics images comprising pixel data. These two separate graphics images may then need to be combined prior to display in order to create a visual representation of triangle 146 .
  • the independent rendering operations for the two fragments of triangle 146 , along with the combination operation for the two related graphics images, can cause performance overhead.
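  • To make the cost concrete, the C++ sketch below splits a single triangle along a vertical partition boundary into two fragments, one per side, in the spirit of FIG. 5B . The half-plane clipping routine (Sutherland-Hodgman style) is an assumption about how such a split could be performed, not the method required by the hardware.

```cpp
// Minimal sketch: splitting one triangle at a vertical partition boundary.
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Keep the part of the polygon with keepLeft ? (x <= bx) : (x >= bx).
std::vector<Vec2> clipAtX(const std::vector<Vec2>& poly, float bx, bool keepLeft) {
    std::vector<Vec2> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        Vec2 a = poly[i], b = poly[(i + 1) % poly.size()];
        bool inA = keepLeft ? a.x <= bx : a.x >= bx;
        bool inB = keepLeft ? b.x <= bx : b.x >= bx;
        if (inA) out.push_back(a);
        if (inA != inB) {                       // edge crosses the boundary
            float t = (bx - a.x) / (b.x - a.x);
            out.push_back({bx, a.y + t * (b.y - a.y)});
        }
    }
    return out;
}

int main() {
    std::vector<Vec2> tri = {{150, 60}, {175, 60}, {155, 100}};
    auto left  = clipAtX(tri, 160.0f, true);    // fragment rendered in one partition
    auto right = clipAtX(tri, 160.0f, false);   // fragment rendered in the neighbor
    std::printf("left fragment: %zu vertices, right fragment: %zu vertices\n",
                left.size(), right.size());     // both must later be recombined
}
```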
  • the graphics driver 18 A of graphics device 2 may capture, or collect, graphics instructions 30 from graphics processing system 4 and route them to application computing device 20 , as shown in FIG. 2 .
  • Graphics driver 18 A is part of graphics drivers 18 shown in FIG. 1 .
  • Graphics driver 18 A may be loaded and executed by one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 .
  • graphics driver 18 A may also collect state and/or performance information 32 from one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 and route this information 32 to application computing device 20 , as well.
  • graphics driver 18 A may comprise an OpenGL ES driver when graphics instructions 30 include binary instructions that may have been generated or compiled from OpenGL ES API instructions.
  • state data may include graphics data used during execution of, or otherwise associated with, graphics instructions 30 .
  • the state data may be related to a vertex array, such as position, color, coordinates, size, or weight data.
  • State data may further include texture state data, point state data, line state data, polygon state data, culling state data, alpha test state data, blending state data, depth state data, stencil state data, or color state data.
  • state data may include both state information and actual data.
  • the state data may comprise data associated with one or more OpenGL tokens.
  • performance data may also be included within state/performance information 32 .
  • this performance data may include metrics or hardware counter data from one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 .
  • the performance data may include frame rate or cycle data.
  • the cycle data may include data for cycles used for profiling, command arrays, vertex and index data, or other operations.
  • various forms of state and performance data may be included within state/performance information 32 that is collected from graphics processing system 4 by graphics driver 18 A.
  • application computing device 20 may display a representation of a graphics image according to received graphics instructions 30 and state/performance information 32 .
  • Application computing device 20 may also display a visual representation of state/performance information 32 .
  • an application developer may be able to quickly identify and resolve performance issues within graphics processing system 4 of graphics device 2 during execution of graphics instructions 30 .
  • the application developer may be able to identify which specific ones of processors 10 , 12 , 14 , and/or 16 may have performance issues.
  • graphics driver 18 A may also send mapping and/or partitioning information 33 to application computing device 20 .
  • partitioning module 27 may process the received mapping/partitioning information 33 to display a graphical representation of partitions on display device 24 that overlay the graphics image in a scene, in order to graphically divide the scene.
  • Partitioning module 27 may also use mapping/partitioning information 33 to analyze graphics data, which may be included within state/performance information 32 , to determine which portions of the data are associated with multiple ones of the partitions.
  • Mapping/partitioning information 33 may include mapping information that maps the graphics data, which may be used to generate one or more graphics images, to identified instructions within graphics instructions 30 .
  • the developer may initiate one or more requested modifications 34 on application computing device 20 .
  • the developer may interact with the re-created image or the representation of state/performance information 32 to create the requested modifications 34 .
  • the developer may even directly change the state/performance information 32 , as described in more detail below, to generate the requested modifications 34 .
  • requested modifications 34 may include one or more requests to disable execution of one or more of graphics instructions 30 in graphics processing system 4 of graphics device 2 , or requests to modify one or more of graphics instructions 30 .
  • the user may interact with a navigation controller displayed on display device 24 to request that a modified perspective view of a graphics scene be displayed.
  • Navigation module 29 may manage the display of and interaction with this navigation controller. Any requests entered by the user via a user interface may be included with requested modifications 34 . These requests may include, for example, requests to rotate one or more graphics images within the scene, requests to zoom in, requests to zoom out, or other similar requests to change a perspective view of images within the scene.
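  • The sketch below shows one way requested modifications 34 could be represented as a list of typed requests, covering both instruction-level changes and the navigation requests described above; the enum values and fields are assumptions made for this example, not a defined message format.

```cpp
// Illustrative representation of a batch of requested modifications.
#include <vector>

enum class RequestKind { DisableInstruction, ModifyInstruction, Rotate, Zoom };

struct ModificationRequest {
    RequestKind kind;
    unsigned instructionIndex = 0;   // used by the instruction-level requests
    float amount = 0.0f;             // degrees for Rotate, factor for Zoom
};

int main() {
    std::vector<ModificationRequest> requested = {
        {RequestKind::DisableInstruction, 42, 0.0f},  // skip draw call 42 on the device
        {RequestKind::Zoom, 0, 2.0f},                 // ask for a 2x zoomed perspective
    };
    (void)requested;   // in practice, serialized and sent to the graphics driver
}
```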
  • Requested modifications 34 are sent from application computing device 20 to graphics driver 18 A, which handles the requests for graphics device 2 during operation.
  • the requested modifications 34 may include requests to modify state information, which may include data, within one or more of processors 10 , 12 , 14 , or 16 within graphics processing system 4 during execution of graphics instructions 30 .
  • Graphics driver 18 A may then implement the changes within graphics processing system 4 that are included within requested modifications 34 . These changes may alter the flow of execution among processors 10 , 12 , 14 , and/or 16 for execution of graphics instructions 30 .
  • one or more of graphics instructions 30 may be disabled during execution in graphics processing system 4 according to requested modifications 34 .
  • Graphics driver 18 A is capable of sending updated instructions and/or information 35 to application computing device 20 in response to the processing of requested modifications 34 .
  • Updated instructions/information 35 may include updated state information collected from graphics processing system 4 by graphics driver 18 A, including performance information.
  • Updated instructions/information 35 may include updated graphics instructions and/or graphics data.
  • Application computing device 20 may use updated instructions/information 35 to display an updated representation of the graphics image, as well as a visual representation of updated instructions/information 35 .
  • the application developer may then be capable of assessing whether the previously identified performance issues have been resolved or otherwise addressed. For example, the application developer may be able to analyze the updated image, as well as the visual representation of updated instructions/information 35 to determine if certain textures, polygons, or other features have been optimized, or if other performance parameters have been improved.
  • Updated instructions/information 35 may also include updated mapping and/or partitioning information, such as an updated mapping of graphics data to instructions that are also included within instructions/information 35 . If an updated perspective view of a scene is displayed on display device 24 as a result of updated instructions/information 35 , partitioning module 27 may display a graphical representation of partitions that overlay the modified perspective view and that graphically divide the modified scene. Partitioning module 27 may also analyze graphics data for the modified perspective view (which may also be included within updated instructions/information 35 ) to determine which portions of the graphics data are associated with multiple ones of the partitions. The partitions may be determined based upon rendering operations performed on the graphics data.
  • the application developer may be able to rapidly and effectively debug or analyze execution of graphics instructions 30 within an environment on application computing device 20 that simulates the operation of graphics processing system 4 on graphics device 2 .
  • the developer may iteratively interact with the displayed image and state/performance information on application computing device 20 to analyze multiple graphics images in a scene or multiple image frames to maximize execution performance of graphics instructions 30 . Examples of such interaction and displayed information on application computing device 20 will be presented in more detail below.
  • FIG. 3 is a flow diagram illustrating additional details of operations that may be performed by control processor 10 , graphics processor 12 , vertex processor 14 , and display processor 16 , according to one aspect.
  • FIG. 3 also shows operations for frame buffer storage 100 and display 102 .
  • control processor 10 , vertex processor 14 , graphics processor 12 , and/or display processor 16 perform various operations as a result of execution of one or more of graphics instructions 30 .
  • control processor 10 may control one or more aspects of the flow of data or instruction execution through the graphics processing pipeline, and may also provide geometry information to vertex processor 14 . As shown in FIG. 3 , control processor 10 may perform geometry storage at 90 . In some cases, geometry information for one or more primitives may be stored by control processor 10 in buffers 15 ( FIG. 1 ). In some cases, geometry information may be stored in storage medium 8 .
  • Vertex processor 14 may then obtain the geometry information for a given primitive provided by control processor 10 and/or stored in buffers 15 for processing at 92 . In certain cases, vertex processor 14 may manage vertex transformation of the geometry information. In certain cases, vertex processor 14 may perform lighting operations on the geometry information.
  • Vertex processor 14 may provide its output to graphics processor 12 , which may perform rendering or rasterization operations on the data at 94 .
  • Graphics processor 12 may provide its output to display processor 16 , which prepares one or more graphics images, in pixel form, for display.
  • graphics processor 12 may split graphics data for a geometry, such as one or more polygons, based upon determined binning partitions. As described previously, one or more of processors within graphics device 2 , such as graphics processor 12 , may create multiple binning partitions that are associated with different screen areas of display 102 based upon certain factors, such as memory requirements or limitations.
  • graphics processor 12 may split up the geometry along partition boundaries into fragments, and independently render the fragments.
  • graphics processor 12 may provide mapping/partitioning information 33 to application computing device 20 based upon the number, type, size, shape, etc., of the determined partitions.
  • Display processor 16 may perform various operations on the pixel data, including fragment processing to process various fragments of the data, at 98 . In certain cases, this may include one or more of depth testing, stencil testing, blending, or texture mapping, as is known in the art. If graphics processor 12 previously rendered multiple geometry fragments, fragment processing 98 of display processor 16 may then combine the rendered fragments for storage into a frame buffer. When performing texture mapping, display processor 16 may incorporate texture storage and filtering information at 96 . In some cases, display processor 16 may perform other operations on the rasterized data, such as shading or scaling operations.
  • Display processor 16 provides the output pixel information for storage into a frame buffer at 100 .
  • the frame buffer may be included within buffers 15 ( FIG. 1 ). In other cases, the frame buffer may be included within storage medium 8 .
  • the frame buffer stores one or more frames of image data, which can then be displayed at 102 , such as on display device 6 .
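  • The recombination step can be pictured as copying each independently rendered bin back into the full frame buffer at its screen offset, as in the hedged sketch below; the flat RGBA8 buffers and this simple copy are illustrative assumptions rather than the actual frame buffer handling of display processor 16 .

```cpp
// Illustrative "bin resolve": writing a rendered bin into the frame buffer.
#include <algorithm>
#include <cstdint>
#include <vector>

// One bin's rendered output and where it belongs on screen.
struct RenderedBin {
    int x, y, width, height;
    std::vector<uint32_t> pixels;   // width * height RGBA8 pixels
};

// Copy the bin's rows into the frame buffer at the bin's screen offset.
void resolveToFrame(const RenderedBin& bin, std::vector<uint32_t>& frame, int frameW) {
    for (int row = 0; row < bin.height; ++row) {
        const uint32_t* src = &bin.pixels[row * bin.width];
        uint32_t* dst = &frame[(bin.y + row) * frameW + bin.x];
        std::copy(src, src + bin.width, dst);
    }
}

int main() {
    std::vector<uint32_t> frame(320 * 240, 0u);                       // full frame
    RenderedBin bin{160, 0, 160, 120,
                    std::vector<uint32_t>(160 * 120, 0xFFFFFFFFu)};   // upper-right bin
    resolveToFrame(bin, frame, 320);                                  // recombine into frame
}
```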
  • graphics instructions 30 may be executed by one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 .
  • Application developers may typically not have much knowledge or control of which particular processors within graphics processing system 4 execute which ones of graphics instructions 30 .
  • one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 may have performance issues, or serve as potential bottlenecks within the processing pipeline, during the execution of graphics instructions 30 .
  • graphics instructions 30 and/or state information may be provided from graphics device 2 to an external computing device, such as application computing device 20 .
  • the state information may include data from one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 with respect to various operations, such as those shown in FIG. 3 , that occur during the execution of graphics instructions 30 .
  • Application computing device 20 may re-create a graphics image that is shown on device 2 in order to help identify and resolve bottlenecks in an efficient and effective manner.
  • Application computing device 20 may also display partitioning information, and analyze graphics data for one or more geometries to determine which portions of the data are associated with multiple ones of the partitions.
  • FIG. 4 is a block diagram illustrating additional details of graphics driver 18 A shown in FIG. 2 , according to one aspect.
  • graphics driver 18 A may comprise instructions that can be executed within graphics processing system 4 (such as, for example, by one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 ), and may be part of graphics drivers 18 .
  • Execution of graphics driver 18 A allows graphics processing system 4 to communicate with application computing device 20 .
  • in other examples, graphics driver 18 A may comprise instructions that can be executed within a different graphics processing system (e.g., graphics processing system 54 of another graphics device), and may be part of that device's graphics drivers (e.g., graphics drivers 68 ).
  • Graphics driver 18 A, when executed, includes various functional blocks, which are shown in FIG. 4 as transport interface module 110 , processor usage module 112 , hardware counter module 114 , state/performance data module 116 that can manage other state and/or performance data, mapping/partitioning module 117 , API trace module 118 , and override module 120 .
  • Graphics driver 18 A uses transport interface module 110 to communicate with application computing device 20 .
  • Processor usage module 112 collects and maintains processor usage information for one or more of control processor 10 , vertex processor 14 , graphics processor 12 , and display processor 16 .
  • the processor usage information may include processor cycle and/or performance information.
  • Cycle data may include data for clock cycles used for profiling, command arrays, vertex and index data, or other operations.
  • Processor usage module 112 may then provide such processor usage information to application computing device 20 via transport interface module 110 . In some cases, processor usage module 112 provides this information to device 20 as it receives the information, in an asynchronous fashion. In other cases, processor usage module 112 may provide the information upon receipt of a request from device 20 .
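  • As an illustration of the kind of data processor usage module 112 might report, the sketch below models a per-processor busy/total cycle snapshot and derives a utilization percentage; the field names and formula are assumptions, not the module's actual interface.

```cpp
// Illustrative per-processor usage snapshot and utilization calculation.
#include <cstdio>

struct UsageSnapshot {
    const char* processor;     // "control", "vertex", "graphics", "display"
    unsigned long busyCycles;  // cycles spent executing work this frame
    unsigned long totalCycles; // cycles elapsed this frame
};

double utilization(const UsageSnapshot& s) {
    return s.totalCycles ? 100.0 * s.busyCycles / s.totalCycles : 0.0;
}

int main() {
    UsageSnapshot samples[] = {
        {"vertex",   120000, 400000},
        {"graphics", 380000, 400000},   // near-saturated: a likely bottleneck
    };
    for (const auto& s : samples)
        std::printf("%-8s %5.1f%% busy\n", s.processor, utilization(s));
}
```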
  • Hardware counter module 114 collects and maintains various hardware counters during execution of instructions by one or more of control processor 10 , graphics processor 12 , vertex processor 14 , or display processor 16 . The counters may keep track of various state indicators and/or metrics with respect to instruction execution within graphics processing system 4 . Hardware counter module 114 may provide information to device 20 asynchronously or upon request.
  • State/performance data module 116 collects and maintains other state and/or performance data for one or more of control processor 10 , graphics processor 12 , vertex processor 14 , and display processor 16 in graphics processing system 4 .
  • the state data may, in some cases, comprise graphics data.
  • the state data may include data related to a vertex array, such as position, color, coordinates, size, or weight data.
  • State data may further include texture state data, point state data, line state data, polygon state data, culling state data, alpha test state data, blending state data, depth state data, stencil state data, or color state data.
  • Performance data may include various other metrics or cycle data.
  • State/performance data module 116 may provide information to device 20 asynchronously or upon request.
  • Mapping/partitioning module 117 collects mapping and/or partitioning information 33 from one or more of control processor 10 , graphics processor 12 , vertex processor 14 , and display processor 16 , and may also collect information from graphics data mapping information 23 ( FIG. 1 ).
  • the mapping information may include information to map identified portions of graphics data, which are rendered to generate graphics images for display, to one or more of graphics instructions 30 . This mapping information may be helpful in mapping individual instructions back to the original graphics data that was used to render the output images.
  • the partitioning information may include information identifying a number, type, size, shape, etc. of partitions that are created and used within graphics processing system 4 when splitting apart graphics data into constituent fragments prior to rendering.
  • Mapping/partitioning module 117 may provide mapping/partitioning information 33 to application computing device 20 .
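  • The disclosure does not specify a concrete layout for mapping/partitioning information 33. Purely as a hedged illustration, the sketch below shows one way a host-side tool might represent a reported grid of rectangular binning partitions; the type and function names are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Partition:
    """One rectangular binning partition within a screen area."""
    index: int
    x: int          # left edge, in pixels
    y: int          # top edge, in pixels
    width: int
    height: int

def build_partition_grid(screen_w: int, screen_h: int,
                         cols: int, rows: int) -> List[Partition]:
    """Divide a screen area into a cols x rows grid of equal rectangular bins,
    similar to the four-bin layout of FIG. 5A or the eight-bin layout of FIG. 6."""
    bin_w, bin_h = screen_w // cols, screen_h // rows
    return [Partition(index=r * cols + c,
                      x=c * bin_w, y=r * bin_h,
                      width=bin_w, height=bin_h)
            for r in range(rows) for c in range(cols)]

# Example: a 640x480 screen split into a 2x2 grid, as in FIG. 5A.
partitions = build_partition_grid(640, 480, cols=2, rows=2)
```
  • An actual driver could of course report non-uniform or differently shaped partitions; the uniform grid above is only the simplest case suggested by the figures.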
  • API trace module 118 manages a flow and/or trace of graphics instructions that are executed by graphics processing system 4 and transported to application computing device 20 via transport interface module 110 .
  • graphics device 2 provides a copy of graphics instructions 30 , which are executed by graphics processing system 4 in its processing pipeline, to device 20 .
  • API trace module 118 manages the capture and transport of these graphics instructions 30 .
  • API trace module 118 may also provide certain information used with instruction mapping information 31 ( FIG. 1 ) to map graphics instructions 30 to a visual representation of graphics instructions 30 , such as API instructions that may have been used to generate graphics instructions 30 .
  • Override module 120 allows graphics driver 18 A to change, or override, the execution of certain instructions within graphics processing system 4 .
  • application computing device 20 may send one or more requested modifications, such as modifications 34 , to graphics device 2 .
  • requested modifications 34 may include one or more requests to disable execution of one or more of graphics instructions 30 in graphics processing system 4 , or requests to modify one or more of graphics instructions 30 .
  • requested modifications 34 may include requests to change state/performance information 32 .
  • Override module 120 may accept and process requested modifications 34 .
  • override module 120 may receive from device 20 any requests to modify one or more of graphics instructions 30 , along with any requests to modify state/performance information 32 , and send such requests to graphics processing system 4 .
  • One or more of control processor 10 , graphics processor 12 , vertex processor 14 , and display processor 16 may then process these requests and generate updated instructions/information 35 .
  • Override module 120 may then send updated instructions/information 35 to application computing device 20 for processing, as described previously.
  • graphics driver 18 A provides an interface between graphics device 2 and application computing device 20 .
  • Graphics driver 18 A is capable of providing graphics instructions and state/performance information 32 to application computing device 20 , and also receiving requested modifications 34 from application computing device 20 . After processing such requested modifications 34 , graphics driver 18 A is subsequently able to provide updated instructions/information 35 back to application computing device 20 .
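  • As a hedged sketch of this round trip, the fragment below models the exchange that the paragraphs above describe: instructions and state flow to the host, requested modifications flow back, and updated instructions/information are returned. The message and field names are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RequestedModification:
    """One change requested by the host tool (cf. requested modifications 34)."""
    kind: str            # "disable", "modify_instruction", or "modify_state"
    target: str          # e.g., an instruction identifier or a state name
    value: object = None # replacement value, if any

@dataclass
class DriverSession:
    """Host-side mirror of what the driver reports for one capture."""
    instructions: List[str] = field(default_factory=list)
    state: Dict[str, object] = field(default_factory=dict)

    def apply(self, mods: List[RequestedModification]) -> "DriverSession":
        """Produce the updated instructions/information the device would send back."""
        updated = DriverSession(list(self.instructions), dict(self.state))
        for m in mods:
            if m.kind == "disable":
                updated.instructions = [i for i in updated.instructions if i != m.target]
            elif m.kind == "modify_instruction":
                updated.instructions = [m.value if i == m.target else i
                                        for i in updated.instructions]
            elif m.kind == "modify_state":
                updated.state[m.target] = m.value
        return updated
```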
  • FIG. 5A is a conceptual diagram illustrating a first example of graphics data that may span across four partitions of a screen area 130 provided by a display device, such as display device 6 of graphics device 2 , or display device 24 of application computing device 20 in FIG. 1 .
  • the data shown in FIG. 5A may, in some cases, be displayed on display device 6 .
  • the data shown in FIG. 5A is graphically shown on display device 24 of application computing device 20 based upon state/performance information 32 received from graphics device 2 , and also upon mapping/partitioning information 33 received from graphics device 2 .
  • the state/performance information 32 may include graphics data for polygons (i.e., geometries) 140 , 142 , 144 , and 146 , and mapping/partitioning information 33 may include information for partitions 132 , 134 , 136 , and 138 .
  • the mapping/partitioning information 33 received by application computing device 20 may indicate that graphics device 2 uses four distinct partitions, represented by 132 , 134 , 136 , and 138 , when rendering graphics data.
  • In the example of FIG. 5A, four binning partitions 132, 134, 136, and 138 are implemented. These partitions represent four corresponding areas within screen area 130 that may be displayed on display device 6 or display device 24.
  • polygons 140 and 142 are each defined by application instructions 21 ( FIG. 1 ) to be located, or situated, completely within a corresponding partition. Polygon 140 is located within partition 132 , and polygon 142 is located within partition 134 .
  • graphics processor 12 may render data within each of partitions 132 , 134 , 136 , and 138 separately, and during independent rendering operations.
  • Because polygon 140 is fully within partition 132, it may be rendered as a complete geometry during the rendering operation associated with partition 132.
  • Similarly, because polygon 142 is fully within partition 134, it may be rendered as a complete geometry during the rendering operation associated with partition 134.
  • polygons 144 and 146 span across multiple partitions. Polygon 144 spans across all four partitions 132 , 134 , 136 , and 138 , while polygon 146 spans across two of the partitions 136 and 138 .
  • graphics processor 12 may split polygon 144 into four constituent fragments 144 A, 144 B, 144 C, and 144 D (shown in FIG. 5B ). Graphics processor 12 may then independently render fragments 144 A, 144 B, 144 C, and 144 D during independent rendering operations.
  • For example, during the rendering operation associated with partition 132, graphics processor 12 may render fragment 144A; during the rendering operation associated with partition 134, graphics processor 12 may render fragment 144B; during the rendering operation associated with partition 138, graphics processor 12 may render fragment 144C; and, during the rendering operation associated with partition 136, graphics processor 12 may render fragment 144D.
  • display processor 16 may need to combine the rendered images for each of these fragments in order to display an accurate graphical representation of polygon 144 . These separate rendering and combining operations may cause performance overhead.
  • Similarly, graphics processor 12 may split polygon 146 into two constituent fragments 146A and 146B (shown in FIG. 5B). Graphics processor 12 may then independently render fragments 146A and 146B during independent rendering operations. For example, during the rendering operation associated with partition 138, graphics processor 12 may render fragment 146A, and during the rendering operation associated with partition 136, graphics processor 12 may render fragment 146B. After these fragments 146A and 146B have been independently rendered, display processor 16 may combine the rendered images for each of these fragments in order to display an accurate graphical representation of polygon 146.
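  • The disclosure does not prescribe how the hardware actually splits a spanning polygon, but conceptually each fragment is the portion of the polygon that falls inside one bin. A minimal sketch of that idea, assuming axis-aligned rectangular bins and using the standard Sutherland-Hodgman rectangle clip, is shown below; the coordinates in the example are invented for illustration.

```python
def clip_polygon_to_rect(poly, x0, y0, x1, y1):
    """Clip a polygon (list of (x, y) vertices) against an axis-aligned bin
    rectangle; returns the fragment inside the bin, or [] if there is none."""
    def clip_edge(points, inside, intersect):
        out = []
        for i, cur in enumerate(points):
            prev = points[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def x_cross(p, q, x):
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    for inside, intersect in (
        (lambda p: p[0] >= x0, lambda p, q: x_cross(p, q, x0)),
        (lambda p: p[0] <= x1, lambda p, q: x_cross(p, q, x1)),
        (lambda p: p[1] >= y0, lambda p, q: y_cross(p, q, y0)),
        (lambda p: p[1] <= y1, lambda p, q: y_cross(p, q, y1)),
    ):
        poly = clip_edge(poly, inside, intersect)
        if not poly:
            return []
    return poly

# A triangle straddling the vertical boundary between two adjacent bins
# (as polygon 146 straddles two partitions) yields one fragment per bin.
tri = [(200, 300), (500, 300), (350, 450)]
frag_left = clip_polygon_to_rect(tri, 0, 240, 320, 480)
frag_right = clip_polygon_to_rect(tri, 320, 240, 640, 480)
```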
  • the information shown in FIGS. 5A-5B may, in some cases, be displayed on display device 24 of application computing device 20 .
  • Application computing device 20 may use graphics instructions 30 and state/performance information 32 to display a representation, or graphics images, of polygons 140 , 142 , 144 , and 146 within screen area 130 of display device 24 .
  • Application computing device 20 may also use mapping/partitioning information 33 to display a graphical representation of partitions 132 , 134 , 136 , and 138 that overlay the graphics images and that graphically divide the scene of these images.
  • Application computing device 20 may also analyze the graphics data for polygons 140 , 142 , 144 , and 146 to determine which ones of these polygons span across multiple ones of the partitions 132 , 134 , 136 , and/or 138 .
  • When an application developer views the information displayed within window 130, the developer is able to obtain an idea of which polygons may be split by the hardware because they span across multiple partitions, and also where such partitions are located.
  • the developer may be able to use this information to determine an optimized configuration or location of certain graphics data within a graphics application, such as an application that uses application instructions 21 ( FIG. 1 ), when defining a scene. For example, upon reviewing the information presented in FIG. 5A , the developer may determine to rearrange, or reconfigure, polygons 144 and 146 , such that they do not span across multiple partitions.
  • the developer may better understand how to define, configure, or locate polygons 144 and 146 such that they do not span across multiple partitions, or such that they span across only a minimal number of partitions.
  • the developer may determine to re-define a polygon as sub-polygons, such that they may not need to be combined by display processor 16 after rendering.
  • the developer may re-define polygon 146 in a modified version of application instructions 21 as two separate polygons 146 A and 146 B, as shown in FIG. 5B . If these polygons are separately defined at the outset, the rendered versions of these polygons may then not need to be combined prior to display, which may reduce performance overhead.
  • FIG. 6 is a conceptual diagram illustrating a second example of graphics data that may span across eight partitions of a screen area 150 provided by a display device, such as display device 6 (in graphics device 2 ) or display device 24 (in application computing device 20 ) shown in FIG. 1 .
  • graphics processing system 4 of graphics device 2 may create, or use, binning partitions associated with a screen area of display device 6 of various different shapes, sizes, and types, which may depend on various factors, such as memory size requirements or constraints, or other performance considerations.
  • processors 10 , 12 , 14 , or 16 may determine to create and use eight separate partitions, rather than the four partitions shown in the examples in FIGS. 5A and 5B .
  • each partition shown in FIG. 6 is one-half the size of each partition shown in FIGS. 5A and 5B .
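  • The disclosure states only that the number and size of partitions may depend on factors such as memory constraints; it does not give a sizing rule. The arithmetic below is a hypothetical illustration of how a fixed-size bin buffer could force a minimum bin count, and the buffer size used is an assumption rather than a value from the disclosure.

```python
import math

def min_bin_count(screen_w: int, screen_h: int,
                  bytes_per_pixel: int, bin_buffer_bytes: int) -> int:
    """Smallest number of equal-size bins whose per-bin pixel storage fits in
    an on-chip bin buffer (a hypothetical sizing rule, for illustration only)."""
    frame_bytes = screen_w * screen_h * bytes_per_pixel
    return math.ceil(frame_bytes / bin_buffer_bytes)

# A 640x480 frame at 4 bytes per pixel is 1,228,800 bytes; with a 256 KB bin
# buffer at least 5 bins are needed, which a driver might round up to the
# eight-bin grid shown in FIG. 6.
bins_needed = min_bin_count(640, 480, bytes_per_pixel=4, bin_buffer_bytes=256 * 1024)
```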
  • Application instructions 21 may again, in the example of FIG. 6 , be executed to create and/or render polygons 140 , 142 , 144 , and 146 .
  • In FIGS. 5A and 5B, when only four partitions were used, polygons 140 and 142 did not span across multiple partitions.
  • a graphics application that includes application instructions 21 may not experience additional performance overhead caused from the rendering of polygons 140 and 142 , since these polygons do not span across multiple partitions.
  • When graphics device 2 implements eight binning partitions, as shown in FIG. 6, however, polygons 140 and 142 each span across two separate partitions: polygon 140 spans across partitions 152 and 154, while polygon 142 spans across partitions 156 and 158.
  • a graphical representation of partitions 152 , 154 , 156 , 158 , 160 , 162 , 164 , and 166 may be displayed to an application developer on display device 24 .
  • A graphical display of such partitions overlaying graphics images, such as representations of polygons 140, 142, 144, and 146, may be quite useful to the developer. Often, the developer will have little information about the number, type, shape, size, etc., of the partitions that are created and used by any individual device, such as graphics device 2.
  • By being able to view a graphical representation of such partitions overlaid upon graphics images in a scene, the developer obtains a better idea of which graphics images or primitive graphics data, for example, may span across multiple partitions, and may therefore have certain rendering performance overhead. As a result, the developer may be able to redefine, reconfigure, resize, or otherwise change the graphics data generated and manipulated by a graphics application, such as one that includes application instructions 21.
  • FIG. 7 is a flow diagram of a method that may be performed by application computing device 20 through execution of simulation application 28 ( FIG. 1 ), according to one aspect.
  • Application computing device 20 may receive mapping/partitioning information 33 from an external graphics device, such as graphics device 2 ( 170 ).
  • Application computing device 20 may also receive graphics instructions 30 from graphics device 2 ( 172 ).
  • Graphics instructions 30 are executed by graphics device 2 to display one or more graphics images, such as three-dimensional (3D) graphics images, on display device 6 .
  • graphics instructions 30 comprise a call stream that, when executed, renders the graphics images.
  • the call stream comprises binary instructions generated from application programming interface (API) instructions.
  • Application computing device 20 may further receive state and/or performance information 32 from graphics device 2 ( 174 ).
  • State/performance information 32 is associated with execution of graphics instructions 30 on graphics device 2 .
  • State/performance information 32 may include state information that indicates one or more states of graphics device 2 as it renders a graphics image.
  • the state information may include state information from one or more processors of graphics device 2 that execute graphics instructions 30 , such as control processor 10 , graphics processor 12 , vertex processor 14 , and/or display processor 16 .
  • the state information may comprise graphics data, such as primitive polygon data that is used by graphics processor 12 to render graphics image data.
  • Application computing device 20 may display a representation of one or more graphics images based on graphics instructions 30 and the state/performance information 32 in a graphical scene ( 176 ). In such fashion, application computing device 20 is capable of displaying a representation of these graphics images within a simulated environment that simulates graphics device 2 .
  • the simulated environment may be provided via execution of simulation application 28 on processors 22 of application computing device 20 .
  • Application computing device 20 may display a graphical representation of partitions that overlay the graphics images and that graphically divide the scene ( 178 ).
  • application computing device 20 may display a graphical representation of the partitions shown in FIGS. 5A and 5B , in one scenario.
  • the partitions may comprise rectangular-shaped partitions.
  • Application computing device 20 may display the graphical representation of the partitions based upon the received mapping/partitioning information 33 .
  • application computing device 20 may analyze graphics data for the displayed graphics images and determine which portions are associated with multiple partitions ( 180 ).
  • application computing device 20 may analyze graphics primitives, such as polygon data used to generate or render the display graphics images, and determine which polygons (e.g., triangles) span across multiple partitions.
  • the receiving of the graphics instructions ( 172 ), receiving of the state information ( 174 ), displaying the representation of the graphics image ( 176 ), displaying of the partitions ( 178 ), and the analyzing of the graphics data ( 180 ) may be repeated for multiple image frames of the one or more graphics images if there are more frames to process ( 182 ).
  • application computing device 20 is capable of displaying both still and moving graphics images (including 3D images) on display device 24 , and displaying a graphical representation of partitions that overlay the images and graphically divide the scene. As the graphics images change, or as alternate perspective views of the images are shown, the application developer can continuously ascertain the relationship between the graphics data associated with the images and the location of the partitions.
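  • Pulling the steps of FIG. 7 together, a host-side tool might loop over frames roughly as sketched below. The device and display objects and their method names are placeholders standing in for the operations described above; they are not an API defined by the disclosure.

```python
def analyze_frames(device, display):
    """Sketch of the FIG. 7 loop: receive per-frame data from the graphics
    device, draw the scene plus a partition overlay, and flag spanning geometry."""
    partitions = device.receive_partition_info()                 # (170)
    while device.has_more_frames():                              # (182)
        instructions = device.receive_graphics_instructions()    # (172)
        state = device.receive_state_performance_info()          # (174)
        display.draw_scene(instructions, state)                  # (176)
        display.draw_partition_overlay(partitions)               # (178)
        spanning = [prim for prim in state.primitives            # (180)
                    if len(prim.overlapping_partitions(partitions)) > 1]
        display.highlight(spanning)
```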
  • FIG. 8 is a flow diagram of a method that may be performed by application computing device 20 through execution of simulation application 28 ( FIG. 1 ), according to one aspect.
  • processors 22 may execute simulation application 28 to display a navigation controller on display device 24 .
  • Application computing device 20 may receive mapping/partitioning information 33 from an external graphics device, such as graphics device 2 ( 190 ). Application computing device 20 may also display a perspective view of one or more graphics images on its display device 24 ( 191 ). For example, application computing device 20 may display a perspective view of graphics images based upon received graphics instructions 30 and/or state/performance information 32 .
  • Application computing device 20 may display a graphics representation of partitions that overlay the graphics images on display device 24 based upon the received mapping/partitioning information 33 ( 192 ).
  • Application computing device 20 may also analyze graphics data for the graphics images, such as graphics data included within state/performance information 32 , to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • the graphics data may comprise a plurality of graphics primitives, such as triangles.
  • Application computing device 20 may determine which ones of the triangles span across multiple ones of the partitions ( 193 ). These triangles may comprise triangles that have been at least partially rendered in multiple partitions.
  • application computing device 20 displays a graphical representation of the triangles that span across multiple ones of the partitions on display device 24 in conjunction with displaying the graphical representation of the partitions.
  • application computing device 20 may display a graphical indication, such as a color, for each triangle that spans across multiple partitions ( 194 ).
  • application computing device 20 may, in one aspect, display a “heat map” representation of the triangles on display device 24 , where each triangle has an associated graphical indicator, such as a color.
  • Although graphical indicators such as colors are described here, other forms of graphical indicators (e.g., dashed lines, blinking indicators, highlighted indicators) may also be used.
  • Triangles that do not span across multiple partitions may be displayed in one color (e.g., blue).
  • Triangles that span across multiple partitions (e.g., two to three partitions) may be displayed in a second color (e.g., purple).
  • Triangles that span across more than three partitions may be prominently displayed in a third color (e.g., red).
  • an application developer can quickly determine which triangles span across multiple partitions, and which ones span across more partitions than others.
  • the developer may be able to use this information to determine how to reconfigure, redefine, or otherwise restructure triangles that span across multiple partitions to reduce performance (e.g., rendering) overhead.
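  • A minimal sketch of the heat-map analysis just described is shown below, reusing the Partition record from the earlier sketch and the blue/purple/red thresholds given above. The overlap test is a conservative bounding-box check; an actual tool could instead perform an exact triangle-rectangle intersection.

```python
def triangle_bbox(tri):
    """Axis-aligned bounding box of a triangle given as three (x, y) vertices."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    return min(xs), min(ys), max(xs), max(ys)

def overlaps(bbox, part):
    """True if the bounding box touches the partition rectangle."""
    x0, y0, x1, y1 = bbox
    return not (x1 <= part.x or x0 >= part.x + part.width or
                y1 <= part.y or y0 >= part.y + part.height)

def heat_map_color(tri, partitions):
    """Color a triangle by how many partitions it spans, per the scheme above."""
    bbox = triangle_bbox(tri)
    count = sum(1 for p in partitions if overlaps(bbox, p))
    if count <= 1:
        return "blue"     # rendered entirely within one partition
    if count <= 3:
        return "purple"   # spans two to three partitions
    return "red"          # spans more than three partitions
```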
  • Application computing device 20 may use navigation module 29 ( FIG. 1 ) to display a navigation controller within a user interface displayed on display device 24 .
  • the navigation controller may comprise a 3D camera controller.
  • the application developer may interact with the navigation controller to navigate around the scene of graphics images that are displayed on display device 24 .
  • Application computing device 20 may receive user input from the developer via the navigation controller to modify a perspective view of the images ( 195 ).
  • Application computing device 20 may then display a modified perspective view of the graphics images in a modified graphics scene based upon the user input to the navigation controller. For example, the developer may interact with the navigation controller to rotate around a scene of images, to zoom in or zoom out of the scene, or to otherwise change a perspective view of the scene, which may then display a modified perspective view of images within the modified scene, including new images.
  • the user input provided to the navigation controller may be sent back to graphics device 2 as requested modifications 34 , and the display of the updated perspective view may be based upon the updated instructions/information 35 provided by graphics device 2 back to application computing device 20 .
  • requested modifications 34 may include at least one of a request to disable execution of one or more of graphics instructions 30 on graphics device 2 , a request to modify one or more of graphics instructions 30 on graphics device 2 , and a request to modify state/performance information 32 on graphics device 2 .
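  • The disclosure does not define how navigation-controller input is encoded within requested modifications 34. As one hypothetical illustration, reusing the RequestedModification record sketched earlier, an orbit-and-zoom interaction could be packaged as a camera-state override:

```python
import math

def orbit_camera(yaw_deg: float, pitch_deg: float, distance: float):
    """Convert orbit angles and a zoom distance into a camera eye position."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (distance * math.cos(pitch) * math.sin(yaw),
            distance * math.sin(pitch),
            distance * math.cos(pitch) * math.cos(yaw))

def perspective_modification(yaw_deg, pitch_deg, distance):
    """Package a perspective change as a hypothetical state-override request."""
    return RequestedModification(kind="modify_state",
                                 target="camera_eye",
                                 value=orbit_camera(yaw_deg, pitch_deg, distance))

# Rotating 30 degrees around the scene and zooming to a distance of 5 units.
mod = perspective_modification(yaw_deg=30.0, pitch_deg=15.0, distance=5.0)
```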
  • Application computing device 20 may also display a graphical representation of the partitions that overlay the modified perspective view of the graphics images and that graphically divide the modified scene.
  • Application computing device 20 may analyze graphics data for the modified perspective view of the graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • the displaying of a perspective view of graphics image(s) ( 191 ), displaying of partitions that overlay the graphics image(s) ( 192 ), determining which primitive triangle(s) span across multiple partitions ( 193 ), displaying of a graphical indication for each determined triangle ( 194 ), and receiving user input via a navigation controller to modify a perspective view of a scene ( 195 ) may be repeated for multiple perspective views of the scene ( 196 ).
  • the application developer can continuously ascertain the relationship between the graphics data associated with the images and the location of the partitions.
  • FIG. 9 is a conceptual diagram illustrating an example of a graphics device 200 that is coupled to a display device 201 for displaying information in a graphic window 203, according to one aspect. If, for example, graphics device 200 is part of graphics device 2 ( FIG. 1 ), display device 201 may correspond to display device 24 in application computing device 20. Graphics device 200 is capable of displaying a 3D graphics image 202. Display device 201 is capable of displaying, within window 203, a 3D graphics image 210, which is a re-creation of graphics image 202, based upon graphics instructions and state/performance information that are sent from graphics device 200.
  • Display device 201 is also capable of displaying visual representations of these instructions and state/performance information, such that a developer may change these instructions and information to modify graphics image 210 or an entire scene that includes graphics image 210 .
  • Display device 201 may be included within any type of computing device (not shown) that is coupled to graphics device 200 and is capable of receiving such instructions and state/performance information from graphics device 200. (For purposes of simplicity, the computing device that includes display device 201 has been left out of the conceptual diagram shown in FIG. 9.)
  • Graphics device 200 is capable of displaying 3D graphics image 202 (which is a cube in the example of FIG. 9 ).
  • graphics device 200 also has a keypad 204 .
  • a user may interact with keypad 204 to manipulate graphics device 200 .
  • Keypad 204 may include a number of keys and/or buttons.
  • Graphics device 200 is capable of sending graphics instructions and state/performance information to a device (e.g., application computing device 20 ) that includes display device 201 via connector 206.
  • connector 206 comprises a Universal Serial Bus (USB) connector.
  • different forms of connectors may be used.
  • wireless communication may replace connector 206 .
  • display device 201 may display various types of information within a graphical user interface.
  • display device 201 displays graphical window 203 within the graphical user interface.
  • Window 203 includes a display area 211 , a graphics instructions area 208 , and a state/performance information area 214 .
  • Display area 211 includes 3D graphics image 210 , which, as described previously, is a re-creation of 3D graphics image 202 .
  • In this example, 3D graphics image 210 comprises a cube.
  • The information displayed on display device 201 comprises a representation, or simulation, of information displayed on graphics device 200 for purposes of debugging and testing, according to one aspect.
  • Graphics instructions area 208 includes a visual representation of one or more graphics instructions that have been received from graphics device 200 .
  • In some cases, the visual representation may comprise the received binary instructions themselves.
  • display device 201 may display a representation of such binary instructions in another form, such as higher-level application programming interface (API) instructions (e.g., OpenGL instructions).
  • Mapping information (such as mapping information 31 shown in FIG. 1 ) may be used to map received binary instructions into another format that may be displayed within graphics instructions area 208 .
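  • Neither the binary encoding of the call stream nor the contents of mapping information 31 are specified by the disclosure. The table and opcodes below are therefore invented for illustration only; they simply show how received binary instructions might be rendered as readable API-level names within graphics instructions area 208.

```python
# Hypothetical opcode-to-API lookup standing in for instruction mapping information.
OPCODE_TO_API = {
    0x01: "glClear",
    0x02: "glDrawArrays",
    0x03: "glDrawElements",
    0x04: "glBindTexture",
}

def format_call_stream(opcodes):
    """Render a received binary call stream as readable API-level lines."""
    return [OPCODE_TO_API.get(op, f"unknown_0x{op:02x}") for op in opcodes]

# e.g. [0x01, 0x04, 0x03] -> ['glClear', 'glBindTexture', 'glDrawElements']
trace = format_call_stream([0x01, 0x04, 0x03])
```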
  • State/performance information area 214 includes a visual representation of state and/or performance information that has been received from graphics device 200 .
  • The received graphics instructions and state/performance information may be used to display 3D graphics image 210 within display area 211.
  • graphics device 200 may utilize a graphics driver that implements a state/performance data module (such as state/performance data module 116 shown in FIG. 4 ) to provide various state and/or performance data.
  • the received state/performance information may include graphics data (e.g., primitive data and/or rasterized data).
  • Window 203 also includes one or more selectors 212 A- 212 N. A user may select any of these selectors 212 A- 212 N. Each selector 212 A- 212 N may be associated with different functions, such as statistical and navigation functions, as will be described in more detail below. Window 203 further includes selectors 216 A- 216 N and 218 A- 218 N, each of which may be selected by a user. Each selector 216 A- 216 N and 218 A- 218 N may also be associated with different functions, such as metric functions, override functions, and/or texture functions, as will be described in more detail below in reference to FIG. 10 .
  • a user may change information displayed within window 203 .
  • the user may modify one or more of the instructions displayed within graphics instructions area 208 , or any of the state/performance information within state/performance information area 214 .
  • Any changes initiated by the user within window 203 may then be sent back to graphics device 200 as requested modifications.
  • Graphics device 200 may then process these modifications, and provide updated instructions and/or information which may then be displayed within graphics instructions area 208 and/or state/performance information area 214 .
  • the updated instructions and/or information may also be used to display a modified version of 3D graphics image 210 within display area 211 .
  • the state and/or performance information that may be displayed within area 214 may be analyzed by the computing device that includes display device 201 (such as application computing device 20 shown in FIG. 1 ) to identify potential bottlenecks during execution of the graphics instructions on graphics device 200 .
  • a user such as an application developer, may wish to view the information presented in window 203 during a debugging process to optimize the execution of graphics instructions on graphics device 200 .
  • bottlenecks may be introduced anywhere within the graphics processing pipeline in graphics device 200 , and it may be difficult for an application developer to isolate such bottlenecks for performance optimization.
  • potential bottlenecks and possible workarounds can be displayed in window 203 , such as within one or more sub-windows or pop-up windows, or within area 214 of window 203 .
  • window 203 may display a report on the bottlenecks encountered in the call-stream of the graphics instructions received from graphics device 200 , and may also display possible workarounds.
  • these possible workarounds may be presented as “what-if” scenarios to the user. For example, rendering a non-optimized triangle-list in a call-stream may be presented as one possible scenario, while pre-processing that list through a triangle-strip optimization framework may be presented as a second possible scenario.
  • the user may select any of these possible workaround scenarios as requested modifications, and the requested modifications are then transmitted back to graphics device 200 , where the performance may be measured.
  • Graphics device 200 then sends updated instructions/information, which may be presented within graphics instruction area 208 and/or state/performance information area 214 .
  • the user can then view the results, and compare results for various different potential workarounds to identify an optimum solution.
  • the user can use this process to quickly identify a series of steps that can be taken in order to remove bottlenecks from their application.
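  • A what-if report of the kind described above might summarize the two scenarios by comparing vertex-processing counts. The sketch below is a simplified illustration of that comparison, not the actual measurement performed by graphics device 200; it assumes an independent triangle list versus a single ideal triangle strip.

```python
def vertices_processed(triangle_count: int, as_strip: bool) -> int:
    """Vertices submitted for N triangles: 3 per triangle as an independent
    list, or 2 shared vertices plus 1 per additional triangle as one strip."""
    if triangle_count == 0:
        return 0
    return triangle_count + 2 if as_strip else 3 * triangle_count

# A 1,000-triangle mesh: 3,000 vertices as a triangle list versus 1,002 as a
# single strip, the kind of difference a what-if comparison might surface.
list_cost = vertices_processed(1000, as_strip=False)
strip_cost = vertices_processed(1000, as_strip=True)
```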
  • the user may iteratively continue to make adjustments within window 203 for purposes of experimentation, or trial/error debugging.
  • the user may experiment with various different forms or combinations of graphics instructions and state/performance information to identify changes in the images or scenes that are displayed within display area 211 .
  • the user can use the simulation environment provided by the contents of window 203 to interactively view and modify the graphics instructions, which may be part of a call-stream, and states provided by graphics device 200 without having to recompile any source code and re-execute the compiled code on graphics device 200 .
  • Selection of one or more of buttons 212A-212N may manipulate a graphical navigation controller, such as a graphical camera, to modify a perspective view of graphics image 210.
  • Such manipulation may be captured as requested modifications that are then sent back to graphics device 200 .
  • the updated instructions/information provided by graphics device 200 is then used to modify the perspective view of graphics image 210 .
  • various texture and/or state information may be provided in area 214 of window 203 as modifiable entities.
  • A user may even select, for example, a pixel of graphics image 210 within display area 211, such that one or more corresponding instructions within graphics instruction area 208 are identified. In this fashion, a user can effectively drill backwards to a rendering instruction or call that was used to render or create that pixel or other portions of graphics image 210. Because display device 201 may re-create image 210 in window 203 exactly as it is presented on graphics device 200, the user is able to quickly isolate issues in their application (which may be based on the various graphics instructions displayed in graphics instructions area 208 ), and modify any states within state/performance area 214 to prototype new effects.
  • display device 201 is also capable of displaying partitioning information, as well as polygon data that may span across multiple partitions.
  • the application developer may select a button, such as one of buttons 212 A- 212 N, to cause display device 201 to display a graphical representation of partitions (e.g., rectangular-shaped partitions) that overlay image 210 and graphically divide the scene in display area 211 .
  • the displayed partitions may be based on received mapping/partitioning information 33 ( FIG. 1 ).
  • the device that includes display device 201 may also analyze graphics data (e.g., polygon data) for graphics image 210 to determine which portions of the graphics data are associated with multiple ones of the partitions. For example, if multiple polygons were used to render graphics image 210 , the device may analyze the polygons to determine which ones of these polygons span across multiple partitions.
  • FIG. 10 is a conceptual diagram illustrating another example of graphics device 200 coupled to display device 201 that displays information within graphical window 220 , according to one aspect.
  • window 220 includes various instruction information as well as metric information.
  • Graphics instructions 242 may be a subset of the graphics instructions that are provided by graphics device 200, such as a subset of graphics instructions 30.
  • mapping information (such as mapping information 31 shown in FIG. 1 ) may be used to map incoming instructions received from graphics device 200 to a visual representation of these instructions, materialized as instructions 242 that are displayed within graphics instructions area 208 .
  • instructions 242 may comprise API instructions that were used to generate the instructions in binary form.
  • graphics instructions 242 include both high-level instructions and low-level instructions.
  • a user such as an application developer, may use scrollbar 244 to view the full-set of instructions 242 .
  • Certain high-level instructions may include one or more low-level instructions, such as lower-level API instructions.
  • the application developer may, in some cases, select (e.g., such as by clicking) on a particular high-level instruction in order to view any low-level instructions that are part of, or executed by, the associated high-level instruction.
  • received graphics instructions such as instructions 242 , are used to generate the representation of graphics image 202 , which comprises graphics image 210 shown in display area 211 of window 220 .
  • selection buttons are shown below state/performance information area 214 in FIG. 10 .
  • These selection buttons include a textures button 236 , an override button 238 , and a metrics button 240 .
  • the application developer has selected the metrics button 240 .
  • various metrics options may be displayed.
  • one or more metric buttons 234 A- 234 N may be displayed above state/performance area 214 .
  • Each metric button 234 A- 234 N may be associated with a particular metric.
  • one or more of these metrics may be predefined or preconfigured metric types, and in some cases, the application developer may select or customize one or more of the metrics.
  • Example metrics may include, for example, any one or more of the following: frames per second, % busy (for one or more processors), bus busy, memory busy, vertex busy, vertices per second, triangles per second, pixel clocks per second, fragments per second, etc.
  • the application developer may select any of metric buttons 234 A- 234 N to view additional details regarding the selected metrics.
  • metric button 234 A is associated with the number of frames per second
  • the application developer may select metric button 234 A to view additional details on the number of frames per second (related to performance) for graphics image 210 , or select portions of graphics image 210 .
  • the developer may, in some cases, select metric button 234 A, or drag metric button 234 A into state/performance information area 214 .
  • the detailed information on the number of frames per second may be displayed within state/performance information area 214 .
  • the developer also may drag metric button 234 A into display area 211 , or select a portion of graphics image 210 for application of metric button 234 A.
  • the developer may select a portion of graphics image 210 after selecting metric button 234 A, and then detailed information on the number of frames per second for that selected portion may be displayed within state/performance information area 214 .
  • the developer may view performance data for any number of different metric types based upon selection of one or more of metric buttons 234 A- 234 N, and even possible selection of graphics image 210 (or a portion thereof).
  • metric data that may be displayed within window 220 may be provided by a graphics driver (e.g., graphics driver 18 shown in FIG. 4 ) of graphics device 200 .
  • This graphics driver may implement a hardware counter module (e.g., hardware counter module 114 of FIG. 4 ) and/or a processor usage module (e.g., processor usage module 112 of FIG. 4 ) to provide various data that may then be displayed as metric data within window 220 .
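  • The specific counters behind metrics such as frames per second or "% busy" are not enumerated by the disclosure. Assuming hypothetical counter snapshots reported by the driver, a host tool could derive those two metrics roughly as follows:

```python
from dataclasses import dataclass

@dataclass
class CounterSample:
    """Hypothetical snapshot of driver-reported counters at one point in time."""
    timestamp_s: float
    frames_completed: int
    busy_cycles: int
    total_cycles: int

def frames_per_second(a: CounterSample, b: CounterSample) -> float:
    """Frames completed between two snapshots divided by the elapsed time."""
    return (b.frames_completed - a.frames_completed) / (b.timestamp_s - a.timestamp_s)

def percent_busy(a: CounterSample, b: CounterSample) -> float:
    """Share of processor cycles spent busy between two snapshots."""
    return 100.0 * (b.busy_cycles - a.busy_cycles) / (b.total_cycles - a.total_cycles)

start = CounterSample(0.0, 0, 0, 0)
end = CounterSample(1.0, 30, 180_000_000, 200_000_000)
fps = frames_per_second(start, end)   # 30.0
busy = percent_busy(start, end)       # 90.0
```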
  • the developer may, in some cases, also select textures button 236 .
  • Various forms of texture information related to graphics image 210 may then be displayed by display device 201.
  • texture information may be displayed within window 220 , such as within state/performance information area 214 .
  • the texture information may be displayed within an additional (e.g., pop-up) window (not shown).
  • the developer may view the displayed texture information, but may also, in some cases, modify the texture information. In these cases, any modifications to the texture information may be propagated back to graphics device 200 as requested modifications.
  • Changes to graphics image 210 may then be displayed within display area 211.
  • FIG. 11 includes certain texture information that may be displayed upon selection of textures button 236 .
  • the developer may, in some cases, also select override button 238 .
  • certain information such as instruction and/or state information, may be displayed (e.g., within window 220 or another window) which may be modified, or overridden, by the developer.
  • Any modifications or overrides may be included within one or more requested modifications that are sent to graphics device 200 .
  • graphics device 200 may implement a graphics driver, such as graphics driver 18 A ( FIG. 4 ), to process any requested modifications.
  • graphics device 200 may use override module 120 to process such requested modifications that comprise one or more overrides.
  • The developer may override one or more of graphics instructions 242 that are shown within graphics instructions area 208.
  • the developer may type or otherwise enter information within graphics instructions area 208 to modify or override one or more of graphics instructions 242 . These modifications may then be sent to graphics device 200 , which will provide updated instructions/information to update the display of graphics image 210 within display area 211 .
  • the developer may change, for example, parameters, ordering, type, etc., of graphics instructions 242 to override one or more functions that are provided by instructions 242 .
  • the developer may also select override button 238 to override one or more functions associated with the processing pipeline that is implemented by graphics device 200 .
  • FIG. 12 shows an example of an override screen that may be displayed to the developer upon selection of override button 238 .
  • Window 220 further includes selection buttons 231 and 232 .
  • Selection button 231 is a partition button, and selection button 232 is a navigation button.
  • the developer may select partition button 231 to view a graphical representation of partitions, such as rectangular-shaped partitions, that overlay graphics image 210 and graphically divide the scene displayed in display area 211 .
  • the graphical partitions may be displayed in display area 211 .
  • Display area 211 may also display information based upon an analysis of graphics data for graphics image 210 that determines which portions of the data are associated with multiple partitions. For example, display area 211 , or a separate display area or window, may display which polygons, which are used to render graphics image 210 , span across multiple partitions in conjunction with the graphical representation of the partitions. In some cases, a graphical indication, such as a color, may be displayed for each polygon (e.g., triangle) that spans across multiple partitions.
  • a “heat map” may be displayed, where each triangle is displayed in a particular color.
  • Triangles that do not span across multiple partitions may be displayed in one color (e.g., blue).
  • Triangles that span across multiple partitions (e.g., two to three partitions) may be displayed in a second color (e.g., purple).
  • Triangles that span across more than three partitions may be prominently displayed in a third color (e.g., red).
  • an application developer can quickly determine which triangles span across multiple partitions, and which ones span across more partitions than others. The developer may be able to use this information to determine how to reconfigure, redefine, or otherwise restructure triangles that span across multiple partitions to reduce performance (e.g., rendering) overhead when generating graphics image 210 .
  • the developer may also select navigation button 232 to navigate within display area 211 , and even possibly to change a perspective view of graphics image 210 within display area 211 .
  • a 3D graphical camera or navigation controller may be displayed.
  • the developer may interact with the controller to navigate to any area within display area 211 .
  • the developer may also use the controller to change a perspective view of graphics image 210 , such as by rotating graphics image 210 or zooming in/out.
  • Any developer-initiated changes through selection of navigation button 232 and interaction with a graphical navigation controller may be propagated back to graphics device 200 as requested modifications (e.g., part of requested modifications 34 shown in FIG. 1 ).
  • Updated instructions/information then provided by graphics device 200 may then be used to update the display (e.g., perspective view) of graphics image 210 .
  • updated instructions may be displayed within graphics instructions area 208 .
  • Updated state/performance information may be displayed within state/performance information area 214 .
  • A graphical representation of partitions may also be displayed and overlaid upon a modified perspective view of graphics image 210.
  • graphics data contained within the updated instructions/information for the modified perspective view of the graphics image 210 may be analyzed to determine which portions of the data are associated with multiple partitions.
  • the developer may effectively and efficiently determine how alternate perspectives, orientations, views, etc., for rendering and displaying graphics image 210 may affect performance and state of graphics device 200 .
  • This may be very useful to the developer in optimizing the graphics instructions 242 that are used to create and render graphics image 210 in the simulation environment displayed on display device 201 and, effectively, graphics image 202 that is displayed on graphics device 200.
  • any changes in the position, perspective, orientation, etc., of graphics image 210 based upon developer-initiated selections and controls within window 220 , may also be seen as changes for graphics image 202 that may be displayed on graphics device 200 during the testing process.
  • graphics instructions 242 are a visual representation of graphics instructions that are executed by graphics device 200 to create graphics image 202 .
  • In addition, a representation of graphics image 202 (i.e., graphics image 210 ) is displayed within display area 211 based upon graphics instructions 242 and state/performance data received from graphics device 200.
  • an application developer can interactively and dynamically engage in a trial-and-error, or debugging, process to optimize the execution of instructions on graphics device 200 , and to eliminate or mitigate any performance issues (e.g., bottlenecks) during instruction execution.
  • the visual representation of a graphical scene that includes a number of different graphical partitions may allow a developer to identify portions of the graphics scene that exhibit reduced performance due to costs that may be associated with screen partitioning. The developer may review the partitioning and associated analysis information to investigate alternate compositions of the scene to help reduce these costs and/or related performance overhead.
  • The terms "processor" and "controller," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
  • The various components illustrated herein may be realized by hardware, software, firmware, or any combination thereof.
  • various components are depicted as separate units or modules. However, all or several of the various components described with reference to these figures may be integrated into combined units or modules within common hardware and/or software. Accordingly, the representation of features as components, units or modules is intended to highlight particular functional features for ease of illustration, and does not necessarily require realization of such features by separate hardware or software components.
  • various units may be implemented as programmable processes performed by one or more processors.
  • any features described herein as modules, devices, or components, including graphics device 100 and/or its constituent components, may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices.
  • such components may be formed at least in part as one or more integrated circuit devices, which may be referred to collectively as an integrated circuit device, such as an integrated circuit chip or chipset.
  • Such circuitry may be provided in a single integrated circuit chip device or in multiple, interoperable integrated circuit chip devices, and may be used in any of a variety of image, display, audio, or other multi-media applications and devices.
  • such components may form part of a mobile device, such as a wireless communication device handset.
  • the techniques may be realized at least in part by a computer-readable medium comprising code with instructions that, when executed by one or more processors, performs one or more of the methods described above.
  • the computer-readable medium may form part of a computer program product, which may include packaging materials.
  • the computer-readable medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), embedded dynamic random access memory (eDRAM), static random access memory (SRAM), flash memory, magnetic or optical data storage media.
  • the techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by one or more processors. Any connection may be properly termed a computer-readable medium.
  • For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Combinations of the above should also be included within the scope of computer-readable media. Any software that is utilized may be executed by one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.

Abstract

In general, this disclosure relates to techniques for providing a visual representation of a graphical scene that includes a number of different graphical partitions, which may allow a user to identify portions of the graphics scene that exhibit reduced performance due to costs associated with screen partitioning. One example device includes a display device and one or more processors. The one or more processors are configured to display one or more graphics images in a graphical scene on the display device, display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene on the display device, and analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.

Description

    CLAIM OF PRIORITY UNDER 35 U.S.C. §119
  • The present Application for Patent claims priority to Provisional Application No. 61/083,659 entitled PARTITIONING-BASED PERFORMANCE ANALYSIS FOR GRAPHICS IMAGING filed Jul. 25, 2008, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.
  • REFERENCE TO CO-PENDING APPLICATIONS FOR PATENT
  • The present Application for Patent is related to the following co-pending U.S. patent applications:
  • 61/083,656 filed Jul. 25, 2008, having Attorney Docket No. 080967P1, filed concurrently herewith, assigned to the assignee hereof, and expressly incorporated by reference herein; and
  • 61/083,665 filed Jul. 25, 2008 having Attorney Docket No. 080971P1, filed concurrently herewith, assigned to the assignee hereof, and expressly incorporated by reference herein.
  • TECHNICAL FIELD
  • This disclosure relates to display of graphics images.
  • BACKGROUND
  • Graphics processors are widely used to render two-dimensional (2D) and three-dimensional (3D) images for various applications, such as video games, graphics programs, computer-aided design (CAD) applications, simulation and visualization tools, and imaging. Display processors may be used to display the rendered output of the graphics processor for presentation to a user via a display device.
  • OpenGL® (Open Graphics Library) is a standard specification that defines an API (Application Programming Interface) that may be used when writing applications that produce 2D and 3D graphics. Other languages, such as Java, may define bindings to the OpenGL API's through their own standard processes. The interface includes multiple function calls, or instructions, that can be used to draw scenes from simple primitives. Graphics processors, multi-media processors, and even general purpose CPU's can then execute applications that are written using OpenGL function calls. OpenGL ES (embedded systems) is a variant of OpenGL that is designed for embedded devices, such as mobile wireless phones, digital multimedia players, personal digital assistants (PDA's), or video game consoles.
  • Graphics applications, such as 3D graphics applications, may describe or define contents of a scene by invoking API's, or instructions, that in turn use the underlying graphics hardware, such as one or more processors in a graphics device, to generate an image. The graphics hardware may undergo a series of state transitions that are exercised through these API's. A full set of states for each API call, such as a draw call or instruction, may describe the process with which the image is rendered from one or more graphics primitives, such as one or more triangles, by the hardware.
  • In addition, binning-based, or partitioning-based, graphics hardware may often be implemented using a process in which the individual graphics primitives destined for rendering may be clustered into binning partitions, or bins, in order to divide up a scene of images displayed on a screen of a display device. The hardware may do so due to screen-size or resolution constraints, or due to memory limitations associated with rendering operations. Graphics primitives that may span across multiple binning partitions may be divided into multiple fragments by the hardware along the edges of the partitions before the primitive fragments are rendered. The hardware may render all primitive fragments in each partition separately.
  • Thus, an individual primitive that may span, for example, across two binning partitions may be divided into two fragments, and each of these two fragments may then be independently rendered. However, the graphics images generated by each of these fragments may then need to be re-combined within a frame of image data before being displayed on the screen of the display device. Hence, primitive graphics data spanning across different partitions may be separately processed and rendered, and then the rendered image data may be recombined to form the final image.
  • SUMMARY
  • In general, this disclosure relates to techniques for providing a visual representation of a graphical scene that includes a number of different graphical partitions, which may allow a user to identify portions of the graphics scene that exhibit reduced performance due to costs that may be associated with screen partitioning. A graphics device, such as a mobile device, may provide partitioning, or binning, information to an external computing device (e.g., personal computer) based upon the number and type of partitions that have been created by a graphics driver. The graphics device may also provide graphics instructions and state information to the computing device.
  • The computing device may display one or more graphics images in a graphical scene based upon the graphics instructions and the state information. The computing device may also display a graphical representation of partitions that overlay the scene based upon the received partitioning information, and may also provide analysis information regarding potential partitioning costs or performance overhead. An application developer may use this information to investigate alternate compositions of the scene to help reduce these costs and/or performance overhead.
  • In one aspect, a method comprises displaying one or more graphics images in a graphical scene, displaying a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene, and analyzing graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • In one aspect, a device comprises a display device and one or more processors. The one or more processors are configured to display one or more graphics images in a graphical scene on the display device, display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene on the display device, and analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • In one aspect, a computer-readable medium comprising computer-executable instructions for causing one or more processors to display one or more graphics images in a graphical scene, display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene, and analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • The techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in a processor, which may refer to one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP), or other equivalent integrated or discrete logic circuitry. Software comprising instructions to execute the techniques may be initially stored in a computer-readable medium and loaded and executed by a processor.
  • Accordingly, this disclosure also contemplates computer-readable media comprising instructions to cause a processor to perform any of a variety of techniques as described in this disclosure. In some cases, the computer-readable medium may form part of a computer program product, which may be sold to manufacturers and/or used in a device. The computer program product may include the computer-readable medium, and in some cases, may also include packaging materials.
  • The details of one or more aspects are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram illustrating a graphics device that may provide graphics instructions, state and/or performance information, and partitioning information, to an application computing device, according to one aspect of the disclosure.
  • FIG. 2 is a block diagram illustrating certain details of the graphics processing system, graphics driver, and application computing device shown in FIG. 1, according to one aspect of the disclosure.
  • FIG. 3 is a flow diagram illustrating additional details of operations that may be performed by the control processor, graphics processor, vertex processor, and display processor shown in FIG. 1, according to one aspect of the disclosure.
  • FIG. 4 is a block diagram illustrating additional details of the graphics driver shown in FIG. 2, according to one aspect of the disclosure.
  • FIG. 5A is a conceptual diagram illustrating a first example of graphics data that may span across four partitions of a screen area provided by a display device, according to one aspect of the disclosure.
  • FIG. 5B is a conceptual diagram illustrating graphics data of the first example of FIG. 5A that is split along partition boundaries.
  • FIG. 6 is a conceptual diagram illustrating a second example of graphics data that may span across eight partitions of a screen area provided by a display device, according to one aspect of the disclosure.
  • FIG. 7 is a flow diagram of a first method that may be performed by the application computing device shown in FIG. 1, according to one aspect of the disclosure.
  • FIG. 8 is a flow diagram of a second method that may be performed by the application computing device shown in FIG. 1, according to one aspect of the disclosure.
  • FIG. 9 is a conceptual diagram illustrating an example of a graphics device that is coupled to a display device for displaying information in a graphic window, according to one aspect of the disclosure.
  • FIG. 10 is a conceptual diagram illustrating another example of a graphics device coupled to a display device that displays information within a graphical window, according to one aspect of the disclosure.
  • DETAILED DESCRIPTION
  • FIG. 1 is a block diagram illustrating a graphics device 2 that may provide graphics instructions 30, state and/or performance information 32, and partitioning information 33, to an application computing device 20, according to one aspect of the disclosure. Graphics device 2 may be a stand-alone device or may be part of a larger system. For example, graphics device 2 may form part of a wireless communication device (such as a wireless mobile handset), or may be part of a digital camera, video camera, digital multimedia player, personal digital assistant (PDA), video game console, other video device, or a dedicated viewing station (such as a television). Graphics device 2 may also comprise a personal computer or a laptop device. Graphics device 2 may also be included in one or more integrated circuits, chips, or chipsets, which may be used in some or all of the devices described above.
  • In some cases, graphics device 2 may be capable of executing various applications, such as graphics applications, video applications, audio applications, and/or other multi-media applications. For example, graphics device 2 may be used for graphics applications, video game applications, video playback applications, digital camera applications, instant messaging applications, video teleconferencing applications, mobile applications, or video streaming applications.
  • Graphics device 2 may be capable of processing a variety of different data types and formats. For example, graphics device 2 may process still image data, moving image (video) data, or other multi-media data, as will be described in more detail below. The image data may include computer-generated graphics data. In the example of FIG. 1, graphics device 2 includes a graphics processing system 4, a storage medium 8 (which may comprise memory), and a display device 6. Programmable processors 10, 12, 14, and 16 may be included within graphics processing system 4. Programmable processor 10 is a control, or general-purpose, processor. Programmable processor 12 is a graphics processor, programmable processor 14 is a vertex processor, and programmable processor 16 is a display processor. Control processor 10 may be capable of controlling graphics processor 12, vertex processor 14, and/or display processor 16. In one aspect, graphics processing system 4 may include other forms of multi-media processors.
  • In graphics device 2, graphics processing system 4 is coupled both to storage medium 8 and to display device 6. Storage medium 8 may include any permanent or volatile memory that is capable of storing instructions and/or data, such as, for example, synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), embedded dynamic random access memory (eDRAM), static random access memory (SRAM), or flash memory. Display device 6 may be any device capable of displaying image data for display purposes, such as an LCD (liquid crystal display), plasma display device, or other television (TV) display device.
  • Vertex processor 14 is capable of managing vertex information and processing vertex transformations. In one aspect, vertex processor 14 may comprise a digital signal processor (DSP). Graphics processor 12 may be a dedicated graphics rendering device utilized to render, manipulate, and display computerized graphics. Graphics processor 12 may implement various complex graphics-related algorithms. For example, the complex algorithms may correspond to representations of two-dimensional or three-dimensional computerized graphics. Graphics processor 12 may implement a number of so-called “primitive” graphics operations, such as forming points, lines, and triangles or other polygon surfaces, to create complex, three-dimensional images on a display, such as display device 6.
  • Graphics processor 12 may carry out instructions that are stored in storage medium 8. Storage medium 8 is capable of storing application instructions 21 for an application (such as a graphics or video application), as well as one or more graphics drivers 18. Application instructions 21 may be loaded from storage medium 8 into graphics processing system 4 for execution. For example, one or more of control processor 10, graphics processor 12, and display processor 16 may execute instructions 21. In one aspect, application instructions 21 may comprise one or more downloadable modules that are downloaded dynamically, over the air, into storage medium 8. In one aspect, application instructions 21 may comprise a call stream of binary instructions that are generated or compiled from application programming interface (API) instructions created by an application developer.
  • Graphics drivers 18 may also be loaded from storage medium 8 into graphics processing system 4 for execution. For example, one or more of control processor 10, graphics processor 12, and display processor 16 may execute certain instructions from graphics drivers 18. In one example aspect, graphics drivers 18 are loaded and executed by graphics processor 12. Graphics drivers 18 will be described in further detail below.
  • Storage medium 8 also includes graphics data mapping information 23. Graphics data mapping information 23 includes information to map one or more of application instructions 21 to graphics data that may be rendered during execution of application instructions 21. The graphics data, which may be stored in storage medium 8 and/or buffers 15, may include one or more primitives (e.g., polygons). Graphics data mapping information 23 may maintain a mapping of individual primitives that are to be rendered to individual instructions. After the primitives have been rendered, mapping information 23 allows a mapping from individual instructions back to original graphics data that was used to create one or more images that are ultimately displayed on display device 6. Mapping information 23 may, in some cases, be useful for debugging and/or performance analysis.
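  • One way such mapping information could be organized, sketched below purely for illustration, is a two-way association between draw-call instruction indices and the primitive identifiers they render, so that rendered output can be traced back to the instruction that produced it. The class and field names are assumptions, not structures defined by this disclosure.

```python
# Hypothetical sketch of instruction-to-primitive mapping information.
from collections import defaultdict

class PrimitiveInstructionMap:
    def __init__(self):
        self.instr_to_prims = defaultdict(set)   # instruction index -> primitive IDs
        self.prim_to_instr = {}                   # primitive ID -> instruction index

    def record(self, instr_index, prim_id):
        self.instr_to_prims[instr_index].add(prim_id)
        self.prim_to_instr[prim_id] = instr_index

    def primitives_for(self, instr_index):
        return sorted(self.instr_to_prims.get(instr_index, set()))

    def instruction_for(self, prim_id):
        return self.prim_to_instr.get(prim_id)

m = PrimitiveInstructionMap()
m.record(instr_index=7, prim_id="triangle_146")
m.record(instr_index=7, prim_id="triangle_144")
print(m.primitives_for(7))                # ['triangle_144', 'triangle_146']
print(m.instruction_for("triangle_146"))  # 7
```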
  • As also shown in FIG. 1, graphics processing system 4 includes one or more buffers 15. Control processor 10, graphics processor 12, vertex processor 14, and/or display processor 16 each have access to buffers 15, and may store data in or retrieve data from buffers 15. Buffers 15 may comprise cache memory, and may be capable of storing both data and instructions. For example, buffers 15 may include one or more of application instructions 21 or one or more instructions from graphics drivers 18 that have been loaded into graphics processing system 4 from storage medium 8. Buffers 15 and/or storage medium 8 may also contain graphics data used during instruction execution.
  • Application instructions 21 may, in certain cases, include instructions for a graphics application, such as a 3D graphics application. Application instructions 21 may comprise instructions that describe or define contents of a graphics scene that includes one or more graphics images. When application instructions 21 are loaded into and executed by graphics processing system 4, graphics processing system 4 may undergo a series of state transitions. One or more instructions within graphics drivers 18 may also be executed to render or display graphics images on display device 6 during execution of application instructions 21.
  • A full set of states for an instruction, such as a draw call, may describe a process with which an image is rendered by graphics processing system 4. However, an application developer who has written application instructions 21 may often have limited ability to interactively view or modify these states for purposes of debugging or experimenting with alternate methods of describing or rendering images in a defined scene. In addition, different hardware platforms may have different hardware designs and implementations of these states and/or state transitions.
  • In addition, binning-based graphics hardware, such as one or more of processors 10, 12, 14, and 16, may often be implemented using a process in which the individual primitives destined for rendering are clustered into rectangular-shaped binning partitions, or bins, in order to divide up a scene of images displayed on a screen of display device 6. The hardware may do so based on screen size or resolution constraints of display device 6, or based on memory limitations of storage medium 8 associated with rendering operations. Primitives that may span across multiple binning partitions may be divided into multiple fragments by one or more of processors 10, 12, 14, or 16 along the edges of the partitions before the primitive fragments are rendered. The primitive fragments in each partition may then be rendered separately. Binning partitions, in general, may be varied in number, depending on the hardware architecture, and may have various sizes and shapes. For example, the binning partitions may include multiple (e.g., four, eight) rectangular-shaped partitions.
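  • As a rough, purely illustrative sketch of how a bin layout might be derived from such constraints, the example below chooses a grid of rectangular partitions so that each bin's color buffer fits within a per-bin memory budget. The 4-bytes-per-pixel assumption, the budget, and the grid-selection heuristic are not taken from this disclosure.

```python
# Hypothetical sketch: choose a bin grid from screen resolution and a
# per-bin memory budget.
import math

def choose_bin_grid(width, height, bytes_per_pixel, bin_memory_bytes):
    """Return (cols, rows) so each bin's color buffer fits in bin_memory_bytes."""
    frame_bytes = width * height * bytes_per_pixel
    num_bins = max(1, math.ceil(frame_bytes / bin_memory_bytes))
    cols = math.ceil(math.sqrt(num_bins))
    rows = math.ceil(num_bins / cols)
    return cols, rows

# Example: a 320x240 screen, 4 bytes/pixel, 128 KiB of on-chip bin memory.
cols, rows = choose_bin_grid(320, 240, 4, 128 * 1024)
print(f"{cols} x {rows} grid of bins, each about "
      f"{math.ceil(320 / cols)} x {math.ceil(240 / rows)} pixels")
```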
  • Thus, an individual primitive that may span, for example, across two binning partitions may be divided into two fragments, and each of these two fragments may then be independently rendered. However, the graphics images generated by each of these fragments may then need to be re-combined within a frame of image data before being displayed on the screen of display device 6. Thus, dividing individual primitives that span across multiple binning partitions can introduce processing overhead and cause overall performance degradation.
  • In one aspect, an application developer may use application computing device 20, shown in FIG. 1, to assist in the process of debugging and experimenting with alternate methods for describing or rendering images in a scene. Application computing device 20 may be capable of displaying a scene, and overlaying a graphical representation of binning partitions that may be implemented by graphics device 2. Application computing device 20 is coupled to graphics device 2. For example, in one aspect, application computing device 20 is coupled to graphics device 2 via a Universal Serial Bus (USB) connection. In other aspects, other types of connections, such as wireless or other forms of wired connections, may be used.
  • Application computing device 20 includes one or more processors 22, a display device 24, and a storage medium 26 (which may comprise memory). Processors 22 may include one or more of a control processor, a graphics processor, a vertex processor, and a display processor, according to one aspect. Storage medium 26 may include any permanent or volatile memory that is capable of storing instructions and/or data, such as, for example, synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), static random access memory (SRAM), or flash memory. Display device 24 may be any device capable of displaying image data for display purposes, such as an LCD (liquid crystal display), plasma display device, or other television (TV) display device.
  • Application computing device 20 is capable of capturing and analyzing graphics instructions 30, along with state and/or performance information 32, which is sent from graphics device 2. In one aspect, graphics drivers 18 are configured to send graphics instructions 30 and state/performance information 32 to application computing device 20. Graphics instructions 30 may include one or more of application instructions 21, and state/performance information 32 may be generated or captured during execution of graphics instructions 30 within graphics processing system 4.
  • State/performance information 32 includes information about the state and/or performance of graphics processing system 4 during instruction execution, and will be described in more detail below. State/performance information 32 may include graphics data (e.g., primitive and/or rasterized graphics data) that may be used, or is otherwise associated, with graphics instructions 30. Graphics processing system 4 may execute graphics instructions 30 to display an image, or a scene of images, on display device 6. Application computing device 20 is capable of using graphics instructions 30, along with state/performance information 32, to re-create the graphics image or scene that is also shown on display device 6 of graphics device 2.
  • Graphics device 2 may also send mapping and/or partitioning information 33 to application computing device 20. In one aspect, graphics drivers 18 are configured to send mapping/partitioning information 33 to application computing device 20. Mapping/partitioning information 33 may include one or more portions of graphics data mapping information 23, which includes information to map graphics data to individual instructions within graphics instructions 30. For example, mapping/partitioning information 33 may include information to map one or more primitives (e.g., polygons) to individual instructions within graphics instructions 30.
  • Mapping/partitioning information 33 may also include partitioning information that is generated and provided by graphics device 2. This partitioning information, in some cases, may be generated and provided by one or more of processors 10, 12, 14, and 16, such as control processor 10. Partitioning information may include information that identifies the number, type, size, and/or shape of binning partitions, or bins, that may be used within graphics processing system 4 to render graphics data into one or more graphics images, and display such images on display device 6. As described previously, graphics device 2 may partition a screen space, or size, of display device 6 into partitions, based upon, for example, memory-size limitations of buffers 15 and/or storage medium 8 during rendering operations. The partitioning information provides information about the partitions that are created and used. FIGS. 5 and 6 show examples of such partitions.
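  • The shape such partitioning information might take on the wire is sketched below for illustration only: a simple record listing the screen dimensions and, for each partition, its position, size, and shape. The JSON encoding and field names are assumptions; this disclosure does not prescribe a particular format.

```python
# Hypothetical sketch of a partitioning-information record sent from the
# graphics device to the host tool.
import json

def build_partitioning_info(screen_w, screen_h, cols, rows):
    bin_w, bin_h = screen_w // cols, screen_h // rows
    partitions = []
    for r in range(rows):
        for c in range(cols):
            partitions.append({
                "id": r * cols + c,
                "x": c * bin_w, "y": r * bin_h,
                "width": bin_w, "height": bin_h,
                "shape": "rectangle",
            })
    return {"screen": {"width": screen_w, "height": screen_h},
            "partition_count": len(partitions),
            "partitions": partitions}

info = build_partitioning_info(320, 240, cols=2, rows=2)
print(json.dumps(info, indent=2)[:200], "...")
```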
  • Simulation application 28 may be executed by one or more processors 22 of application computing device 20 to re-create the graphics image or scene upon receipt of graphics instructions 30 and state/performance information 32, and display the image, or scene of images, on display device 24. Simulation application 28 may comprise a software module that contains a number of application instructions. Simulation application 28 is stored in storage medium 26, and may be loaded and executed by processors 22. Simulation application 28 may be pre-loaded into storage medium 26, and may be customized to operate with graphics device 2.
  • In one aspect, simulation application 28 simulates the hardware operation of graphics device 2. Different versions of simulation application 28 may be stored in storage medium 26 and executed by processors 22 for different graphics devices having different hardware designs. In some cases, software libraries may also be stored within storage medium 26, which are used in conjunction with simulation application 28. In one aspect, simulation application 28 may be a generic application, and specific hardware or graphics device simulation functionality may be included within each separate library that may be linked with simulation application 28 during execution.
  • In one aspect, a visual representation of state/performance information 32 may be displayed to application developers on display device 24. In addition, a visual representation of graphics instructions 30 may also be displayed. Because, in many cases, graphics instructions 30 may comprise binary instructions, application computing device 20 may use instruction mapping information 31 to generate the visual representation of graphics instructions 30 on display device 24. Instruction mapping information 31 is stored within storage medium 26 and may be loaded into processors 22 in order to display a visual representation of graphics instructions 30.
  • In one aspect, instruction mapping information 31 may include mapping information, such as within a lookup table, to map graphics instructions 30 to corresponding API instructions that may have been previously compiled when generating graphics instructions 30. Application developers may write programs that use API instructions, but these API instructions are typically compiled into binary instructions, such as graphics instructions 30 (which are included within application instructions 21), for execution on graphics device 2. One or more instructions within graphics instructions 30 may be mapped to an individual API instruction. The mapped API instructions may then be displayed to an application developer on display device 24 to provide a visual representation of the graphics instructions 30 that are actually being executed.
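  • A trivial sketch of such a lookup-table mapping is shown below: captured binary call-stream entries are translated back into readable API calls. The opcode values are invented for illustration; only the API names are real OpenGL ES calls, and the encoding is an assumption, not the format used by graphics instructions 30.

```python
# Hypothetical sketch: mapping binary call-stream opcodes back to API names.
OPCODE_TO_API = {
    0x01: "glClear",
    0x02: "glDrawArrays",
    0x03: "glBindTexture",
}

def describe_call_stream(call_stream):
    """call_stream: list of (opcode, args) tuples captured from the device."""
    lines = []
    for opcode, args in call_stream:
        name = OPCODE_TO_API.get(opcode, f"unknown_0x{opcode:02x}")
        lines.append(f"{name}({', '.join(map(str, args))})")
    return lines

captured = [(0x01, [0x4000]), (0x03, [2, 7]), (0x02, [4, 0, 36])]
for line in describe_call_stream(captured):
    print(line)
```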
  • In one aspect, a user, such as an application developer, may wish to change one or more of the graphics instructions 30 to determine, for example, the effects of such changes on performance. In this aspect, the user may change the visual representation of graphics instructions 30. Mapping information 31 may then be used to map these changes within the visual representation of graphics instructions 30 to binary instructions that can then be provided back to graphics device 2 within requested modifications 34, as will be described in more detail below.
  • As described above, the graphics image that is displayed on display device 24 of application computing device 20 may be a representation of an image that is displayed on graphics device 2. Because simulation application 28 may use graphics instructions 30 and state/performance information 32 to re-create an image or scene exactly as it is presented on graphics device 2, application developers that use application computing device 20 may be able to quickly identify potential performance issues or bottlenecks during execution of graphics instructions 30, and even prototype modifications to improve the overall performance of graphics instructions 30.
  • For example, an application developer may choose to make one or more requested modifications 34 to graphics instructions 30 and/or state/performance information 32 during execution of simulation application 28 on application computing device 20 and display of the re-created image on display device 24. Any such requested modifications 34 may be based upon observed performance issues, or bottlenecks, during execution of graphics instructions 30 or analysis of state/performance information 32. These requested modifications 34 may then be sent from application computing device 20 to graphics device 2, where they are processed by graphics processing system 4. In one aspect, one or more of graphics drivers 18 are executed within graphics processing system 4 to process requested modifications 34. Requested modifications 34, in some cases, may include modified instructions. In some cases, requested modifications may include modified state and/or performance information.
  • Upon processing of requested modifications 34, updated instructions and/or information 35 is sent back to application computing device 20, such as by one or more of graphics drivers 18. Updated instructions/information 35 may include updated graphics instructions for execution based upon requested modifications 34 that were processed by graphics device 2. Updated instructions/information 35 may also include updated state and/or performance information based upon the requested modifications 34 that were processed by graphics device 2.
  • The updated instructions/information 35 is processed by simulation application 28 to update the display of the re-created image information on display device 24, and also to provide a visual representation of updated instructions/information 35 to the application developer (which may include again using instruction mapping information 31). The application developer may then view the updated image information on display device 24, as well as the visual representation of updated instructions/information 35, to determine if the performance issues have been resolved or mitigated. The application developer may use an iterative process to debug graphics instructions 30 or prototype modifications to improve the overall performance of graphics instructions 30.
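  • The request/response loop described above can be pictured with the highly simplified sketch below: the host tool sends a requested modification, a stand-in for the device-side driver applies it and returns updated instructions and performance data, and the tool re-renders its view. Every function, field, and value here is a placeholder assumption, not the actual transport or driver behavior.

```python
# Hypothetical sketch of the iterative modify/update loop between the host
# tool and the device-side driver.

def apply_on_device(instructions, modification):
    # e.g. disable one instruction by index (an assumed modification format)
    updated = list(instructions)
    if modification["action"] == "disable":
        updated[modification["index"]] = ("NOP", [])
    return updated, {"gpu_busy_percent": 62}   # stand-in updated perf info

def redisplay(instructions, perf):
    print("re-created scene from", len(instructions), "instructions;",
          "gpu busy:", perf["gpu_busy_percent"], "%")

instructions = [("glClear", [0x4000]), ("glDrawArrays", [4, 0, 3000]), ("glFinish", [])]
requested_modification = {"action": "disable", "index": 1}

instructions, perf = apply_on_device(instructions, requested_modification)
redisplay(instructions, perf)
```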
  • In one aspect, application computing device 20 uses mapping/partitioning information 33 to display a visual, graphical representation of partitions that overlay the graphics images displayed on display device 24. These partitions graphically divide the scene comprising these images on display device 24. For example, simulation application 28 may use partitioning module 27 to process mapping/partitioning information 33 to create the graphical representation of these partitions (e.g., multiple rectangular-shaped partitions) on a screen of display device 24.
  • Partitioning module 27 may be loaded from storage medium 26 and executed by processors 22. When executed, partitioning module 27 may also analyze graphics data, which may be included within state/performance information 32, for one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions. For example, partitioning module 27 may analyze one or more polygons that are used to create graphics images for display on display device 24, and determine which ones of these polygons may span across multiple partitions, as will be described in more detail below.
  • Storage medium 26 further includes a navigation module 29, which may also be executed by processors 22. Simulation application 28, during execution, may use navigation module 29 to display a navigation controller on display device 24. A user, such as an application developer, may interact with this navigation controller to view a modified perspective view of graphics images within a scene that is displayed on display device 24. Partitioning module 27 may then display a graphical representation of partitions that overlay the modified perspective view of the graphics images to graphically divide the modified scene. Partitioning module 27 may also then analyze one or more polygons that are used to create the graphics images in the modified perspective view to determine which ones of the polygons may span across multiple partitions.
  • FIG. 2 is a block diagram illustrating certain details of graphics processing system 4, graphics driver 18, and application computing device 20 shown in FIG. 1, according to one aspect. In FIG. 2, it is assumed that application computing device 20 is coupled to graphics processing system 4 of device 2. However, this is shown for illustration purposes only. In other scenarios, application computing device 20 may be coupled to many other forms of graphics processing systems and devices.
  • As shown in FIG. 2, graphics processing system 4 includes four programmable processors: control processor 10, vertex processor 14, graphics processor 12, and display processor 16, which are also shown in FIG. 1. Control processor 10 may control any of vertex processor 14, graphics processor 12, or display processor 16. In many cases, these processors 10, 12, 14, and 16 may be part of a graphics processing pipeline within system 4.
  • Control processor 10 may control one or more aspects of the flow of data or instruction execution through the pipeline, and may also provide geometry information for a graphics image to vertex processor 14. Vertex processor 14 may manage vertex transformation or geometry processing of the graphics image, which may be described or defined according to multiple vertices in primitive geometry form. Vertex processor 14 may provide its output to graphics processor 12, which may perform rendering or rasterization operations on the graphics image. Graphics processor 12 may provide its output to display processor 16, which prepares the graphics image, in pixel form, for display. Graphics processor 12 may also perform various operations on the pixel data, such as shading or scaling.
  • Often, graphics image data may be processed in this processing pipeline during execution of graphics instructions 30, which may be part of application instructions 21 (FIG. 1). As a result, graphics instructions 30 may be executed by one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. Application developers may typically not have much knowledge or control of which particular processors within graphics processing system 4 execute which ones of graphics instructions 30.
  • In some cases, one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16 may have performance issues, or serve as potential bottlenecks within the processing pipeline, during the execution of graphics instructions 30. In these cases, overall performance within graphics processing system 4 may be degraded, and the application developer may wish to make changes to graphics instructions 30 to improve performance. However, the developer may not necessarily know which ones of processors 10, 12, 14, or 16 may be the ones that have performance issues. These performance issues may include, for example, issues related to processor usage or utilization for one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16.
  • In particular, binning-based operations, in which primitive graphics data is divided up across multiple binning partitions prior to rendering, may often create certain performance issues. For example, if a polygon (such as triangle 146 shown in the example of FIG. 5A) spans across two different partitions (e.g., partitions 136 and 138 shown in FIG. 5A), the polygon may be divided into two constituent fragments, one for each partition, and then these two constituent fragments (e.g., fragments 146A and 146B shown in FIG. 5B) may be independently rendered into separate graphics images comprising pixel data. These two separate graphics images may then need to be combined prior to display in order to create a visual representation of triangle 146. The independent rendering operations for the two fragments of triangle 146, along with the combination operation for the two related graphics images, can cause performance overhead.
  • To assist with the problem of identifying performance bottlenecks and potential solutions, the graphics driver 18A of graphics device 2 may capture, or collect, graphics instructions 30 from graphics processing system 4 and route them to application computing device 20, as shown in FIG. 2. Graphics driver 18A is part of graphics drivers 18 shown in FIG. 1. Graphics driver 18A may be loaded and executed by one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. In addition, graphics driver 18A may also collect state and/or performance information 32 from one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16 and route this information 32 to application computing device 20, as well. In one aspect, graphics driver 18A may comprise an OpenGL ES driver when graphics instructions 30 include binary instructions that may have been generated or compiled from OpenGL ES API instructions.
  • Various forms of state data may be included within state/performance information 32. For example, the state data may include graphics data used during execution of, or otherwise associated with, graphics instructions 30. The state data may be related to a vertex array, such as position, color, coordinates, size, or weight data. State data may further include texture state data, point state data, line state data, polygon state data, culling state data, alpha test state data, blending state data, depth state data, stencil state data, or color state data. As described previously, state data may include both state information and actual data. In some cases, the state data may comprise data associated with one or more OpenGL tokens.
  • Various forms of performance data may also be included within state/performance information 32. In general, this performance data may include metrics or hardware counter data from one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. The performance data may include frame rate or cycle data. The cycle data may include data for cycles used for profiling, command arrays, vertex and index data, or other operations. In various aspects, various forms of state and performance data may be included within state/performance information 32 that is collected from graphics processing system 4 by graphics driver 18A.
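  • For illustration, a per-frame record combining such counters and state data might look like the sketch below, which also derives a utilization figure and a frame time from the raw counters. The counter names, clock frequency, and state fields are assumptions chosen for the example.

```python
# Hypothetical sketch of a per-frame state/performance record collected by
# the driver from the pipeline processors.

def frame_record(frame_index, counters, state):
    busy = counters["busy_cycles"]
    total = counters["total_cycles"]
    return {
        "frame": frame_index,
        "utilization": busy / total,
        "frame_time_ms": total / counters["clock_mhz"] / 1000.0,
        "counters": counters,
        "state": state,
    }

record = frame_record(
    frame_index=42,
    counters={"busy_cycles": 1_800_000, "total_cycles": 2_400_000, "clock_mhz": 200},
    state={"blending": True, "depth_test": True, "bound_texture": 7},
)
print(f"frame {record['frame']}: {record['utilization']:.0%} utilized, "
      f"{record['frame_time_ms']:.1f} ms")
```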
  • As described previously, application computing device 20 may display a representation of a graphics image according to received graphics instructions 30 and state/performance information 32. Application computing device 20 may also display a visual representation of state/performance information 32. By viewing and interacting with the re-created graphics image and/or the visual representation of the state/performance information 32, an application developer may be able to quickly identify and resolve performance issues within graphics processing system 4 of graphics device 2 during execution of graphics instructions 30. For example, the application developer may be able to identify which specific ones of processors 10, 12, 14, and/or 16 may have performance issues.
  • In addition, graphics driver 18A also provides mapping and/or partitioning information 33 to application computing device 20. As described previously in reference to FIG. 1, partitioning module 27 may process the received mapping/partitioning information 33 to display a graphical representation of partitions on display device 24 that overlay the graphics image in a scene, in order to graphically divide the scene. Partitioning module 27 may also use mapping/partitioning information 33 to analyze graphics data, which may be included within state/performance information 32, to determine which portions of the data are associated with multiple ones of the partitions. Mapping/partitioning information 33 may include mapping information that maps the graphics data, which may be used to generate one or more graphics images, to identified instructions within graphics instructions 30.
  • In an attempt to identify a workaround or resolution to any identified performance issues, the developer may initiate one or more requested modifications 34 on application computing device 20. For example, the developer may interact with the re-created image or the representation of state/performance information 32 to create the requested modifications 34. In some cases, the developer may even directly change the state/performance information 32, as described in more detail below, to generate the requested modifications 34. In certain cases, requested modifications 34 may include one or more requests to disable execution of one or more of graphics instructions 30 in graphics processing system 4 of graphics device 2, or requests to modify one or more of graphics instructions 30.
  • In some cases, the user may interact with a navigation controller displayed on display device 24 to request that a modified perspective view of a graphics scene be displayed. Navigation module 29 may manage the display of and interaction with this navigation controller. Any requests entered by the user via a user interface may be included with requested modifications 34. These requests may include, for example, requests to rotate one or more graphics images within the scene, requests to zoom in, requests to zoom out, or other similar requests to change a perspective view of images within the scene.
  • Requested modifications 34 are sent from application computing device 20 to graphics driver 18A, which handles the requests for graphics device 2 during operation. In many cases, the requested modifications 34 may include requests to modify state information, which may include data, within one or more of processors 10, 12, 14, or 16 within graphics processing system 4 during execution of graphics instructions 30. Graphics driver 18A may then implement the changes within graphics processing system 4 that are included within requested modifications 34. These changes may alter the flow of execution among processors 10, 12, 14, and/or 16 for execution of graphics instructions 30. In certain cases, one or more of graphics instructions 30 may be disabled during execution in graphics processing system 4 according to requested modifications 34.
  • Graphics driver 18A is capable of sending updated instructions and/or information 35 to application computing device 20 in response to the processing of requested modifications 34. Updated instructions/information 35 may include updated state information collected from graphics processing system 4 by graphics driver 18A, including performance information. Updated instructions/information 35 may include updated graphics instructions and/or graphics data.
  • Application computing device 20 may use updated instructions/information 35 to display an updated representation of the graphics image, as well as a visual representation of updated instructions/information 35. The application developer may then be capable of assessing whether the previously identified performance issues have been resolved or otherwise addressed. For example, the application developer may be able to analyze the updated image, as well as the visual representation of updated instructions/information 35 to determine if certain textures, polygons, or other features have been optimized, or if other performance parameters have been improved.
  • Updated instructions/information 35 may also include updated mapping and/or partitioning information, such as an updated mapping of graphics data to instructions that are also included within instructions/information 35. If an updated perspective view of a scene is displayed on display device 24 as a result of updated instructions/information 35, partitioning module 27 may display a graphical representation of partitions that overlay the modified perspective view and that graphically divide the modified scene. Partitioning module 27 may also analyze graphics data for the modified perspective view (which may also be included within updated instructions/information 35) to determine which portions of the graphics data are associated with multiple ones of the partitions. The partitions may be determined based upon rendering operations performed on the graphics data.
  • In such fashion, the application developer may be able to rapidly and effectively debug or analyze execution of graphics instructions 30 within an environment on application computing device 20 that simulates the operation of graphics processing system 4 on graphics device 2. The developer may iteratively interact with the displayed image and state/performance information on application computing device 20 to analyze multiple graphics images in a scene or multiple image frames to maximize execution performance of graphics instructions 30. Examples of such interaction and displayed information on application computing device 20 will be presented in more detail below.
  • FIG. 3 is a flow diagram illustrating additional details of operations that may be performed by control processor 10, graphics processor 12, vertex processor 14, and display processor 16, according to one aspect. FIG. 3 also shows operations for frame buffer storage 100 and display 102. In one aspect, control processor 10, vertex processor 14, graphics processor 12, and/or display processor 16 perform various operations as a result of execution of one or more of graphics instructions 30.
  • As described previously, control processor 10 may control one or more aspects of the flow of data or instruction execution through the graphics processing pipeline, and may also provide geometry information to vertex processor 14. As shown in FIG. 3, control processor 10 may perform geometry storage at 90. In some cases, geometry information for one or more primitives may be stored by control processor 10 in buffers 15 (FIG. 1). In some cases, geometry information may be stored in storage medium 8.
  • Vertex processor 14 may then obtain the geometry information for a given primitive, provided by control processor 10 and/or stored in buffers 15, for processing at 92. In certain cases, vertex processor 14 may manage vertex transformation of the geometry information. In certain cases, vertex processor 14 may perform lighting operations on the geometry information.
  • Vertex processor 14 may provide its output to graphics processor 12, which may perform rendering or rasterization operations on the data at 94. Graphics processor 12 may provide its output to display processor 16, which prepares one or more graphics images, in pixel form, for display. In some cases, graphics processor 12 may split graphics data for a geometry, such as one or more polygons, based upon determined binning partitions. As described previously, one or more of the processors within graphics device 2, such as graphics processor 12, may create multiple binning partitions that are associated with different screen areas of display 102 based upon certain factors, such as memory requirements or limitations. If a certain geometry (e.g., triangle) spans across multiple partitions, graphics processor 12 may split up the geometry along partition boundaries into fragments, and independently render the fragments. In some cases, graphics processor 12 may provide mapping/partitioning information 33 to application computing device 20 based upon the number, type, size, shape, etc., of the determined partitions.
  • Display processor 16 may perform various operations on the pixel data, including fragment processing to process various fragments of the data, at 98. In certain cases, this may include one or more of depth testing, stencil testing, blending, or texture mapping, as is known in the art. If graphics processor 12 previously rendered multiple geometry fragments, fragment processing 98 of display processor 16 may then combine the rendered fragments for storage into a frame buffer. When performing texture mapping, display processor 16 may incorporate texture storage and filtering information at 96. In some cases, display processor 16 may perform other operations on the rasterized data, such as shading or scaling operations.
  • Display processor 16 provides the output pixel information for storage into a frame buffer at 100. In some cases, the frame buffer may be included within buffers 15 (FIG. 1). In other cases, the frame buffer may be included within storage medium 8. The frame buffer stores one or more frames of image data, which can then be displayed at 102, such as on display device 6.
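  • The per-partition render-then-combine flow described above can be illustrated with the toy sketch below: each partition is "rendered" into its own small buffer, and a combining step copies the results into a single frame buffer before display. The buffer contents are placeholder characters, and the grid dimensions are assumptions made only for this example.

```python
# Toy sketch of rendering each bin separately and combining the results
# into one frame buffer.

def render_bin(bin_id, w, h):
    # Stand-in for rasterizing one partition's primitives into a bin buffer.
    return [[str(bin_id)] * w for _ in range(h)]

def combine_into_frame(frame, bin_buf, x0, y0):
    for dy, row in enumerate(bin_buf):
        for dx, px in enumerate(row):
            frame[y0 + dy][x0 + dx] = px

W, H, COLS, ROWS = 8, 4, 2, 2
frame = [["." for _ in range(W)] for _ in range(H)]
bw, bh = W // COLS, H // ROWS

for r in range(ROWS):
    for c in range(COLS):
        bin_buf = render_bin(bin_id=r * COLS + c, w=bw, h=bh)
        combine_into_frame(frame, bin_buf, x0=c * bw, y0=r * bh)

for row in frame:
    print("".join(row))
```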
  • As described previously, graphics instructions 30 may be executed by one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. Application developers may typically not have much knowledge or control of which particular processors within graphics processing system 4 execute which ones of graphics instructions 30. In certain cases, one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16 may have performance issues, or serve as potential bottlenecks within the processing pipeline, during the execution of graphics instructions 30.
  • It may often be difficult for an application developer to pinpoint the location of a bottleneck, or to determine how best to resolve or mitigate its effects. Thus, in one aspect, graphics instructions 30 and/or state information may be provided from graphics device 2 to an external computing device, such as application computing device 20. The state information may include data from one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16 with respect to various operations, such as those shown in FIG. 3, that occur during the execution of graphics instructions 30. Application computing device 20 may re-create the graphics image that is shown on graphics device 2 in order to help identify and resolve bottlenecks in an efficient and effective manner. Application computing device 20 may also display partitioning information, and analyze graphics data for one or more geometries to determine which portions of the data are associated with multiple ones of the partitions.
  • FIG. 4 is a block diagram illustrating additional details of graphics driver 18A shown in FIG. 2, according to one aspect. As described previously, graphics driver 18A may comprise instructions that can be executed within graphics processing system 4 (such as, for example, by one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16), and may be part of graphics drivers 18. Execution of graphics driver 18A allows graphics processing system 4 to communicate with application computing device 20. In one aspect, graphics driver 18A may comprise instructions that can be executed within graphics processing system 54, and may be part of graphics drivers 68.
  • Graphics driver 18A, when executed, includes various functional blocks, which are shown in FIG. 4 as transport interface module 110, processor usage module 112, hardware counter module 114, state/performance data module 116 (which manages other state and/or performance data), mapping/partitioning module 117, API trace module 118, and override module 120. Graphics driver 18A uses transport interface module 110 to communicate with application computing device 20.
  • Processor usage module 112 collects and maintains processor usage information for one or more of control processor 10, vertex processor 14, graphics processor 12, and display processor 16. The processor usage information may include processor cycle and/or performance information. Cycle data may include data for clock cycles used for profiling, command arrays, vertex and index data, or other operations. Processor usage module 112 may then provide such processor usage information to application computing device 20 via transport interface module 110. In some cases, processor usage module 112 provides this information to device 20 as it receives the information, in an asynchronous fashion. In other cases, processor usage module 112 may provide the information upon receipt of a request from device 20.
  • Hardware counter module 114 collects and maintains various hardware counters during execution of instructions by one or more of control processor 10, graphics processor 12, vertex processor 14, or display processor 16. The counters may keep track of various state indicators and/or metrics with respect to instruction execution within graphics processing system 4. Hardware counter module 114 may provide information to device 20 asynchronously or upon request.
  • State/performance data module 116 collects and maintains other state and/or performance data for one or more of control processor 10, graphics processor 12, vertex processor 14, and display processor 16 in graphics processing system 4. For example, the state data may, in some cases, comprise graphics data. The state data may include data related to a vertex array, such as position, color, coordinates, size, or weight data. State data may further include texture state data, point state data, line state data, polygon state data, culling state data, alpha test state data, blending state data, depth state data, stencil state data, or color state data. Performance data may include various other metrics or cycle data. State/performance data module 116 may provide information to device 20 asynchronously or upon request.
  • Mapping/partitioning module 117 collects mapping and/or partitioning information 33 from one or more of control processor 10, graphics processor 12, vertex processor 14, and display processor 16, and may also collect information from graphics data mapping information 23 (FIG. 1). The mapping information may include information to map identified portions of graphics data, which are rendered to generate graphics images for display, to one or more of graphics instructions 30. This mapping information may be helpful in mapping individual instructions back to the original graphics data that was used to render the output images. The partitioning information may include information identifying a number, type, size, shape, etc. of partitions that are created and used within graphics processing system 4 when splitting apart graphics data into constituent fragments prior to rendering. Mapping/partitioning module 117 may provide mapping/partitioning information 33 to application computing device 20.
  • API trace module 118 manages a flow and/or trace of graphics instructions that are executed by graphics processing system 4 and transported to application computing device 20 via transport interface module 110. As described previously, graphics device 2 provides a copy of graphics instructions 30, which are executed by graphics processing system 4 in its processing pipeline, to device 20. API trace module 118 manages the capture and transport of these graphics instructions 30. API trace module 118 may also provide certain information used with instruction mapping information 31 (FIG. 1) to map graphics instructions 30 to a visual representation of graphics instructions 30, such as API instructions that may have been used to generate graphics instructions 30.
  • Override module 120 allows graphics driver 18A to change, or override, the execution of certain instructions within graphics processing system 4. As described previously, application computing device 20 may send one or more requested modifications, such as modifications 34, to graphics device 2. In certain cases, requested modifications 34 may include one or more requests to disable execution of one or more of graphics instructions 30 in graphics processing system 4, or requests to modify one or more of graphics instructions 30. In some cases, requested modifications 34 may include requests to change state/performance information 32.
  • Override module 120 may accept and process requested modifications 34. For example, override module 120 may receive from device 20 any requests to modify one or more of graphics instructions 30, along with any requests to modify state/performance information 32, and send such requests to graphics processing system 4. One or more of control processor 10, graphics processor 12, vertex processor 14, and display processor 16 may then process these requests and generate updated instructions/information 35. Override module 120 may then send updated instructions/information 35 to application computing device 20 for processing, as described previously.
  • In such fashion, graphics driver 18A provides an interface between graphics device 2 and application computing device 20. Graphics driver 18A is capable of providing graphics instructions 30 and state/performance information 32 to application computing device 20, and also receiving requested modifications 34 from application computing device 20. After processing such requested modifications 34, graphics driver 18A is subsequently able to provide updated instructions/information 35 back to application computing device 20.
  • FIG. 5A is a conceptual diagram illustrating a first example of graphics data that may span across four partitions of a screen area 130 provided by a display device, such as display device 6 of graphics device 2, or display device 24 of application computing device 20 in FIG. 1. The data shown in FIG. 5A may, in some cases, be displayed on display device 6. In one aspect, the data shown in FIG. 5A is graphically shown on display device 24 of application computing device 20 based upon state/performance information 32 received from graphics device 2, and also upon mapping/partitioning information 33 received from graphics device 2. The state/performance information 32 may include graphics data for polygons (i.e., geometries) 140, 142, 144, and 146, and mapping/partitioning information 33 may include information for partitions 132, 134, 136, and 138. For example, the mapping/partitioning information 33 received by application computing device 20 may indicate that graphics device 2 uses four distinct partitions, represented by 132, 134, 136, and 138, when rendering graphics data.
  • In the example of FIG. 5A, four binning partitions 132, 134, 136, and 138 are implemented. These partitions represent four corresponding areas within screen area 130 that may be displayed on display device 6 or display device 24. As can be seen in the figure, polygons 140 and 142 are each defined by application instructions 21 (FIG. 1) to be located, or situated, completely within a corresponding partition. Polygon 140 is located within partition 132, and polygon 142 is located within partition 134. When rendering graphics data, graphics processor 12, for example, may render data within each of partitions 132, 134, 136, and 138 separately, and during independent rendering operations. Because polygon 140 is fully within partition 132, it may be rendered as a complete geometry during the rendering operation associated with partition 132. Likewise, because polygon 142 is fully within partition 134, it may be rendered as a complete geometry during the rendering operation associated with partition 134.
  • On the other hand, polygons 144 and 146 span across multiple partitions. Polygon 144 spans across all four partitions 132, 134, 136, and 138, while polygon 146 spans across two of the partitions 136 and 138. In order to render polygon 144, graphics processor 12 may split polygon 144 into four constituent fragments 144A, 144B, 144C, and 144D (shown in FIG. 5B). Graphics processor 12 may then independently render fragments 144A, 144B, 144C, and 144D during independent rendering operations. For example, during the rendering operation associated with partition 132, graphics processor 12 may render fragment 144A; during the rendering operation associated with partition 134, graphics processor 12 may render fragment 144B; during the rendering operation associated with partition 138, graphics processor 12 may render fragment 144C; and, during the rendering operation associated with partition 136, graphics processor 12 may render fragment 144D.
  • After these fragments 144A, 144B, 144C, and 144D have been independently rendered, display processor 16 may need to combine the rendered images for each of these fragments in order to display an accurate graphical representation of polygon 144. These separate rendering and combining operations may cause performance overhead.
  • Similarly, in order to render polygon 146, graphics processor 12 may split polygon 146 into two constituent fragments 146A and 146B (shown in FIG. 5B). Graphics processor 12 may then independently render fragments 146A and 146B during independent rendering operations. For example, during the rendering operation associated with partition 138, graphics processor 12 may render fragment 146A, and during the rendering operation associated with partition 136, graphics processor 12 may render fragment 146B. After these fragments 146A and 146B have been independently rendered, display processor 16 may combine the rendered images for each of these fragments in order to display an accurate graphical representation of polygon 146.
  • The information shown in FIGS. 5A-5B may, in some cases, be displayed on display device 24 of application computing device 20. Application computing device 20 may use graphics instructions 30 and state/performance information 32 to display a representation, or graphics images, of polygons 140, 142, 144, and 146 within screen area 130 of display device 24. Application computing device 20 may also use mapping/partitioning information 33 to display a graphical representation of partitions 132, 134, 136, and 138 that overlay the graphics images and that graphically divide the scene of these images. Application computing device 20 may also analyze the graphics data for polygons 140, 142, 144, and 146 to determine which ones of these polygons span across multiple ones of the partitions 132, 134, 136, and/or 138.
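  • A small worked example keyed to FIG. 5A is sketched below. The patent provides no coordinates, so the bounding boxes and the 2x2 partition layout are invented for illustration; the example simply reproduces the relationships described above, namely that polygons 140 and 142 each sit inside a single partition while polygon 144 straddles all four and polygon 146 straddles partitions 136 and 138.

```python
# Worked example with made-up coordinates: which FIG. 5A polygons overlap
# which of the four partitions 132/134/136/138.

partitions = {132: (0, 0, 160, 120), 134: (160, 0, 320, 120),
              136: (0, 120, 160, 240), 138: (160, 120, 320, 240)}
polygons = {140: (20, 20, 100, 80),      # bounding boxes: xmin, ymin, xmax, ymax
            142: (200, 30, 300, 90),
            144: (120, 80, 220, 180),
            146: (100, 150, 260, 220)}

for poly_id, (pxmin, pymin, pxmax, pymax) in polygons.items():
    spans = [pid for pid, (xmin, ymin, xmax, ymax) in partitions.items()
             if pxmin < xmax and pxmax > xmin and pymin < ymax and pymax > ymin]
    print(f"polygon {poly_id} overlaps partitions {spans}")
```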
  • When an application developer views the information displayed within screen area 130, the developer can see which polygons may be split by the hardware because they span across multiple partitions, and where such partitions are located. The developer may be able to use this information to determine an optimized configuration or location of certain graphics data within a graphics application, such as an application that uses application instructions 21 (FIG. 1), when defining a scene. For example, upon reviewing the information presented in FIG. 5A, the developer may determine to rearrange, or reconfigure, polygons 144 and 146, such that they do not span across multiple partitions.
  • Because the developer is presented with a representation of the partitions that overlay the graphics images within screen area 130, as these partitions are defined by graphics device 2, the developer may better understand how to define, configure, or locate polygons 144 and 146 such that they do not span across multiple partitions, or such that they span across only a minimal number of partitions. In some cases, the developer may determine to re-define a polygon as sub-polygons, such that they may not need to be combined by display processor 16 after rendering. For example, the developer may re-define polygon 146 in a modified version of application instructions 21 as two separate polygons 146A and 146B, as shown in FIG. 5B. If these polygons are separately defined at the outset, the rendered versions of these polygons may then not need to be combined prior to display, which may reduce performance overhead.
  • FIG. 6 is a conceptual diagram illustrating a second example of graphics data that may span across eight partitions of a screen area 150 provided by a display device, such as display device 6 (in graphics device 2) or display device 24 (in application computing device 20) shown in FIG. 1. As described previously, graphics processing system 4 of graphics device 2 may create or use binning partitions of various shapes, sizes, and types associated with a screen area of display device 6, depending on various factors such as memory size requirements or constraints, or other performance considerations. In the second example of FIG. 6, one or more of processors 10, 12, 14, or 16 may determine to create and use eight separate partitions, rather than the four partitions shown in the examples in FIGS. 5A and 5B.
  • Within screen area 150, the eight partitions are partitions 152, 154, 156, 158, 160, 162, 164, and 166. Each of these partitions is rectangular in shape. If, for purposes of illustration, screen area 150 is substantially the same size in area as screen area 130 (FIGS. 5A and 5B), each partition shown in FIG. 6 is one-half the size of each partition shown in FIGS. 5A and 5B.
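  • As a minimal sketch of how such rectangular binning partitions might be laid out, the following Python snippet generates a uniform grid over a screen area; an actual device may instead choose partition shape and count based on memory or performance constraints, as noted above, and the helper name is illustrative only.

      from typing import List, Tuple

      Rect = Tuple[float, float, float, float]  # (x0, y0, x1, y1)

      def grid_partitions(width: float, height: float, cols: int, rows: int) -> List[Rect]:
          pw, ph = width / cols, height / rows
          return [(c * pw, r * ph, (c + 1) * pw, (r + 1) * ph)
                  for r in range(rows) for c in range(cols)]

      # Four partitions as in FIGS. 5A-5B versus eight partitions as in FIG. 6; for the
      # same screen area, each of the eight covers half the area of each of the four.
      four_partitions = grid_partitions(100, 100, 2, 2)
      eight_partitions = grid_partitions(100, 100, 4, 2)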
  • Application instructions 21 may again, in the example of FIG. 6, be executed to create and/or render polygons 140, 142, 144, and 146. In FIGS. 5A and 5B, when only four partitions were used, polygons 140 and 142 did not span across multiple partitions. Thus, if graphics device 2 implements only four binning partitions, a graphics application that includes application instructions 21 may not experience additional performance overhead caused by the rendering of polygons 140 and 142, since these polygons do not span across multiple partitions. However, if graphics device 2 implements eight binning partitions, as shown in FIG. 6, polygons 140 and 142 each span across two separate partitions: polygon 140 spans across partitions 152 and 154, while polygon 142 spans across partitions 156 and 158.
  • In one aspect, a graphical representation of partitions 152, 154, 156, 158, 160, 162, 164, and 166 may be displayed to an application developer on display device 24. Any graphical display of such partitions that overlay graphics images, such as representations of polygons 140, 142, 144, and 146, may be quite useful to the developer. Often, the developer will have little information about the number, type, shape, or size of the partitions that are created and used by any individual device, such as graphics device 2. By being able to view a graphical representation of such partitions overlaid upon graphics images in a scene, the developer obtains a better idea of which graphics images or primitive graphics data, for example, may span across multiple partitions, and may therefore incur certain rendering performance overhead. As a result, the developer may be able to redefine, reconfigure, resize, or otherwise change the graphics data generated and manipulated by a graphics application, such as one that includes application instructions 21.
  • FIG. 7 is a flow diagram of a method that may be performed by application computing device 20 through execution of simulation application 28 (FIG. 1), according to one aspect. Application computing device 20 may receive mapping/partitioning information 33 from an external graphics device, such as graphics device 2 (170). Application computing device 20 may also receive graphics instructions 30 from graphics device 2 (172). Graphics instructions 30 are executed by graphics device 2 to display one or more graphics images, such as three-dimensional (3D) graphics images, on display device 6. In one aspect, graphics instructions 30 comprise a call stream that, when executed, renders the graphics images. In one aspect, the call stream comprises binary instructions generated from application programming interface (API) instructions.
  • Application computing device 20 may further receive state and/or performance information 32 from graphics device 2 (174). State/performance information 32 is associated with execution of graphics instructions 30 on graphics device 2. State/performance information 32 may include state information that indicates one or more states of graphics device 2 as it renders a graphics image. The state information may include state information from one or more processors of graphics device 2 that execute graphics instructions 30, such as control processor 10, graphics processor 12, vertex processor 14, and/or display processor 16. In some cases, the state information may comprise graphics data, such as primitive polygon data that is used by graphics processor 12 to render graphics image data.
  • Application computing device 20 may display a representation of one or more graphics images based on graphics instructions 30 and the state/performance information 32 in a graphical scene (176). In such fashion, application computing device 20 is capable of displaying a representation of these graphics images within a simulated environment that simulates graphics device 2. The simulated environment may be provided via execution of simulation application 28 on processors 22 of application computing device 20.
  • Application computing device 20 may display a graphical representation of partitions that overlay the graphics images and that graphically divide the scene (178). For example, application computing device 20 may display a graphical representation of the partitions shown in FIGS. 5A and 5B, in one scenario. In some cases, the partitions may comprise rectangular-shaped partitions. Application computing device 20 may display the graphical representation of the partitions based upon the received mapping/partitioning information 33.
  • In addition, application computing device 20 may analyze graphics data for the displayed graphics images and determine which portions are associated with multiple partitions (180). For example, application computing device 20 may analyze graphics primitives, such as polygon data used to generate or render the displayed graphics images, and determine which polygons (e.g., triangles) span across multiple partitions.
  • The receiving of the graphics instructions (172), the receiving of the state information (174), the displaying of the representation of the graphics images (176), the displaying of the partitions (178), and the analyzing of the graphics data (180) may be repeated for multiple image frames of the one or more graphics images if there are more frames to process (182). In this fashion, application computing device 20 is capable of displaying both still and moving graphics images (including 3D images) on display device 24, and displaying a graphical representation of partitions that overlay the images and graphically divide the scene. As the graphics images change, or as alternate perspective views of the images are shown, the application developer can continuously ascertain the relationship between the graphics data associated with the images and the location of the partitions.
  • FIG. 8 is a flow diagram of a method that may be performed by application computing device 20 through execution of simulation application 28 (FIG. 1), according to one aspect. In one aspect, processors 22 may execute simulation application 28 to display a navigation controller on display device 24.
  • Application computing device 20 may receive mapping/partitioning information 33 from an external graphics device, such as graphics device 2 (190). Application computing device 20 may also display a perspective view of one or more graphics images on its display device 24 (191). For example, application computing device 20 may display a perspective view of graphics images based upon received graphics instructions 30 and/or state/performance information 32.
  • Application computing device 20 may display a graphical representation of partitions that overlay the graphics images on display device 24 based upon the received mapping/partitioning information 33 (192). Application computing device 20 may also analyze graphics data for the graphics images, such as graphics data included within state/performance information 32, to determine which portions of the graphics data are associated with multiple ones of the partitions. For example, the graphics data may comprise a plurality of graphics primitives, such as triangles. Application computing device 20 may determine which ones of the triangles span across multiple ones of the partitions (193). These triangles may comprise triangles that have been at least partially rendered in multiple partitions.
  • In one aspect, application computing device 20 displays a graphical representation of the triangles that span across multiple ones of the partitions on display device 24 in conjunction with displaying the graphical representation of the partitions. In some cases, application computing device 20 may display a graphical indication, such as a color, for each triangle that spans across multiple partitions (194).
  • For example, application computing device 20 may, in one aspect, display a “heat map” representation of the triangles on display device 24, where each triangle has an associated graphical indicator, such as a color. In addition to color, other forms of graphical indicators (e.g., dashed lines, blinking indicators, highlighted indicators) may be used in certain scenarios to distinguish triangles from one another. Triangles that do not span across multiple partitions may be displayed in one color (e.g., blue). Triangles that span across multiple partitions (e.g., two to three partitions) may be displayed in a second color (e.g., purple). Triangles that span across more than three partitions may be prominently displayed in a third color (e.g., red). Thus, in this example, an application developer can quickly determine which triangles span across multiple partitions, and which ones span across more partitions than others. The developer may be able to use this information to determine how to reconfigure, redefine, or otherwise restructure triangles that span across multiple partitions to reduce performance (e.g., rendering) overhead.
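  • The color-coding rule in this example can be expressed as a small classification function. The following is a minimal Python sketch that assumes the thresholds and colors given above (one partition, two to three partitions, more than three partitions); the function name is hypothetical.

      def heat_map_color(partition_count: int) -> str:
          # Classify a triangle by the number of partitions it spans.
          if partition_count <= 1:
              return "blue"    # contained within a single partition
          if partition_count <= 3:
              return "purple"  # spans two to three partitions
          return "red"         # spans more than three partitions

      assert heat_map_color(1) == "blue"
      assert heat_map_color(3) == "purple"
      assert heat_map_color(5) == "red"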
  • Application computing device 20 may use navigation module 29 (FIG. 1) to display a navigation controller within a user interface displayed on display device 24. For example, the navigation controller may comprise a 3D camera controller. The application developer may interact with the navigation controller to navigate around the scene of graphics images that are displayed on display device 24. Application computing device 20 may receive user input from the developer via the navigation controller to modify a perspective view of the images (195).
  • Application computing device 20 may then display a modified perspective view of the graphics images in a modified graphics scene based upon the user input to the navigation controller. For example, the developer may interact with the navigation controller to rotate around a scene of images, to zoom in or zoom out of the scene, or to otherwise change a perspective view of the scene, causing application computing device 20 to display a modified perspective view of the images within the modified scene, including new images. The user input provided to the navigation controller may be sent back to graphics device 2 as requested modifications 34, and the display of the updated perspective view may be based upon the updated instructions/information 35 provided by graphics device 2 back to application computing device 20. In one aspect, requested modifications 34 may include at least one of a request to disable execution of one or more of graphics instructions 30 on graphics device 2, a request to modify one or more of graphics instructions 30 on graphics device 2, and a request to modify state/performance information 32 on graphics device 2.
  • In one aspect, application computing device 20 may also display a graphical representation of the partitions that overlay the modified perspective view of the graphics images and that graphically divide the modified scene. Application computing device 20 may analyze graphics data for the modified perspective view of the graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
  • The displaying of a perspective view of graphics image(s) (191), displaying of partitions that overlay the graphics image(s) (192), determining which primitive triangle(s) span across multiple partitions (193), displaying of a graphical indication for each determined triangle (194), and receiving user input via a navigation controller to modify a perspective view of a scene (195) may be repeated for multiple perspective views of the scene (196). As the graphics images change, or as alternate perspective views of the images are shown, the application developer can continuously ascertain the relationship between the graphics data associated with the images and the location of the partitions.
  • FIG. 9 is a conceptual diagram illustrating an example of a graphics device 200 that is coupled to a display device 201 for displaying information in a graphic window 203, according to one aspect. If, for example, graphics device 200 corresponds to graphics device 2 (FIG. 1), display device 201 may correspond to display device 24 of application computing device 20. Graphics device 200 is capable of displaying a 3D graphics image 202. Display device 201 is capable of displaying, within window 203, a 3D graphics image 210, which is a re-creation of graphics image 202, based upon graphics instructions and state/performance information that is sent from graphics device 200. Display device 201 is also capable of displaying visual representations of these instructions and state/performance information, such that a developer may change these instructions and information to modify graphics image 210 or an entire scene that includes graphics image 210. Display device 201 may be included within any type of computing device (not shown) that is coupled to graphics device 200 and is capable of receiving such instructions and state/performance information from graphics device 200. (For purposes of simplicity, the computing device that includes display device 201 has been left out of the conceptual diagram shown in FIG. 9.)
  • As described previously, graphics device 200 is capable of displaying 3D graphics image 202 (which is a cube in the example of FIG. 9). In the example of FIG. 9, graphics device 200 also has a keypad 204. A user may interact with keypad 204 to manipulate graphics device 200. Keypad 204 may include a number of keys and/or buttons. Graphics device 200 is capable of sending graphics instructions and state/performance information to a device (e.g., application computing device 20) that includes display device 201 via connector 206. In one aspect, connector 206 comprises a Universal Serial Bus (USB) connector. In other aspects, different forms of connectors may be used. In some aspects, wireless communication may replace connector 206.
  • As shown in the example of FIG. 9, display device 201 may display various types of information within a graphical user interface. In this example, display device 201 displays graphical window 203 within the graphical user interface. Window 203 includes a display area 211, a graphics instructions area 208, and a state/performance information area 214. Display area 211 includes 3D graphics image 210, which, as described previously, is a re-creation of 3D graphics image 202. In this example, 3D graphics image 210 comprises a cube. The information displayed on display device 201 comprises a representation, or simulation, of information displayed on graphics device 200 for purposes of debugging and testing, according to one aspect.
  • Graphics instructions area 208 includes a visual representation of one or more graphics instructions that have been received from graphics device 200. As described previously, the visual representation of such instructions may present the instructions in a form other than the one in which they were received. For example, if graphics device 200 sends binary graphics instructions, display device 201 may display a representation of such binary instructions in another form, such as higher-level application programming interface (API) instructions (e.g., OpenGL instructions). Mapping information (such as mapping information 31 shown in FIG. 1) may be used to map received binary instructions into another format that may be displayed within graphics instructions area 208.
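  • The mapping of received binary instructions to a higher-level representation can be sketched as a simple opcode-to-name lookup. The encoding below is invented purely for illustration and is not an actual call-stream format; only the OpenGL call names are real API identifiers.

      # Hypothetical one-byte opcodes mapped to higher-level API call names.
      OPCODE_TO_API = {
          0x01: "glClear",
          0x02: "glDrawArrays",
          0x03: "glBindTexture",
      }

      def decode_call_stream(call_stream: bytes) -> list:
          # Each byte of this toy encoding is treated as one call.
          return [OPCODE_TO_API.get(op, f"unknown_0x{op:02x}") for op in call_stream]

      print(decode_call_stream(bytes([0x01, 0x02, 0x7f])))
      # ['glClear', 'glDrawArrays', 'unknown_0x7f']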
  • State/performance information area 214 includes a visual representation of state and/or performance information that has been received from graphics device 200. The received graphics instructions and state/performance information may be used to display 3D graphics image 210 within display area 211. In one aspect, graphics device 200 may utilize a graphics driver that implements a state/performance data module (such as state/performance data module 116 shown in FIG. 4) to provide various state and/or performance data. The received state/performance information may include graphics data (e.g., primitive data and/or rasterized data).
  • Window 203 also includes one or more selectors 212A-212N. A user may select any of these selectors 212A-212N. Each selector 212A-212N may be associated with different functions, such as statistical and navigation functions, as will be described in more detail below. Window 203 further includes selectors 216A-216N and 218A-218N, each of which may be selected by a user. Each selector 216A-216N and 218A-218N may also be associated with different functions, such as metric functions, override functions, and/or texture functions, as will be described in more detail below in reference to FIG. 10.
  • A user, such as an application developer, may change information displayed within window 203. For example, the user may modify one or more of the instructions displayed within graphics instructions area 208, or any of the state/performance information within state/performance information area 214.
  • Any changes initiated by the user within window 203 may then be sent back to graphics device 200 as requested modifications. Graphics device 200 may then process these modifications, and provide updated instructions and/or information which may then be displayed within graphics instructions area 208 and/or state/performance information area 214. The updated instructions and/or information may also be used to display a modified version of 3D graphics image 210 within display area 211.
  • In one aspect, the state and/or performance information that may be displayed within area 214 may be analyzed by the computing device that includes display device 201 (such as application computing device 20 shown in FIG. 1) to identify potential bottlenecks during execution of the graphics instructions on graphics device 200. Ultimately, a user, such as an application developer, may wish to view the information presented in window 203 during a debugging process to optimize the execution of graphics instructions on graphics device 200. As described previously, bottlenecks may be introduced anywhere within the graphics processing pipeline in graphics device 200, and it may be difficult for an application developer to isolate such bottlenecks for performance optimization. Through analysis of state and/or performance information, potential bottlenecks and possible workarounds can be displayed in window 203, such as within one or more sub-windows or pop-up windows, or within area 214 of window 203.
  • In one aspect, window 203 may display a report on the bottlenecks encountered in the call-stream of the graphics instructions received from graphics device 200, and may also display possible workarounds. In some cases, these possible workarounds may be presented as “what-if” scenarios to the user. For example, rendering a non-optimized triangle-list in a call-stream may be presented as one possible scenario, while pre-processing that list through a triangle-strip optimization framework may be presented as a second possible scenario. The user may select any of these possible workaround scenarios as requested modifications, and the requested modifications are then transmitted back to graphics device 200, where the performance may be measured. Graphics device 200 then sends updated instructions/information, which may be presented within graphics instruction area 208 and/or state/performance information area 214. The user can then view the results, and compare results for various different potential workarounds to identify an optimum solution. The user can use this process to quickly identify a series of steps that can be taken in order to remove bottlenecks from their application.
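  • As a rough illustration of why the triangle-strip scenario can reduce cost, the following sketch compares the number of vertices submitted for N triangles as an unoptimized triangle list versus a single triangle strip, assuming the triangles can in fact be stripped; the helper names are illustrative and this is not the optimization framework referred to above.

      def triangle_list_vertices(n_triangles: int) -> int:
          # A triangle list submits three vertices per triangle.
          return 3 * n_triangles

      def triangle_strip_vertices(n_triangles: int) -> int:
          # A single strip reuses the previous two vertices for each new triangle.
          return n_triangles + 2

      n = 1000
      print(triangle_list_vertices(n), triangle_strip_vertices(n))  # 3000 versus 1002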
  • The user may iteratively continue to make adjustments within window 203 for purposes of experimentation, or trial/error debugging. The user may experiment with various different forms or combinations of graphics instructions and state/performance information to identify changes in the images or scenes that are displayed within display area 211. The user can use the simulation environment provided by the contents of window 203 to interactively view and modify the graphics instructions, which may be part of a call-stream, and states provided by graphics device 200 without having to recompile any source code and re-execute the compiled code on graphics device 200.
  • In some cases, the user may manipulate one or more of buttons 212A-212N to manipulate a graphical navigation controller, such as a graphical camera, to modify a perspective view of graphics image 210. Such manipulation may be captured as requested modifications that are then sent back to graphics device 200. The updated instructions/information provided by graphics device 200 are then used to modify the perspective view of graphics image 210.
  • In some cases, various texture and/or state information may be provided in area 214 of window 203 as modifiable entities. In addition, a user may even select, for example, a pixel of graphics image 210 within display area 211, such that one or more corresponding instructions within graphics instruction area 208 are identified. In this fashion, a user can effectively drill backwards to a rendering instruction or call that was used to render or create that pixel or other portions of graphics image 210. Because display device 201 may re-create image 210 in window 203 exactly as it is presented on graphics device 200, the user is able to quickly isolate issues in their application (which may be based on the various graphics instructions displayed in graphics instructions area 208), and modify any states within state/performance area 214 to prototype new effects.
  • In one aspect, display device 201 is also capable of displaying partitioning information, as well as polygon data that may span across multiple partitions. For example, the application developer may select a button, such as one of buttons 212A-212N, to cause display device 201 to display a graphical representation of partitions (e.g., rectangular-shaped partitions) that overlay image 210 and graphically divide the scene in display area 211. In some cases, when device 200 is part of graphics device 2, the displayed partitions may be based on received mapping/partitioning information 33 (FIG. 1). The device that includes display device 201 may also analyze graphics data (e.g., polygon data) for graphics image 210 to determine which portions of the graphics data are associated with multiple ones of the partitions. For example, if multiple polygons were used to render graphics image 210, the device may analyze the polygons to determine which ones of these polygons span across multiple partitions.
  • FIG. 10 is a conceptual diagram illustrating another example of graphics device 200 coupled to display device 201 that displays information within graphical window 220, according to one aspect. In this aspect, window 220 includes various instruction information as well as metric information.
  • For example, within graphics instructions area 208, various graphics instructions 242 are shown. Graphics instructions 242 may be a subset of graphics instructions that are provided by graphics device 200. For example, if graphics device 200 is part of graphics device 2, graphics instructions 242 may be a subset of graphics instructions 30. In some cases, mapping information (such as mapping information 31 shown in FIG. 1) may be used to map incoming instructions received from graphics device 200 to a visual representation of these instructions, materialized as instructions 242 that are displayed within graphics instructions area 208. For example, if the received instructions are in binary form, instructions 242 may comprise API instructions that were used to generate the instructions in binary form.
  • As is shown in the example of FIG. 10, graphics instructions 242 include both high-level instructions and low-level instructions. A user, such as an application developer, may use scrollbar 244 to view the full set of instructions 242. Certain high-level instructions may include one or more low-level instructions, such as lower-level API instructions. The application developer may, in some cases, select (e.g., by clicking on) a particular high-level instruction in order to view any low-level instructions that are part of, or executed by, the associated high-level instruction. As described previously, received graphics instructions, such as instructions 242, are used to generate the representation of graphics image 202, which comprises graphics image 210 shown in display area 211 of window 220.
  • Various selection buttons are shown below state/performance information area 214 in FIG. 10. These selection buttons include a textures button 236, an override button 238, and a metrics button 240. In the example of FIG. 10, the application developer has selected the metrics button 240. Upon selection of this button, various metrics options may be displayed. For example, one or more metric buttons 234A-234N may be displayed above state/performance area 214. Each metric button 234A-234N may be associated with a particular metric. In some cases, one or more of these metrics may be predefined or preconfigured metric types, and in some cases, the application developer may select or customize one or more of the metrics. Example metrics may include, for example, any one or more of the following: frames per second, % busy (for one or more processors), bus busy, memory busy, vertex busy, vertices per second, triangles per second, pixel clocks per second, fragments per second, etc. The application developer may select any of metric buttons 234A-234N to view additional details regarding the selected metrics.
  • For example, if metric button 234A is associated with the number of frames per second, the application developer may select metric button 234A to view additional details on the number of frames per second (related to performance) for graphics image 210, or select portions of graphics image 210. The developer may, in some cases, select metric button 234A, or drag metric button 234A into state/performance information area 214. The detailed information on the number of frames per second may be displayed within state/performance information area 214. The developer also may drag metric button 234A into display area 211, or select a portion of graphics image 210 for application of metric button 234A. For example, the developer may select a portion of graphics image 210 after selecting metric button 234A, and then detailed information on the number of frames per second for that selected portion may be displayed within state/performance information area 214. In such fashion, the developer may view performance data for any number of different metric types based upon selection of one or more of metric buttons 234A-234N, and even possible selection of graphics image 210 (or a portion thereof).
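  • As one concrete example of such a metric, frames per second can be derived from per-frame timestamps. The following sketch is purely illustrative, assumes timestamps in milliseconds, and is not the metric pipeline implemented by the graphics driver of graphics device 200.

      def frames_per_second(frame_timestamps_ms):
          # Average frame rate over a window of per-frame timestamps.
          if len(frame_timestamps_ms) < 2:
              return 0.0
          elapsed_ms = frame_timestamps_ms[-1] - frame_timestamps_ms[0]
          return (len(frame_timestamps_ms) - 1) * 1000.0 / elapsed_ms

      print(round(frames_per_second([0, 17, 33, 50, 66]), 1))  # roughly 60 frames per second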
  • In one aspect, metric data that may be displayed within window 220 may be provided by a graphics driver (e.g., graphics driver 18 shown in FIG. 4) of graphics device 200. This graphics driver may implement a hardware counter module (e.g., hardware counter module 114 of FIG. 4) and/or a processor usage module (e.g., processor usage module 112 of FIG. 4) to provide various data that may then be displayed as metric data within window 220.
  • The developer may, in some cases, also select textures button 236. Upon selection, various forms of texture information related to graphics image 210 may be displayed by display device 201. For example, texture information may be displayed within window 220, such as within state/performance information area 214. In some cases, the texture information may be displayed within an additional (e.g., pop-up) window (not shown). The developer may view the displayed texture information, but may also, in some cases, modify the texture information. In these cases, any modifications to the texture information may be propagated back to graphics device 200 as requested modifications. Upon receipt of updated instructions/information from graphics device 200, changes to graphics image 210 may be displayed within display area 211. FIG. 11 includes certain texture information that may be displayed upon selection of textures button 236.
  • The developer may, in some cases, also select override button 238. After selection of override button 238, certain information, such as instruction and/or state information, may be displayed (e.g., within window 220 or another window) which may be modified, or overridden, by the developer. Any modifications or overrides may be included within one or more requested modifications that are sent to graphics device 200. In one aspect, graphics device 200 may implement a graphics driver, such as graphics driver 18A (FIG. 4), to process any requested modifications. For example, graphics device 200 may use override module 120 to process such requested modifications that comprise one or more overrides.
  • In some cases, the developer may override one or more of graphics instructions 242 that are shown within graphics instructions area 208. In these cases, the developer may type or otherwise enter information within graphics instructions area 208 to modify or override one or more of graphics instructions 242. These modifications may then be sent to graphics device 200, which will provide updated instructions/information to update the display of graphics image 210 within display area 211. The developer may change, for example, parameters, ordering, type, etc., of graphics instructions 242 to override one or more functions that are provided by instructions 242. In one aspect, mapping information 31 (FIG. 1) may be used to map, or convert, changes to graphics instructions 242 into corresponding instructions of another format (e.g., binary instructions) that may then be provided to graphics device 200.
  • In some cases, the developer may also select override button 238 to override one or more functions associated with the processing pipeline that is implemented by graphics device 200. FIG. 12 shows an example of an override screen that may be displayed to the developer upon selection of override button 238.
  • Window 220 further includes selection buttons 231 and 232. Selection button 231 is a partition button, and selection button 232 is a navigation button. The developer may select partition button 231 to view a graphical representation of partitions, such as rectangular-shaped partitions, that overlay graphics image 210 and graphically divide the scene displayed in display area 211. Upon user selection of partition button 231, the graphical partitions may be displayed in display area 211.
  • Display area 211, or a separate display area or window, may also display information based upon an analysis of graphics data for graphics image 210 that determines which portions of the data are associated with multiple partitions. For example, display area 211, or a separate display area or window, may display which of the polygons used to render graphics image 210 span across multiple partitions, in conjunction with the graphical representation of the partitions. In some cases, a graphical indication, such as a color, may be displayed for each polygon (e.g., triangle) that spans across multiple partitions.
  • For example, in one aspect, a “heat map” may be displayed, where each triangle is displayed in a particular color. Triangles that do not span across multiple partitions may be displayed in one color (e.g., blue). Triangles that span across multiple partitions (e.g., two to three partitions) may be displayed in a second color (e.g., purple). Triangles that span across more than three partitions may be prominently displayed in a third color (e.g., red). Thus, in this example, an application developer can quickly determine which triangles span across multiple partitions, and which ones span across more partitions than others. The developer may be able to use this information to determine how to reconfigure, redefine, or otherwise restructure triangles that span across multiple partitions to reduce performance (e.g., rendering) overhead when generating graphics image 210.
  • The developer may also select navigation button 232 to navigate within display area 211, and even possibly to change a perspective view of graphics image 210 within display area 211. For example, upon selection of navigation button 232, a 3D graphical camera or navigation controller may be displayed. The developer may interact with the controller to navigate to any area within display area 211. The developer may also use the controller to change a perspective view of graphics image 210, such as by rotating graphics image 210 or zooming in/out.
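  • A navigation controller of this kind is commonly implemented as an orbiting camera whose eye position is computed from an azimuth, an elevation, and a distance about a target point. The following sketch is a generic illustration of that idea, not the controller provided by navigation module 29.

      import math

      def orbit_eye(target, azimuth_deg, elevation_deg, distance):
          # Convert spherical orbit parameters to a camera eye position around the target.
          az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
          x = target[0] + distance * math.cos(el) * math.sin(az)
          y = target[1] + distance * math.sin(el)
          z = target[2] + distance * math.cos(el) * math.cos(az)
          return (x, y, z)

      # Increasing the azimuth corresponds to rotating around the scene;
      # decreasing the distance corresponds to zooming in.
      print(orbit_eye((0.0, 0.0, 0.0), 45.0, 30.0, 10.0))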
  • In one aspect, any developer-initiated changes through selection of navigation button 232 and interaction with a graphical navigation controller may be propagated back to graphics device 200 as requested modifications (e.g., part of requested modifications 34 shown in FIG. 1). Updated instructions/information provided by graphics device 200 may then be used to update the display (e.g., perspective view) of graphics image 210. In addition, updated instructions may be displayed within graphics instructions area 208. Updated state/performance information may be displayed within state/performance information area 214.
  • In one aspect, a graphical representation of the partitions may be displayed and overlaid upon a modified perspective view of graphics image 210. In addition, graphics data contained within the updated instructions/information for the modified perspective view of graphics image 210 may be analyzed to determine which portions of the data are associated with multiple partitions.
  • As a result, the developer may effectively and efficiently determine how alternate perspectives, orientations, views, etc., for rendering and displaying graphics image 210 may affect the performance and state of graphics device 200. This may be very useful to the developer in optimizing the graphics instructions 242 that are used to create and render graphics image 210 in the simulation environment displayed on display device 201 and, effectively, graphics image 202 that is displayed on graphics device 200. In one aspect, any changes in the position, perspective, orientation, etc., of graphics image 210, based upon developer-initiated selections and controls within window 220, may also be seen as changes for graphics image 202 that may be displayed on graphics device 200 during the testing process.
  • Through interaction with graphical window 220 within a graphical user interface, the application developer can attempt to identify performance issues and/or bottlenecks during execution of graphics instructions 242, which are a visual representation of graphics instructions that are executed by graphics device 200 to create graphics image 202. A representation of graphics image 202 (i.e., graphics image 210) is displayed within display area 211 based upon graphics instructions 242 and state/performance data received from graphics device 200. By viewing graphics instructions 242, graphics image 210, and the state/performance information, as well as the effects that are based upon user-initiated modifications to one or more of these, an application developer can interactively and dynamically engage in a trial-and-error, or debugging, process to optimize the execution of instructions on graphics device 200, and to eliminate or mitigate any performance issues (e.g., bottlenecks) during instruction execution.
  • In addition, the visual representation of a graphical scene that includes a number of different graphical partitions may allow a developer to identify portions of the graphics scene that exhibit reduced performance due to costs that may be associated with screen partitioning. The developer may review the partitioning and associated analysis information to investigate alternate compositions of the scene to help reduce these costs and/or related performance overhead.
  • The techniques described in this disclosure may be implemented within a general purpose microprocessor, digital signal processor (DSP), application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other equivalent logic devices. Accordingly, the terms “processor” or “controller,” as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
  • The various components illustrated herein may be realized by any suitable combination of hardware, software, firmware, or any combination thereof. In the figures, various components are depicted as separate units or modules. However, all or several of the various components described with reference to these figures may be integrated into combined units or modules within common hardware and/or software. Accordingly, the representation of features as components, units or modules is intended to highlight particular functional features for ease of illustration, and does not necessarily require realization of such features by separate hardware or software components. In some cases, various units may be implemented as programmable processes performed by one or more processors.
  • Any features described herein as modules, devices, or components, including graphics device 100 and/or its constituent components, may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. In various aspects, such components may be formed at least in part as one or more integrated circuit devices, which may be referred to collectively as an integrated circuit device, such as an integrated circuit chip or chipset. Such circuitry may be provided in a single integrated circuit chip device or in multiple, interoperable integrated circuit chip devices, and may be used in any of a variety of image, display, audio, or other multi-media applications and devices. In some aspects, for example, such components may form part of a mobile device, such as a wireless communication device handset.
  • If implemented in software, the techniques may be realized at least in part by a computer-readable medium comprising code with instructions that, when executed by one or more processors, performs one or more of the methods described above. The computer-readable medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), embedded dynamic random access memory (eDRAM), static random access memory (SRAM), flash memory, magnetic or optical data storage media.
  • The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by one or more processors. Any connection may be properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Combinations of the above should also be included within the scope of computer-readable media. Any software that is utilized may be executed by one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.
  • Various aspects have been described herein. These and other aspects are within the scope of the following claims.

Claims (80)

1. A method comprising:
displaying one or more graphics images in a graphical scene;
displaying a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene; and
analyzing graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
2. The method of claim 1, further comprising:
repeating the displaying of one or more graphics images, displaying a graphical representation of a plurality of partitions, and analyzing graphics data for multiple frames of the one or more graphics images.
3. The method of claim 1, where at least one of the one or more graphics images comprises a three-dimensional graphics image.
4. The method of claim 1, wherein displaying the graphical representation of the plurality of partitions comprises displaying a graphical representation of a plurality of rectangular-shaped partitions that overlay the one or more graphics images and graphically divide the scene.
5. The method of claim 1, wherein the graphics data comprises a plurality of graphics primitives that are used to render the one or more graphics images.
6. The method of claim 5, wherein the plurality of graphics primitives comprises a plurality of triangles.
7. The method of claim 6, wherein analyzing the graphics data comprises determining which ones of the triangles span across the multiple ones of the partitions.
8. The method of claim 7, wherein determining which ones of the triangles span across the multiple ones of the partitions comprises determining which ones of the triangles have been at least partially rendered in the multiple ones of the partitions.
9. The method of claim 7, further comprising:
displaying a graphical representation of the triangles that span across the multiple ones of the partitions.
10. The method of claim 9, wherein displaying the graphical representation of the triangles that span across the multiple ones of the partitions comprises displaying the graphical representation of the triangles that span across the multiple ones of the partitions in conjunction with displaying the graphical representation of the partitions.
11. The method of claim 9, wherein displaying the graphical representation of the triangles that span across the multiple ones of the partitions comprises displaying a graphical indication for each triangle, wherein the graphical indication provides a visual indication of a number of partitions that are spanned across by each corresponding triangle.
12. The method of claim 11, wherein the graphical indication for each corresponding triangle comprises a color.
13. The method of claim 11, wherein displaying the graphical indication for each triangle comprises prominently displaying the graphical indications for those triangles that span across more partitions than other triangles.
14. The method of claim 1, wherein displaying the graphical representation of the plurality of partitions comprises:
receiving partitioning information from an external graphics device; and
displaying the graphical representation of the plurality of partitions based on the received partitioning information.
15. The method of claim 1, further comprising:
displaying a navigation controller;
receiving user input to interact with the navigation controller; and
displaying a modified perspective view of the one or more graphics images in a modified graphics scene based on the user input to the navigation controller.
16. The method of claim 15, wherein:
displaying the graphical representation of the plurality of partitions comprises displaying the graphical representation of the plurality of partitions that overlay the modified perspective view of the one or more graphics images and that graphically divide the modified scene; and
analyzing the graphics data comprises analyzing graphics data for the modified perspective view of the one or more graphics images to determine which portions of the graphics data are associated with the multiple ones of the partitions.
17. The method of claim 1, further comprising:
receiving graphics instructions from an external graphics device; and
receiving state information from the external graphics device, wherein the state information is associated with execution of the graphics instructions on the external graphics device, and
wherein displaying the one or more graphics images in the graphical scene comprises displaying the one or more graphics images based on the graphics instructions and the state information.
18. The method of claim 17, wherein the state information comprises the graphics data for the one or more graphics images.
19. The method of claim 17, further comprising:
receiving mapping information from the external graphics device that maps the graphics data to the graphics instructions.
20. The method of claim 1, wherein the partitions are determined based upon rendering operations performed on the graphics data.
21. A computer-readable medium comprising computer-executable instructions for causing one or more processors to:
display one or more graphics images in a graphical scene;
display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene; and
analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
22. The computer-readable medium of claim 21, further comprising computer-executable instructions for causing the one or more processors to:
repeat the displaying of one or more graphics images, displaying a graphical representation of a plurality of partitions, and analyzing graphics data for multiple frames of the one or more graphics images.
23. The computer-readable medium of claim 21, where at least one of the one or more graphics images comprises a three-dimensional graphics image.
24. The computer-readable medium of claim 21, wherein the computer-executable instructions for causing the one or more processors to display the graphical representation of the plurality of partitions comprise computer-executable instructions for causing the one or more processors to display a graphical representation of a plurality of rectangular-shaped partitions that overlay the one or more graphics images and graphically divide the scene.
25. The computer-readable medium of claim 21, wherein the graphics data comprises a plurality of graphics primitives that are used to render the one or more graphics images.
26. The computer-readable medium of claim 25, wherein the plurality of graphics primitives comprises a plurality of triangles.
27. The computer-readable medium of claim 26, wherein the computer-executable instructions for causing the one or more processors to analyze the graphics data comprise computer-executable instructions for causing the one or more processors to determine which ones of the triangles span across the multiple ones of the partitions.
28. The computer-readable medium of claim 27, wherein the computer-executable instructions for causing the one or more processors to determine which ones of the triangles span across the multiple ones of the partitions comprise computer-executable instructions for causing the one or more processors to determine which ones of the triangles have been at least partially rendered in the multiple ones of the partitions.
29. The computer-readable medium of claim 27, further comprising computer-executable instructions for causing the one or more processors to:
display a graphical representation of the triangles that span across the multiple ones of the partitions.
30. The computer-readable medium of claim 29, wherein the computer-executable instructions for causing the one or more processors to display the graphical representation of the triangles that span across the multiple ones of the partitions comprise computer-executable instructions for causing the one or more processors to display the graphical representation of the triangles that span across the multiple ones of the partitions in conjunction with displaying the graphical representation of the partitions.
31. The computer-readable medium of claim 29, wherein the computer-executable instructions for causing the one or more processors to display the graphical representation of the triangles that span across the multiple ones of the partitions comprise computer-executable instructions for causing the one or more processors to display a graphical indication for each triangle, wherein the graphical indication provides a visual indication of a number of partitions that are spanned across by each corresponding triangle.
32. The computer-readable medium of claim 31, wherein the graphical indication for each corresponding triangle comprises a color.
33. The computer-readable medium of claim 31, wherein the computer-executable instructions for causing the one or more processors to display the graphical indication for each triangle comprise computer-executable instructions for causing the one or more processors to prominently display the graphical indications for those triangles that span across more partitions than other triangles.
34. The computer-readable medium of claim 21, wherein the computer-executable instructions for causing the one or more processors to display the graphical representation of the plurality of partitions comprise computer-executable instructions for causing the one or more processors to:
receive partitioning information from an external graphics device; and
display the graphical representation of the plurality of partitions based on the received partitioning information.
35. The computer-readable medium of claim 21, further comprising computer-executable instructions for causing the one or more processors to:
display a navigation controller;
receive user input to interact with the navigation controller; and
display a modified perspective view of the one or more graphics images in a modified graphics scene based on the user input to the navigation controller.
36. The computer-readable medium of claim 35, wherein:
the computer-executable instructions for causing the one or more processors to display the graphical representation of the plurality of partitions comprise computer-executable instructions for causing the one or more processors to display the graphical representation of the plurality of partitions that overlay the modified perspective view of the one or more graphics images and that graphically divide the modified scene; and
the computer-executable instructions for causing the one or more processors to analyze the graphics data comprise computer-executable instructions for causing the one or more processors to analyze graphics data for the modified perspective view of the one or more graphics images to determine which portions of the graphics data are associated with the multiple ones of the partitions.
37. The computer-readable medium of claim 21, further comprising computer-executable instructions for causing the one or more processors to:
receive graphics instructions from an external graphics device; and
receive state information from the external graphics device, wherein the state information is associated with execution of the graphics instructions on the external graphics device, and
wherein the computer-executable instructions for causing the one or more processors to display the one or more graphics images in the graphical scene comprise computer-executable instructions for causing the one or more processors to display the one or more graphics images based on the graphics instructions and the state information.
38. The computer-readable medium of claim 37, wherein the state information comprises the graphics data for the one or more graphics images.
39. The computer-readable medium of claim 37, further comprising computer-executable instructions for causing the one or more processors to:
receive mapping information from the external graphics device that maps the graphics data to the graphics instructions.
40. The computer-readable medium of claim 21, wherein the partitions are determined based upon rendering operations performed on the graphics data.
41. A device comprising:
a display device; and
one or more processors configured to:
display one or more graphics images in a graphical scene on the display device;
display a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene on the display device; and
analyze graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
42. The device of claim 41, wherein the one or more processors are further configured to repeat the displaying of one or more graphics images, displaying a graphical representation of a plurality of partitions, and analyzing graphics data for multiple frames of the one or more graphics images.
43. The device of claim 41, where at least one of the one or more graphics images comprises a three-dimensional graphics image.
44. The device of claim 41, wherein the one or more processors are configured to display the graphical representation of the plurality of partitions at least by displaying a graphical representation of a plurality of rectangular-shaped partitions that overlay the one or more graphics images and graphically divide the scene.
45. The device of claim 41, wherein the graphics data comprises a plurality of graphics primitives that are used to render the one or more graphics images.
46. The device of claim 45, wherein the plurality of graphics primitives comprises a plurality of triangles.
47. The device of claim 46, wherein the one or more processors are configured to analyze the graphics data at least by determining which ones of the triangles span across the multiple ones of the partitions.
48. The device of claim 47, wherein the one or more processors are configured to determine which ones of the triangles span across the multiple ones of the partitions at least by determining which ones of the triangles have been at least partially rendered in the multiple ones of the partitions.
49. The device of claim 47, wherein the one or more processors are further configured to display a graphical representation of the triangles that span across the multiple ones of the partitions.
50. The device of claim 49, wherein the one or more processors are configured to display the graphical representation of the triangles that span across the multiple ones of the partitions at least by displaying the graphical representation of the triangles that span across the multiple ones of the partitions in conjunction with displaying the graphical representation of the partitions.
51. The device of claim 49, wherein the one or more processors are configured to display the graphical representation of the triangles that span across the multiple ones of the partitions at least by displaying a graphical indication for each triangle, wherein the graphical indication provides a visual indication of a number of partitions that are spanned across by each corresponding triangle.
52. The device of claim 51, wherein the graphical indication for each corresponding triangle comprises a color.
53. The device of claim 51, wherein the one or more processors are configured to display the graphical indication for each triangle at least by prominently displaying the graphical indications for those triangles that span across more partitions than other triangles.
54. The device of claim 41, wherein the one or more processors are configured to display the graphical representation of the plurality of partitions at least by receiving partitioning information from an external graphics device and displaying the graphical representation of the plurality of partitions based on the received partitioning information.
55. The device of claim 41, wherein the one or more processors are further configured to display a navigation controller, receive user input to interact with the navigation controller, and display a modified perspective view of the one or more graphics images in a modified graphics scene based on the user input to the navigation controller.
56. The device of claim 55, wherein:
the one or more processors are configured to display the graphical representation of the plurality of partitions at least by displaying the graphical representation of the plurality of partitions that overlay the modified perspective view of the one or more graphics images and that graphically divide the modified scene; and
the one or more processors are configured to analyze the graphics data at least by analyzing graphics data for the modified perspective view of the one or more graphics images to determine which portions of the graphics data are associated with the multiple ones of the partitions.
57. The device of claim 41, wherein the one or more processors are further configured to receive graphics instructions from an external graphics device and receive state information from the external graphics device, wherein the state information is associated with execution of the graphics instructions on the external graphics device, and wherein the one or more processors are configured to display the one or more graphics images in the graphical scene at least by displaying the one or more graphics images based on the graphics instructions and the state information.
58. The device of claim 57, wherein the state information comprises the graphics data for the one or more graphics images.
59. The device of claim 57, wherein the one or more processors are further configured to receive mapping information from the external graphics device that maps the graphics data to the graphics instructions.
60. The device of claim 41, wherein the partitions are determined based upon rendering operations performed on the graphics data.
61. A device comprising:
means for displaying one or more graphics images in a graphical scene;
means for displaying a graphical representation of partitions that overlay the one or more graphics images and that graphically divide the scene; and
means for analyzing graphics data for the one or more graphics images to determine which portions of the graphics data are associated with multiple ones of the partitions.
62. The device of claim 61, further comprising:
means for repeating the displaying of one or more graphics images, displaying a graphical representation of a plurality of partitions, and analyzing graphics data for multiple frames of the one or more graphics images.
63. The device of claim 61, wherein at least one of the one or more graphics images comprises a three-dimensional graphics image.
64. The device of claim 61, wherein the means for displaying the graphical representation of the plurality of partitions comprises means for displaying a graphical representation of a plurality of rectangular-shaped partitions that overlay the one or more graphics images and graphically divide the scene.
65. The device of claim 61, wherein the graphics data comprises a plurality of graphics primitives that are used to render the one or more graphics images.
66. The device of claim 65, wherein the plurality of graphics primitives comprises a plurality of triangles.
67. The device of claim 66, wherein the means for analyzing the graphics data comprises means for determining which ones of the triangles span across the multiple ones of the partitions.
68. The device of claim 67, wherein the means for determining which ones of the triangles span across the multiple ones of the partitions comprises means for determining which ones of the triangles have been at least partially rendered in the multiple ones of the partitions.
69. The device of claim 67, further comprising:
means for displaying a graphical representation of the triangles that span across the multiple ones of the partitions.
70. The device of claim 69, wherein the means for displaying the graphical representation of the triangles that span across the multiple ones of the partitions comprises means for displaying the graphical representation of the triangles that span across the multiple ones of the partitions in conjunction with displaying the graphical representation of the partitions.
71. The device of claim 69, wherein the means for displaying the graphical representation of the triangles that span across the multiple ones of the partitions comprises means for displaying a graphical indication for each triangle, wherein the graphical indication provides a visual indication of a number of partitions that are spanned across by each corresponding triangle.
72. The device of claim 71, wherein the graphical indication for each corresponding triangle comprises a color.
73. The device of claim 71, wherein the means for displaying the graphical indication for each triangle comprises means for prominently displaying the graphical indications for those triangles that span across more partitions than other triangles.
74. The device of claim 61, wherein the means for displaying the graphical representation of the plurality of partitions comprises:
means for receiving partitioning information from an external graphics device; and
means for displaying the graphical representation of the plurality of partitions based on the received partitioning information.
75. The device of claim 61, further comprising:
means for displaying a navigation controller;
means for receiving user input to interact with the navigation controller; and
means for displaying a modified perspective view of the one or more graphics images in a modified graphics scene based on the user input to the navigation controller.
76. The device of claim 75, wherein:
the means for displaying the graphical representation of the plurality of partitions comprises means for displaying the graphical representation of the plurality of partitions that overlay the modified perspective view of the one or more graphics images and that graphically divide the modified scene; and
the means for analyzing the graphics data comprises means for analyzing graphics data for the modified perspective view of the one or more graphics images to determine which portions of the graphics data are associated with the multiple ones of the partitions.
77. The device of claim 61, further comprising:
means for receiving graphics instructions from an external graphics device; and
means for receiving state information from the external graphics device, wherein the state information is associated with execution of the graphics instructions on the external graphics device, and
wherein the means for displaying the one or more graphics images in the graphical scene comprises means for displaying the one or more graphics images based on the graphics instructions and the state information.
78. The device of claim 77, wherein the state information comprises the graphics data for the one or more graphics images.
79. The device of claim 77, further comprising:
means for receiving mapping information from the external graphics device that maps the graphics data to the graphics instructions.
80. The device of claim 61, wherein the partitions are determined based upon rendering operations performed on the graphics data.
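The claims above center on one analysis: overlaying a grid of (typically rectangular) partitions on a rendered scene, determining how many partitions each triangle spans, and drawing each triangle with a graphical indication, such as a color, that is more prominent the more partitions it crosses. The C++ sketch below illustrates that idea only; the 32×32 tile size, the bounding-box approximation of the per-partition test, and names such as tilesSpanned and spanColor are assumptions made for illustration, not details taken from the patent.

```cpp
// Illustrative sketch (not the patented implementation): count how many
// screen-space tiles (partitions) each triangle touches and map that count
// to an overlay colour, so triangles spanning more partitions stand out.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec2     { float x, y; };
struct Triangle { Vec2 v[3]; };
struct Color    { unsigned char r, g, b; };

// Number of tiles in a grid of tileW x tileH cells touched by the triangle's
// axis-aligned bounding box. A real binning pass would test the triangle
// against each tile; the bounding box is a conservative stand-in.
int tilesSpanned(const Triangle& t, int tileW, int tileH) {
    float minX = std::min({t.v[0].x, t.v[1].x, t.v[2].x});
    float maxX = std::max({t.v[0].x, t.v[1].x, t.v[2].x});
    float minY = std::min({t.v[0].y, t.v[1].y, t.v[2].y});
    float maxY = std::max({t.v[0].y, t.v[1].y, t.v[2].y});
    int x0 = static_cast<int>(minX) / tileW, x1 = static_cast<int>(maxX) / tileW;
    int y0 = static_cast<int>(minY) / tileH, y1 = static_cast<int>(maxY) / tileH;
    return (x1 - x0 + 1) * (y1 - y0 + 1);
}

// Map the span count to an overlay colour: the more partitions a triangle
// crosses, the redder (more prominent) its indication.
Color spanColor(int spanned) {
    int level = std::min(spanned - 1, 4);  // clamp to 5 prominence levels
    return { static_cast<unsigned char>(63 + 48 * level), 32, 32 };
}

int main() {
    const int tileW = 32, tileH = 32;  // assumed partition size
    std::vector<Triangle> tris = {
        {{{ 5.f,  5.f}, {20.f, 10.f}, {10.f, 25.f}}},  // fits inside one tile
        {{{10.f, 10.f}, {90.f, 15.f}, {40.f, 70.f}}},  // spans several tiles
    };
    for (size_t i = 0; i < tris.size(); ++i) {
        int n = tilesSpanned(tris[i], tileW, tileH);
        Color c = spanColor(n);
        std::printf("triangle %zu spans %d partition(s), colour (%u,%u,%u)\n",
                    i, n, c.r, c.g, c.b);
    }
    return 0;
}
```

In a tool built along these lines, the colour from spanColor would be applied to the rendered triangles in conjunction with the displayed partition grid, rather than printed to the console.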
US12/507,767 2008-07-25 2009-07-22 Partitioning-based performance analysis for graphics imaging Abandoned US20100020069A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US12/507,767 US20100020069A1 (en) 2008-07-25 2009-07-22 Partitioning-based performance analysis for graphics imaging
PCT/US2009/051772 WO2010011980A1 (en) 2008-07-25 2009-07-24 Partitioning-based performance analysis for graphics imaging
CN2009801274650A CN102089784A (en) 2008-07-25 2009-07-24 Partitioning-based performance analysis for graphics imaging
JP2011520245A JP5242788B2 (en) 2008-07-25 2009-07-24 Partition-based performance analysis for graphics imaging
KR1020117004633A KR101286938B1 (en) 2008-07-25 2009-07-24 Partitioning-based performance analysis for graphics imaging
CA2730298A CA2730298A1 (en) 2008-07-25 2009-07-24 Partitioning-based performance analysis for graphics imaging
EP09790829A EP2319015A1 (en) 2008-07-25 2009-07-24 Partitioning-based performance analysis for graphics imaging
TW098125262A TW201015483A (en) 2008-07-25 2009-07-27 Partitioning-based performance analysis for graphics imaging

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US8366508P 2008-07-25 2008-07-25
US8365908P 2008-07-25 2008-07-25
US8365608P 2008-07-25 2008-07-25
US12/507,767 US20100020069A1 (en) 2008-07-25 2009-07-22 Partitioning-based performance analysis for graphics imaging

Publications (1)

Publication Number Publication Date
US20100020069A1 (en) 2010-01-28

Family

ID=41568212

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/507,767 Abandoned US20100020069A1 (en) 2008-07-25 2009-07-22 Partitioning-based performance analysis for graphics imaging
US12/507,732 Expired - Fee Related US8587593B2 (en) 2008-07-25 2009-07-22 Performance analysis during visual creation of graphics images

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/507,732 Expired - Fee Related US8587593B2 (en) 2008-07-25 2009-07-22 Performance analysis during visual creation of graphics images

Country Status (1)

Country Link
US (2) US20100020069A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100020087A1 (en) * 2008-07-25 2010-01-28 Qualcomm Incorporated Performance analysis during visual creation of graphics images
US20100020098A1 (en) * 2008-07-25 2010-01-28 Qualcomm Incorporated Mapping graphics instructions to associated graphics data during performance analysis
US20110148901A1 (en) * 2009-12-17 2011-06-23 James Adams Method and System For Tile Mode Renderer With Coordinate Shader
US20120081376A1 (en) * 2010-10-01 2012-04-05 Sowerby Andrew M Graphics System which Utilizes Fine Grained Analysis to Determine Performance Issues
US20120268470A1 (en) * 2006-12-07 2012-10-25 Sony Computer Entertainment Inc. Heads-up-display software development tool
US20140282176A1 (en) * 2013-03-14 2014-09-18 Adobe Systems Incorporated Method and system of visualizing rendering data
US20140298246A1 (en) * 2013-03-29 2014-10-02 Lenovo (Singapore) Pte, Ltd. Automatic display partitioning based on user number and orientation
US9417767B2 (en) 2010-10-01 2016-08-16 Apple Inc. Recording a command stream with a rich encoding format for capture and playback of graphics content
US9645916B2 (en) 2014-05-30 2017-05-09 Apple Inc. Performance testing for blocks of code
US20180047129A1 (en) * 2014-04-05 2018-02-15 Sony Interactive Entertainment America Llc Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9159160B1 (en) * 2011-10-30 2015-10-13 Lockheed Martin Corporation Texture sharing between application modules
US9214005B2 (en) 2012-12-18 2015-12-15 Google Technology Holdings LLC Methods and systems for overriding graphics commands
US9137320B2 (en) * 2012-12-18 2015-09-15 Google Technology Holdings LLC Methods and systems for overriding graphics commands
US9934122B2 (en) * 2014-07-09 2018-04-03 Microsoft Technology Licensing, Llc Extracting rich performance analysis from simple time measurements
US9747659B2 (en) * 2015-06-07 2017-08-29 Apple Inc. Starvation free scheduling of prioritized workloads on the GPU
KR102482874B1 (en) 2015-09-11 2022-12-29 삼성전자 주식회사 Apparatus and Method of rendering
KR102491606B1 (en) 2018-01-09 2023-01-26 삼성전자주식회사 Processor device collecting performance information through command-set-based replay

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5701405A (en) * 1995-06-21 1997-12-23 Apple Computer, Inc. Method and apparatus for directly evaluating a parameter interpolation function used in rendering images in a graphics system that uses screen partitioning
US5706479A (en) * 1995-10-02 1998-01-06 Apple Computer, Inc. Method and apparatus for dynamically detecting overflow of a multi-layer buffer
US5760778A (en) * 1995-08-15 1998-06-02 Friedman; Glenn M. Algorithm for representation of objects to enable robotic recongnition
US5889994A (en) * 1997-03-27 1999-03-30 Hewlett-Packard, Co. Method for cataloging graphics primitives by rendering state
USH1812H (en) * 1997-10-24 1999-11-02 Sun Microsystems, Inc. Method for encoding bounding boxes of drawing primitives to be rendered for multi-resolution supersampled frame buffers
US6091422A (en) * 1998-04-03 2000-07-18 Avid Technology, Inc. System for editing complex visual data providing a continuously updated rendering
US6145099A (en) * 1996-08-13 2000-11-07 Nec Corporation Debugging system
US20020093520A1 (en) * 2001-01-12 2002-07-18 Larson Ronald D. Polygon anti-aliasing with any number of samples on an irregular sample grid using a hierarchical tiler
US20030156131A1 (en) * 2002-02-21 2003-08-21 Samir Khazaka Method and apparatus for emulating a mobile device
US20030198377A1 (en) * 2002-04-18 2003-10-23 Stmicroelectronics, Inc. Method and system for 3D reconstruction of multiple views with altering search path and occlusion modeling
US20050018901A1 (en) * 2003-07-23 2005-01-27 Orametrix, Inc. Method for creating single 3D surface model from a point cloud
US20050244040A1 (en) * 2004-05-03 2005-11-03 Xingyuan Li Method and apparatus for automatically segmenting a microarray image
US20060109240A1 (en) * 2004-11-23 2006-05-25 Fu Rong Y Apparatus and method for enhancing the capability of the display output of portable devices
US7095416B1 (en) * 2003-09-22 2006-08-22 Microsoft Corporation Facilitating performance analysis for processing
US7095146B2 (en) * 2001-08-06 2006-08-22 Tokyo R&D Co., Ltd. Motor having housing member
US7167171B2 (en) * 2004-06-29 2007-01-23 Intel Corporation Methods and apparatuses for a polygon binning process for rendering
US20080007563A1 (en) * 2006-07-10 2008-01-10 Microsoft Corporation Pixel history for a graphics application
US7478187B2 (en) * 2006-03-28 2009-01-13 Dell Products L.P. System and method for information handling system hot insertion of external graphics
US20090089714A1 (en) * 2007-09-28 2009-04-02 Yahoo! Inc. Three-dimensional website visualization
US20090097757A1 (en) * 2007-10-15 2009-04-16 Casey Wimsatt System and method for teaching social skills, social thinking, and social awareness
US7623892B2 (en) * 2003-04-02 2009-11-24 Palm, Inc. System and method for enabling a person to switch use of computing devices
US20100020098A1 (en) * 2008-07-25 2010-01-28 Qualcomm Incorporated Mapping graphics instructions to associated graphics data during performance analysis
US20100020087A1 (en) * 2008-07-25 2010-01-28 Qualcomm Incorporated Performance analysis during visual creation of graphics images
US8296738B1 (en) * 2007-08-13 2012-10-23 Nvidia Corporation Methods and systems for in-place shader debugging and performance tuning

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000020354A (en) 1998-07-07 2000-01-21 Hitachi Ltd Editor for number of execution steps
US6952215B1 (en) 1999-03-31 2005-10-04 International Business Machines Corporation Method and system for graphics rendering using captured graphics hardware instructions
JP2001191274A (en) 1999-12-30 2001-07-17 Sony Corp Data holding device, robot device, modification device and modification method
US7446777B2 (en) 2003-09-26 2008-11-04 Rensselaer Polytechnic Institute System and method of computing and displaying property-encoded surface translator descriptors
US8194071B2 (en) 2004-05-24 2012-06-05 St-Ericsson Sa Tile based graphics rendering
US8589142B2 (en) 2005-06-29 2013-11-19 Qualcomm Incorporated Visual debugging system for 3D user interface program
DE102006014902B4 (en) 2006-03-30 2009-07-23 Siemens Ag Image processing device for the extended display of three-dimensional image data sets
US20080049015A1 (en) 2006-08-23 2008-02-28 Baback Elmieh System for development of 3D content used in embedded devices

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120268470A1 (en) * 2006-12-07 2012-10-25 Sony Computer Entertainment Inc. Heads-up-display software development tool
US9013494B2 (en) * 2006-12-07 2015-04-21 Sony Computer Entertainment Inc. Heads-up-display software development tool
US8587593B2 (en) 2008-07-25 2013-11-19 Qualcomm Incorporated Performance analysis during visual creation of graphics images
US20100020098A1 (en) * 2008-07-25 2010-01-28 Qualcomm Incorporated Mapping graphics instructions to associated graphics data during performance analysis
US20100020087A1 (en) * 2008-07-25 2010-01-28 Qualcomm Incorporated Performance analysis during visual creation of graphics images
US9792718B2 (en) 2008-07-25 2017-10-17 Qualcomm Incorporated Mapping graphics instructions to associated graphics data during performance analysis
US20110148901A1 (en) * 2009-12-17 2011-06-23 James Adams Method and System For Tile Mode Renderer With Coordinate Shader
US8692848B2 (en) * 2009-12-17 2014-04-08 Broadcom Corporation Method and system for tile mode renderer with coordinate shader
US9886739B2 (en) 2010-10-01 2018-02-06 Apple Inc. Recording a command stream with a rich encoding format for capture and playback of graphics content
US8933948B2 (en) * 2010-10-01 2015-01-13 Apple Inc. Graphics system which utilizes fine grained analysis to determine performance issues
US9417767B2 (en) 2010-10-01 2016-08-16 Apple Inc. Recording a command stream with a rich encoding format for capture and playback of graphics content
US20120081376A1 (en) * 2010-10-01 2012-04-05 Sowerby Andrew M Graphics System which Utilizes Fine Grained Analysis to Determine Performance Issues
US9619529B2 (en) * 2013-03-14 2017-04-11 Adobe Systems Incorporated Method and system of visualizing rendering data
US20140282176A1 (en) * 2013-03-14 2014-09-18 Adobe Systems Incorporated Method and system of visualizing rendering data
US20140298246A1 (en) * 2013-03-29 2014-10-02 Lenovo (Singapore) Pte, Ltd. Automatic display partitioning based on user number and orientation
US20180047129A1 (en) * 2014-04-05 2018-02-15 Sony Interactive Entertainment America Llc Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
US10438312B2 (en) 2014-04-05 2019-10-08 Sony Interactive Entertainment LLC Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
US10915981B2 (en) * 2014-04-05 2021-02-09 Sony Interactive Entertainment LLC Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
US11748840B2 (en) 2014-04-05 2023-09-05 Sony Interactive Entertainment LLC Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
US9645916B2 (en) 2014-05-30 2017-05-09 Apple Inc. Performance testing for blocks of code

Also Published As

Publication number Publication date
US20100020087A1 (en) 2010-01-28
US8587593B2 (en) 2013-11-19

Similar Documents

Publication Publication Date Title
US20100020069A1 (en) Partitioning-based performance analysis for graphics imaging
US9792718B2 (en) Mapping graphics instructions to associated graphics data during performance analysis
KR101286318B1 (en) Displaying a visual representation of performance metrics for rendered graphics elements
EP1594091B1 (en) System and method for providing an enhanced graphics pipeline
TWI584223B (en) Method and system of graphics processing enhancement by tracking object and/or primitive identifiers,graphics processing unit and non-transitory computer readable medium
EP2321730B1 (en) Performance analysis during visual creation of graphics images
CN116185743B (en) Dual graphics card contrast debugging method, device and medium of OpenGL interface
US11270494B2 (en) Shadow culling
KR101286938B1 (en) Partitioning-based performance analysis for graphics imaging
CN117523062B (en) Method, device, equipment and storage medium for previewing illumination effect

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUALCOMM INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ELMIEH, BABACK;RITTS, JAMES P.;DORBIE, ANGUS;REEL/FRAME:023101/0480

Effective date: 20090722

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE