WO2000045253A9 - Division unit in a processor using a piece-wise quadratic approximation technique - Google Patents

Division unit in a processor using a piece-wise quadratic approximation technique

Info

Publication number
WO2000045253A9
WO2000045253A9 (PCT/US2000/001780)
Authority
WO
WIPO (PCT)
Prior art keywords
term
multiplier
result
storage
value
Application number
PCT/US2000/001780
Other languages
French (fr)
Other versions
WO2000045253A1 (en)
Inventor
Ravi Shankar
Subramania I Sudharsanan
Original Assignee
Sun Microsystems Inc
Application filed by Sun Microsystems Inc filed Critical Sun Microsystems Inc
Publication of WO2000045253A1 publication Critical patent/WO2000045253A1/en
Publication of WO2000045253A9 publication Critical patent/WO2000045253A9/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/52Multiplying; Dividing
    • G06F7/535Dividing only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/499Denomination or exception handling, e.g. rounding or overflow
    • G06F7/49942Significance control
    • G06F7/49947Rounding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/3001Arithmetic instructions
    • G06F9/30014Arithmetic instructions with variable precision
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2207/00Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F2207/38Indexing scheme relating to groups G06F7/38 - G06F7/575
    • G06F2207/3804Details
    • G06F2207/386Special constructional features
    • G06F2207/3884Pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/483Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
    • G06F7/487Multiplying; Dividing
    • G06F7/4873Dividing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/499Denomination or exception handling, e.g. rounding or overflow
    • G06F7/49942Significance control
    • G06F7/49947Rounding
    • G06F7/49957Implementation of IEEE-754 Standard
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/499Denomination or exception handling, e.g. rounding or overflow
    • G06F7/49942Significance control
    • G06F7/49947Rounding
    • G06F7/49963Rounding to nearest

Definitions

  • the present invention relates to computational and calculation functional units of computers, controllers and processors. More specifically, the present invention relates to functional units that perform division operations.
  • Computer systems have evolved into versatile systems with a vast range of utility including demanding applications such as multimedia, network communications of a large data bandwidth, signal processing, and the like. Accordingly, general-purpose computers are called upon to rapidly handle large volumes of data. Much of the data handling, particularly for video playback, voice recognition, speech processing, three-dimensional graphics, and the like, involves computations that must be executed quickly and with a short latency.
  • One technique for executing computations rapidly while handling the large data volumes is to include multiple computation paths in a processor.
  • Each of the data paths includes hardware for performing computations so that multiple computations may be performed in parallel.
  • However, including multiple computation units greatly increases the size of the integrated circuits implementing the processor. What are needed in a computation functional unit are computation techniques and computation integrated circuits that operate with high speed while consuming only a small amount of integrated circuit area.
  • a division instruction is highly burdensome and difficult to implement in silicon, typically utilizing many clock cycles and consuming a large integrated circuit area.
  • a computation unit computes a division operation Y/X by determining the value of a divisor reciprocal 1/X and multiplying the reciprocal by a numerator Y.
  • the reciprocal 1/X value is determined using a quadratic approximation having a form:
  • Ax² + Bx + C, where coefficients A, B, and C are constants that are stored in a storage or memory such as a read-only memory (ROM).
  • the bit length of the coefficients determines the error in a final result.
  • Storage size is reduced through use of "least mean square error" techniques in the determination of the coefficients that are stored in the coefficient storage. During the generation of partial products x², Ax², and Bx, the process of rounding is eliminated, thereby reducing the computational logic to implement the division functionality.
  • a method of computing a floating point division operation uses a piece-wise quadratic approximation to determine a value 1/X where X is a floating point number having a numerical format including a sign bit, an exponent field, and a mantissa field.
  • a floating point division Y/X is executed by computing the value 1/X and multiplying the result by a value Y.
  • the value x is defined as a plurality of lower order bits of the mantissa.
  • Coefficients A, B, and C are derived for the division operation to reduce the least mean square error using a least squares approximation of a plurality of equally-spaced points within an interval.
  • an interval includes 256 equally-spaced points.
  • the coefficients are stored in a storage and accessed during execution of the division computation instruction.
  • a lookup table in storage is indexed using the leading or higher order bits of the mantissa. Since the most significant bit of the mantissa is always 1, some embodiments use a plurality of higher order bits but not including the most significant bit to index into the lookup table storage.
  • the method produces a "pre-rounded" Y/X result that is rounded to the nearest value.
  • the pre-rounded result is truncated at a round bit position and incremented at the round bit position to generate an incremented quotient that is within one LSB of a correct solution.
  • the incremented quotient multiplied by the divisor is compared with the dividend by subtraction. If the remainder is negative, then the pre-rounded result is more than half an LSB below the correct value and is incremented. If the remainder is positive, then the prerounded result is less than half an LSB below the correct value and is not incremented. If the remainder is zero, the result is selected based on the LSB of the pre-rounded result.
  • FIGURES 1A and 1B are respectively a schematic block diagram showing an embodiment of a general functional unit and a simplified schematic timing diagram showing timing of a general functional unit pipeline.
  • FIGURE 2 is a schematic block diagram that illustrates an embodiment of a long-latency pipeline used in the general functional unit.
  • FIGURE 3 is a graphic that shows the format of a single-precision floating point number.
  • FIGURES 4A and 4B are graphs showing exponential functions that describe a technique utilized to perform a single-precision floating-point division operation.
  • FIGURE 5 is a table showing a data flow for the floating point division operation.
  • FIGURE 6 is a table showing different cases for rounding to the nearest even scheme.
  • FIGURE 7 is a schematic block diagram illustrating a single integrated circuit chip implementation of a processor in accordance with an embodiment of the present invention.
  • FIGURE 8 is a schematic block diagram showing the core of the processor.
  • FIGURE 9 is a schematic block diagram that shows a logical view of the register file and functional units in the processor.
  • FIGURE 10 is a schematic timing diagram that illustrates timing of the processor pipeline.
  • Referring to FIGURES 1A and 1B, a schematic block diagram shows an embodiment of a general functional unit 822 (illustrated more generally as part of a processor in FIGURE 8), a simplified schematic timing diagram illustrating timing of general functional unit pipelines 100, and a bypass diagram showing possible bypasses for the general functional unit 822.
  • the general functional unit 822 supports instructions that execute in several different pipelines. Instructions include single-cycle ALU operations, four-cycle getir instructions, and five-cycle setir instructions. Long-latency instructions are not fully pipelined.
  • the general functional unit 822 supports six-cycle and 34-cycle long operations and includes a dedicated pipeline for load/store operations.
  • the general functional unit 822 and a pipeline control unit 826 include four pipelines, Gpipe1 150, Gpipe2 152, Gpipe3 154, and a load/store pipeline 156.
  • the load/store pipeline 156 and the Gpipe1 150 are included in the pipeline control unit 826.
  • the Gpipe2 152 and Gpipe3 154 are located in the general functional unit 822.
  • the general functional unit 822 includes a controller 160 that supplies control signals for the pipelines Gpipe1 150, Gpipe2 152, and Gpipe3 154.
  • the pipelines include execution stages (En) and annex stages (An).
  • the general functional unit pipelines 100 include a load pipeline 110, a 1-cycle pipeline 112, a 6-cycle pipeline 114, and a 34-cycle pipeline 116.
  • Pipeline stages include execution stages (E and En), annex stages (An), trap-handling stages (T), and write-back stages (WB). Stages An and En are prioritized with smaller priority numbers n having a higher priority.
  • the processor 700 supports precise traps. Precise exceptions are detected by E4/A3 stages of media functional unit and general functional unit operations. One-cycle operations are staged in annex and trap stages (A1, A2, A3, T) until all exceptions in one VLIW group are detected. Traps are generated in the trap-generating stages (T). When the general functional unit 822 detects a trap in a VLIW group, all instructions in the VLIW group are canceled.
  • the long-latency instruction is held in a register, called an A4-stage register, inside the annex and is broadcast to the register file segments 824 only when the VLIW group under execution does not include a one-cycle GFU instruction that is to be broadcast.
  • results of a long-latency instruction are bypassed from the E6-stage of a 6-cycle instruction to any GFU and MFU instruction in the decoding (D) stage. If a long-latency instruction is stalled by another instruction in the VLIW group, results of the stalled long-latency instruction are bypassed from the annex (A4) stage to all instructions in the general functional unit 822 and all media functional units 820 in the decoding (D) stage.
  • Data from the T-stage of the pipelines are broadcast to all the register file segments 824, which latch the data in the writeback (WB) stage before writing the data to the storage cells.
  • a schematic block diagram illustrates an embodiment of a long-latency pipeline 120 used in the general functional unit (GFU) 822.
  • the long-latency pipeline 120 executes six-cycle instructions.
  • the six-cycle instructions include a single-precision floating point division (fdiv) instruction, a single- precision floating point reciprocal square root (frecsqrt) instruction, a fixed-point power computation (ppower) instruction, and a fixed-point reciprocal square root (precsqrt) instruction.
  • the single-precision floating point division (fdiv) instruction has the form: fdiv rs1, rs2, rd where rs1 and rs2 designate a numerator source operand and a denominator source operand, respectively.
  • the rd operand designates a destination register for holding the result.
  • the single-precision floating point reciprocal square root (frecsqrt) instruction has the form: frecsqrt rs1, rd where rs1 designates a source operand and the rd operand identifies the destination register that holds the reciprocal square root result.
  • the fixed-point power computation (ppower) instruction has the form: ppower rs1, rs2, rd where rs1 and rs2 designate source operands and rd identifies a destination register operand.
  • the ppower instruction computes rs1**rs2 for each half of the source registers.
  • the fixed-point reciprocal square root (precsqrt) instruction has the form: precsqrt rs1, rd where rs1 designates a source operand and the rd operand identifies the destination register that holds the reciprocal square root result.
  • the precsqrt instruction computes the reciprocal square root for each half of rs1.
  • the illustrative long-latency pipeline 120 has eight megacell circuits including a 16-bit normalization megacell 210, a 24-bit compare megacell 212, a 16-bit by 16-bit multiplier megacell 214, an exponent add megacell 216, a 16-bit barrel shifter megacell 218, a 25-by-24 multiplier megacell 220, a compressor and adder megacell 222, and a multiplexer and incrementer megacell 224.
  • the 16-bit normalization megacell 210 contains a leading zero detector and a shifter that shifts a sixteen bit value according to the status of the leading zero detection.
  • the 16-bit normalization megacell 210 also includes two 4-bit registers that store the shift count values.
  • the 24-bit compare megacell 212 compares two 24-bit mantissa values.
  • the 24-bit compare megacell 212 generates only equal and less-than signals.
  • the 16-bit by 16-bit multiplier megacell 214 multiplies two 16-bit values.
  • the actual datapath of the 16-bit by 16-bit multiplier megacell 214 is 18 bit cells wide and includes eight 18-bit rows.
  • the 16-bit by 16-bit multiplier megacell 214 is a radix-4 Booth-recoded multiplier that generates an output signal in the form of a 32-bit product in binary form.
  • the Booth recoders in the 16-bit by 16-bit multiplier megacell 214 are recoded off the binary format in contrast to a carry-save format.
  • the exponent add megacell 216 subtracts the exponent for a floating point divide operation.
  • the exponent add megacell 216 also performs shifting for execution of a square root operation.
  • the 16-bit barrel shifter megacell 218 is a 16-bit barrel shifter.
  • the 16-bit barrel shifter megacell 218 is a subset of a 32-bit shifter.
  • the 25-by-24 multiplier megacell 220 is a 25-bit by 24-bit multiplier.
  • the 25-by-24 multiplier megacell 220 has an actual datapath of 27 bit cells with twelve rows of the 27 bit cells.
  • the 25-by-24 multiplier megacell 220 is a radix-4 Booth-recoded multiplier that generates an output signal in the form of a 28-bit product in a carry-save format.
  • the Booth recoders are recoded from the carry-save format in contrast to a binary format.
  • the compressor and adder megacell 222 includes a 4:2 compressor followed by a 28-bit adder.
  • the 28-bit adder uses a Kogge-Stone algorithm with Ling's modification.
  • the multiplexer and incrementer megacell 224 produces two 24-bit products, a sum of two 28-bit numbers in the carry-save format and the increment of the sum.
  • the final multiplexer selects a correct answer based on the sign of the result from the compressor and adder megacell 222.
  • the adder of the multiplexer and incrementer megacell 224 uses conditional sum adders.
  • a graphic shows the format of a single-precision floating point number 300.
  • the single-precision floating point format 300 has three fields including one bit for the sign 302, eight bits for the exponent 304, and 23 bits for the mantissa 306.
  • the sign bit 302 equal to zero designates a positive number.
  • the sign bit 302 equal to one designates a negative number.
  • the value of the exponent 304 ranges from 0 to 255.
  • the bias of the exponent 304 is +127. Of the 256 values in the range 0 to 255, only the values of 0 and 255 are reserved for special values.
  • the maximum positive exponent is +127.
  • the minimum negative exponent is -126.
  • the lower order 23 bits designate the mantissa 306, which is an unsigned fractional number. An implicit value of 1 is included prior to the unsigned fraction.
  • the range of values of the mantissa 306 is from 1.0 to (2 − 2⁻²³). The mantissa range is defined only for normal numbers.
  • when the exponent field is zero and the mantissa is nonzero, the floating point number represents a denormal number.
  • the value of the denormal number is given by the equation, as follows: value = (−1)^sign × 2⁻¹²⁶ × (mantissa/2²³), with no implicit leading 1 included before the fraction.
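  • As a minimal illustration of the field layout and special cases described above (a Python sketch, not part of the patent text; the function and variable names are illustrative assumptions), the single-precision fields can be unpacked and interpreted as follows:

        import struct

        def decode_float32(value):
            # Reinterpret the 32-bit pattern of a single-precision float.
            bits = struct.unpack('>I', struct.pack('>f', value))[0]
            sign = (bits >> 31) & 0x1          # 1 sign bit
            exponent = (bits >> 23) & 0xFF     # 8 exponent bits, biased by +127
            mantissa = bits & 0x7FFFFF         # 23-bit unsigned fraction

            if exponent == 0:
                # Denormal: no implicit leading 1, exponent fixed at -126.
                number = ((-1) ** sign) * (mantissa / 2**23) * 2.0**-126
            elif exponent == 255:
                # Reserved exponent value: infinity or NaN.
                number = float('inf') if mantissa == 0 else float('nan')
            else:
                # Normal: implicit leading 1 ahead of the fraction.
                number = ((-1) ** sign) * (1 + mantissa / 2**23) * 2.0**(exponent - 127)
            return sign, exponent, mantissa, number

        print(decode_float32(0.15625))   # (0, 124, 2097152, 0.15625)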
  • Referring to FIGURE 4A, a graph of an exponential function is shown that describes a technique utilized to perform a single-precision floating-point division operation.
  • the fdiv instruction is implemented using a "piece-wise quadratic approximation to 1/X" operation.
  • the floating point division (Y/X) is executed by calculating the value of the reciprocal term 1/X and then multiplying the resultant value by Y to obtain the division result.
  • A, B, and C are constant coefficients that are stored in a 256-word ROM lookup table.
  • the A, B, and C coefficients are generated using a "generalized-inverse" method for least-squares approximation of 256 equally spaced points within each interval.
  • the ROM lookup table is indexed using the leading 9 bits of the mantissa 306. Since the MSB is always 1, only the next eight bits are used to index to the ROM lookup table.
  • the coefficients A, B, and C are 11, 18, and 28 bits wide, respectively.
  • the coefficients have sufficient precision to give a final result for single-precision accuracy. Since x has eight leading zeros (x < 2⁻⁸, so x² < 2⁻¹⁶), the MSB of Ax² affects only the 17th or less significant bits of the approximation. Similarly, the MSB of Bx affects only the 9th or less significant bits of the approximation. Coefficients are computed to minimize the least mean square error.
  • a prerounded result for the Y/X division is computed by multiplying the approximated value of 1/X and Y. The result is "rounded" correctly in a later cycle.
  • the rounding mode is a "round to nearest" operation.
  • the equally-spaced points are denoted xj for a range of integers j from 0 to 255.
  • solving at the points xj produces 256 equations that determine the coefficients Aj, Bj, and Cj using a singular-value decomposition method.
  • Aj, Bj, and Cj are computed for all 256 intervals.
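  • The following offline sketch (Python with NumPy, not part of the patent text; numpy.linalg.lstsq stands in for an explicit singular-value decomposition, and the coefficients are left in floating point rather than quantized to the 11-, 18-, and 28-bit hardware widths) shows one way such per-interval coefficients could be fitted:

        import numpy as np

        def fit_reciprocal_coefficients(intervals=256, points=256):
            # Fit 1/X over [1, 2) with one quadratic A*x^2 + B*x + C per interval,
            # where x is the offset of the mantissa within its interval.
            table = []
            for j in range(intervals):
                start = 1.0 + j / intervals                       # interval base in [1, 2)
                x = np.linspace(0.0, 1.0 / intervals, points, endpoint=False)
                y = 1.0 / (start + x)                             # target reciprocal values
                m = np.column_stack([x * x, x, np.ones_like(x)])
                (a, b, c), *_ = np.linalg.lstsq(m, y, rcond=None) # least-mean-square fit
                table.append((a, b, c))
            return table

        coeffs = fit_reciprocal_coefficients()
        a, b, c = coeffs[128]
        print(a * 0.001**2 + b * 0.001 + c, 1.0 / 1.501)          # close agreement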
  • a table shows a data flow for the floating point division operation.
  • the data flow is sequenced by control logic such as microcode and a sequencer, or by logic circuits.
  • in the first cycle, the value of x² is calculated and the coefficients A, B, and C are accessed from the ROM lookup table.
  • the result of the 16-bit by 16-bit multiplication is in the final binary form.
  • coefficient A is multiplied by x² and the coefficient B is multiplied by x.
  • the Bx result is in a carry-save format.
  • the value 1/X is approximated by adding the values Ax², Bx, and C using the 28-bit adder.
  • a prerounded result of Y/X is determined with the result in a carry-save format.
  • the division result is rounded by precalculating the prerounded value.
  • a suitable rounded value is selected.
  • the sign of the result is obtained by performing an exclusive-OR operation on the sign of the value X and the value Y.
  • the exponent of the result is computed by subtracting the exponent of X from the exponent of Y.
  • the exponent of the result may have to be decremented by one if the mantissa of Y is less than the mantissa of X.
  • the bias of the exponents is taken into account while subtracting the exponent.
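  • A minimal sketch of the sign and exponent handling just described (Python, not part of the patent text; special cases such as zeros, infinities, and denormals are deliberately ignored, and the names are illustrative):

        BIAS = 127

        def quotient_sign_and_exponent(sign_y, exp_y, mant_y, sign_x, exp_x, mant_x):
            # The sign of Y/X is the exclusive-OR of the operand signs.
            sign = sign_y ^ sign_x
            # Subtracting biased exponents removes the bias once, so it is re-added.
            exponent = exp_y - exp_x + BIAS
            # If mantissa(Y) < mantissa(X), the quotient mantissa falls below 1.0
            # and renormalizing it decrements the exponent by one.
            if mant_y < mant_x:
                exponent -= 1
            return sign, exponent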
  • a table shows different cases for rounding to the nearest even scheme.
  • the rounding operation takes a number regarded as infinitely precise and modifies the number to fit the format of the destination.
  • the IEEE standard for binary floating point arithmetic defines four possible rounding schemes: round to the nearest, round towards +infinity, round towards -infinity, and round to 0.
  • the most difficult rounding scheme to implement is round to nearest.
  • the representable value nearest to the infinitely precise result is delivered and if the two nearest representable values are equally near, the result with a least significant bit of zero is delivered.
  • the described round to nearest technique attains the smallest error and therefore produces a best numerical result.
  • the described round to nearest mode utilizes an extra addition and the carry may propagate fully across the number.
  • FIGURE 6 shows different cases of rounding.
  • the result obtained by a multiplication of the approximation of 1/X and Y may have an error of 1 in the least significant bit.
  • to approach a correct result, for example Z, the pre-rounded result is truncated at a round bit position and incremented at the round bit position; the incremented quotient QU is then within a least-significant bit (LSB) of the correct solution.
  • the incremented quotient multiplied by the divisor is compared with the dividend by subtraction. If the remainder is negative, the quotient Q1 is more than half an LSB below the correct value, and is thus incremented. If the remainder is positive, the quotient Q1 is less than half an LSB below the correct answer, and the quotient is not to be incremented. If the remainder is equal to zero, the final value is selected based on the LSB of Q1. To compute the correct result in the case of other rounding modes, the quotient is merely incremented or truncated depending on the operation code.
  • Pseudocode that describes an example of a suitable technique for rounding to the nearest number is, as follows:
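  • One way the announced pseudocode could read is sketched below in Python (an illustration, not the patent's own listing; Q1, QU, and the parameter names follow the description above, and the remainder is formed as QU·X − Y):

        def round_to_nearest_even(prerounded, lsb, divisor_x, dividend_y):
            # Q1: the pre-rounded quotient truncated at the LSB of the result format.
            q1 = prerounded - (prerounded % lsb)
            # QU: Q1 incremented at the round bit position (half an LSB), so that
            # QU lies within one LSB of the exact quotient Y/X.
            qu = q1 + lsb / 2
            # Compare the incremented quotient times the divisor with the dividend.
            remainder = qu * divisor_x - dividend_y
            if remainder < 0:
                return q1 + lsb        # Q1 more than half an LSB low: increment.
            if remainder > 0:
                return q1              # Q1 less than half an LSB low: keep it.
            # Exact tie: deliver the candidate whose least significant bit is zero.
            return q1 if (q1 / lsb) % 2 == 0 else q1 + lsb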
  • FIGURE 7 a schematic block diagram illustrates a single integrated circuit chip implementation of a processor 700 that includes a memory interface 702, a geometry decompressor 704, two media processing units 710 and 712, a shared data cache 706, and several interface controllers.
  • the interface controllers support an interactive graphics environment with real-time constraints by integrating fundamental components of memory, graphics, and input output bridge functionality on a single die.
  • the components are mutually linked and closely linked to the processor core with high bandwidth, low-latency communication channels to manage multiple high-bandwidth data streams efficiently and with a low response time.
  • the interface controllers include an UltraPort Architecture (UPA) controller 716 and a peripheral component interconnect (PCI) controller 720.
  • the illustrative memory interface 702 is a direct Rambus dynamic RAM (DRDRAM) controller.
  • the shared data cache 706 is a dual-ported storage that is shared among the media processing units 710 and 712 with one port allocated to each media processing unit.
  • the data cache 706 is four-way set associative, follows a write-back protocol, and supports hits in the fill buffer (not shown).
  • the data cache 706 allows fast data sharing and eliminates the need for a complex, error-prone cache coherency protocol between the media processing units 710 and 712.
  • the UPA controller 716 is a custom interface that attains a suitable balance between high-performance computational and graphic subsystems.
  • the UPA is a cache-coherent, processor-memory interconnect.
  • the UPA attains several advantageous characteristics including a scaleable bandwidth through support of multiple bused interconnects for data and addresses, packets that are switched for improved bus utilization, higher bandwidth, and precise interrupt processing.
  • the UPA performs low latency memory accesses with high throughput paths to memory.
  • the UPA includes a buffered cross-bar memory interface for increased bandwidth and improved scaleability.
  • the UPA supports high-performance graphics with two-cycle single-word writes on the 64-bit UPA interconnect.
  • the UPA interconnect architecture utilizes point-to-point packet switched messages from a centralized system controller to maintain cache coherence. Packet switching improves bus bandwidth utilization by removing the latencies commonly associated with transaction-based designs.
  • the PCI controller 720 is used as the primary system I/O interface for connecting standard, high-volume, low-cost peripheral devices, although other standard interfaces may also be used.
  • the PCI bus effectively transfers data among high bandwidth peripherals and low bandwidth peripherals, such as CD-ROM players, DVD players, and digital cameras.
  • Two media processing units 710 and 712 are included in a single integrated circuit chip to support an execution environment exploiting thread level parallelism in which two independent threads can execute simultaneously.
  • the threads may arise from any sources such as the same application, different applications, the operating system, or the runtime environment.
  • Parallelism is exploited at the thread level since parallelism is rare beyond four, or even two, instructions per cycle in general purpose code.
  • the illustrative processor 700 is an eight-wide machine with eight execution units for executing instructions.
  • a typical "general-purpose" processing code has an instruction level parallelism of about two so that, on average, most (about six) of the eight execution units would be idle at any time.
  • the illustrative processor 700 employs thread level parallelism and operates on two independent threads, possibly attaining twice the performance of a processor having the same resources and clock rate but utilizing traditional non-thread parallelism.
  • Thread level parallelism is particularly useful for Java™ applications which are bound to have multiple threads of execution.
  • Java™ methods including "suspend", "resume", "sleep", and the like include effective support for threaded program code.
  • Java™ class libraries are thread-safe to promote parallelism.
  • the thread model of the processor 700 supports a dynamic compiler which runs as a separate thread using one media processing unit 710 while the second media processing unit 712 is used by the current application.
  • the compiler applies optimizations based on "on-the-fly" profile feedback information while dynamically modifying the executing code to improve execution on each subsequent run. For example, a "garbage collector” may be executed on a first media processing unit 710, copying objects or gathering pointer information, while the application is executing on the other media processing unit 712.
  • although the processor 700 shown in FIGURE 7 includes two processing units on an integrated circuit chip, the architecture is highly scaleable so that one to several closely-coupled processors may be formed in a message-based coherent architecture and resident on the same die to process multiple threads of execution.
  • in the processor 700, a limitation on the number of processors formed on a single die thus arises from capacity constraints of integrated circuit technology rather than from architectural constraints relating to the interactions and interconnections between processors.
  • the media processing units 710 and 712 each include an instruction cache 810, an instruction aligner 812, an instruction buffer 814, a pipeline control unit 826, a split register file 816, a plurality of execution units, and a load/store unit 818.
  • the media processing units 710 and 712 use a plurality of execution units for executing instructions.
  • the execution units for a media processing unit 710 include three media functional units (MFU) 820 and one general functional unit (GFU) 822.
  • the media functional units 820 are multiple single-instruction-multiple-datapath (MSIMD) media functional units.
  • Each of the media functional units 820 is capable of processing parallel 16-bit components.
  • Various parallel 16-bit operations supply the single-instruction-multiple-datapath capability for the processor 700 including add, multiply-add, shift, compare, and the like.
  • the media functional units 820 operate in combination as tightly-coupled digital signal processors (DSPs).
  • Each media functional unit 820 has a separate and individual sub-instruction stream, but all three media functional units 820 execute synchronously so that the subinstructions progress lock-step through pipeline stages.
  • the general functional unit 822 is a RISC processor capable of executing arithmetic logic unit (ALU) operations, loads and stores, branches, and various specialized and esoteric functions such as parallel power operations, reciprocal square root operations, and many others.
  • the general functional unit 822 supports less common parallel operations such as the parallel reciprocal square root instruction.
  • the illustrative instruction cache 810 has a 16 Kbyte capacity and includes hardware support to maintain coherence, allowing dynamic optimizations through self-modifying code.
  • Software is used to indicate that the instruction storage is being modified when modifications occur.
  • the 16K capacity is suitable for performing graphic loops, other multimedia tasks or processes, and general-purpose Java™ code.
  • Coherency is maintained by hardware that supports write-through, non-allocating caching.
  • Self-modifying code is supported through explicit use of "store-to-instruction-space" instructions store2i.
  • Software uses the store2i instruction to maintain coherency with the instruction cache 810 so that the instruction caches 810 do not have to be snooped on every single store operation issued by the media processing unit 710.
  • the pipeline control unit 826 is connected between the instruction buffer 814 and the functional units and schedules the transfer of instructions to the functional units.
  • the pipeline control unit 826 also receives status signals from the functional units and the load/store unit 818 and uses the status signals to perform several control functions.
  • the pipeline control unit 826 maintains a scoreboard and generates stalls and bypass controls.
  • the pipeline control unit 826 also generates traps and maintains special registers.
  • Each media processing unit 710 and 712 includes a split register file 816, a single logical register file including 128 thirty-two bit registers.
  • the split register file 816 is split into a plurality of register file segments 824 to form a multi-ported structure that is replicated to reduce the integrated circuit die area and to reduce access time.
  • a separate register file segment 824 is allocated to each of the media functional units 820 and the general functional unit 822.
  • each register file segment 824 has 128 32-bit registers.
  • the first 96 registers (0-95) in the register file segment 824 are global registers. All functional units can write to the 96 global registers.
  • the global registers are coherent across all functional units (MFU and GFU) so that any write operation to a global register by any functional unit is broadcast to all register file segments 824.
  • Registers 96-127 in the register file segments 824 are local registers. Local registers allocated to a functional unit are not accessible or "visible" to other functional units.
  • the media processing units 710 and 712 are highly structured computation blocks that execute software-scheduled data computation operations with fixed, deterministic and relatively short instruction latencies, operational characteristics yielding simplification in both function and cycle time.
  • the operational characteristics support multiple instruction issue through a pragmatic very large instruction word (VLIW) approach that avoids hardware interlocks to account for software that does not schedule operations properly. Such hardware interlocks are typically complex, error-prone, and create multiple critical paths.
  • a VLIW instruction word always includes one instruction that executes in the general functional unit (GFU) 822 and from zero to three instructions that execute in the media functional units (MFU) 820.
  • an MFU instruction field within the VLIW instruction word includes an operation code (opcode) field, three source register (or immediate) fields, and one destination register field.
  • Instructions are executed in-order in the processor 700 but loads can finish out-of-order with respect to other instructions and with respect to other loads, allowing loads to be moved up in the instruction stream so that data can be streamed from main memory.
  • the execution model eliminates the usage and overhead resources of an instruction window, reservation stations, a re-order buffer, or other blocks for handling instruction ordering. Elimination of the instruction ordering structures and overhead resources is highly advantageous since the eliminated blocks typically consume a large portion of an integrated circuit die. For example, the eliminated blocks consume about 30% of the die area of a Pentium II processor.
  • the media processing units 710 and 712 are high-performance but simplified with respect to both compilation and execution.
  • the media processing units 710 and 712 are most generally classified as a simple 2-scalar execution engine with full bypassing and hardware interlocks on load operations.
  • the instructions include loads, stores, arithmetic and logic (ALU) instructions, and branch instructions so that scheduling for the processor 700 is essentially equivalent to scheduling for a simple 2-scalar execution engine for each of the two media processing units 710 and 712.
  • the processor 700 supports full bypasses between the first two execution units within the media processing unit 710 and 712 and has a scoreboard in the general functional unit 822 for load operations so that the compiler does not need to handle nondeterministic latencies due to cache misses.
  • the processor 700 scoreboards long latency operations that are executed in the general functional unit 822, for example a reciprocal square-root operation, to simplify scheduling across execution units.
  • the scoreboard (not shown) operates by tracking a record of an instruction packet or group from the time the instruction enters a functional unit until the instruction is finished and the result becomes available.
  • a VLIW instruction packet contains one GFU instruction and from zero to three MFU instructions. The source and destination registers of all instructions in an incoming VLIW instruction packet are checked against the scoreboard.
  • any true dependencies or output dependencies stall the entire packet until the result is ready.
  • Use of a scoreboarded result as an operand causes instruction issue to stall for a sufficient number of cycles to allow the result to become available. If the referencing instruction that provokes the stall executes on the general functional unit 822 or the first media functional unit 820, then the stall only endures until the result is available for intra-unit bypass. For the case of a load instruction that hits in the data cache 706, the stall may last only one cycle. If the referencing instruction is on the second or third media functional units 820, then the stall endures until the result reaches the writeback stage in the pipeline where the result is bypassed in transmission to the split register file 816.
  • the scoreboard automatically manages load delays that occur during a load hit.
  • all loads enter the scoreboard to simplify software scheduling and eliminate NOPs in the instruction stream.
  • the scoreboard is used to manage most interlocks between the general functional unit 822 and the media functional units 820. All loads and non-pipelined long-latency operations of the general functional unit 822 are scoreboarded. The long-latency operations include division idiv,fdiv instructions, reciprocal square root frecsqrt, precsqrt instructions, and power ppower instructions. None of the results of the media functional units 820 is scoreboarded. Non-scoreboarded results are available to subsequent operations on the functional unit that produces the results following the latency of the instruction.
  • the illustrative processor 700 has a rendering rate of over fifty million triangles per second without accounting for operating system overhead. Therefore, data feeding specifications of the processor 700 are far beyond the capabilities of cost-effective memory systems.
  • Sufficient data bandwidth is achieved by rendering of compressed geometry using the geometry decompressor 704, an on-chip real-time geometry decompression engine. Data geometry is stored in main memory in a compressed format. At render time, the data geometry is fetched and decompressed in real-time on the integrated circuit of the processor 700.
  • the geometry decompressor 704 advantageously saves memory space and memory transfer bandwidth.
  • the compressed geometry uses an optimized generalized mesh structure that explicitly calls out most shared vertices between triangles, allowing the processor 700 to transform and light most vertices only once.
  • the triangle throughput of the transform-and-light stage is increased by a factor of four or more over the throughput for isolated triangles.
  • multiple vertices are operated upon in parallel so that the utilization rate of resources is high, achieving effective spatial software pipelining.
  • operations are overlapped in time by operating on several vertices simultaneously, rather than overlapping several loop iterations in time.
  • high trip count loops are software-pipelined so that most media functional units 820 are fully utilized.
  • a schematic block diagram shows a logical view of the register file 816 and functional units in the processor 700.
  • the physical implementation of the core processor 700 is simplified by replicating a single functional unit to form the three media processing units 710.
  • the media processing units 710 include circuits that execute various arithmetic and logical operations including general-purpose code, graphics code, and video-image-speech (VIS) processing.
  • VIS processing includes video processing, image processing, digital signal processing (DSP) loops, speech processing, and voice recognition algorithms, for example.
  • a media processing unit 710 includes a 32-bit floating-point multiplier-adder to perform signal transform operations, clipping, facedness operations, sorting, triangle set-up operations, and the like.
  • the media processing unit 710 similarly includes a 16X16-bit integer multiplier-adder to perform operations such as lighting, transform normal lighting, computation and normalization of vertex view vectors, and specular light source operations.
  • the media processing unit 710 supports clipping operations and 1/square root operations for lighting tasks, and reciprocal operations for screen space dividing, clipping, set-up, and the like.
  • the media processing unit 710 supports 16/32-bit integer add operations, 16X16-bit integer multiplication operations, parallel shifting, and pack, unpack, and merge operations.
  • the media processing unit 710 supports 32-bit integer addition and subtraction, and 32-bit shift operations.
  • the media processing unit 710 supports a group load operation for unit stride code, a bit extract operation for alignment and multimedia functionality, a pdist operation for data compression and averaging, and a byte shuffle operation for multimedia functionality.
  • the media processing unit 710 supports the operations by combining functionality and forming a plurality of media functional units 820 and a general functional unit 822.
  • the media functional units 820 support a 32-bit floating-point multiply and add operation, a 16X16-bit integer multiplication and addition operation, and an 8/16/32-bit parallel add operation.
  • the media functional units 820 also support a clip operation, a bit extract operation, a pdist operation, and a byte shuffle operation.
  • Other functional units that are in some way incompatible with the media functional unit 820 or consume too much die area for a replicated structure, are included in the general functional unit 822.
  • the general functional unit 822 therefore includes a load/store unit, a reciprocal unit, a 1/square-root unit, a pack, unpack and merge unit, a normal and parallel shifter, and a 32-bit adder.
  • Computation instructions perform the real work of the processor 700 while load and store instructions may be considered mere overhead for supplying and storing computational data to and from the computational functional units.
  • the processor 700 supports group load (ldg) and store long (stl) instructions.
  • a single load group loads eight consecutive 32-bit words into the split register file 816.
  • a single store long sends the contents of two 32-bit registers to a next level of memory hierarchy.
  • the group load and store long instructions are used to transfer data among the media processing units 710, the UPA controller 716, and the geometry decompressor 704.
  • a simplified schematic timing diagram illustrates timing of the processor pipeline 7000.
  • the pipeline 7000 includes nine stages including three initiating stages, a plurality of execution phases, and two terminating stages.
  • the three initiating stages are optimized to include only those operations necessary for decoding instructions so that jump and call instructions, which are pervasive in the Java™ language, execute quickly. Optimization of the initiating stages advantageously facilitates branch prediction since branches, jumps, and calls execute quickly and do not introduce many bubbles.
  • the first of the initiating stages is a fetch stage 1010 during which the processor 700 fetches instructions from the 16Kbyte two-way set-associative instruction cache 810.
  • the fetched instructions are aligned in the instruction aligner 812 and forwarded to the instruction buffer 814 in an align stage 1012, a second stage of the initiating stages.
  • the aligning operation properly positions the instructions for storage in a particular segment of the four register file segments and for execution in an associated functional unit of the three media functional units 820 and one general functional unit 822.
  • in a decoding stage 1014 of the initiating stages, the fetched and aligned VLIW instruction packet is decoded and the scoreboard (not shown) is read and updated in parallel.
  • the four register file segments each holds either floating-point data or integer data.
  • a single execution stage 1022 is performed for critical single-cycle operations 1020 such as add, logical, compare, and clip instructions.
  • Address-cycle type operations 1030, such as load instructions, are executed in two execution cycles including an address computation stage 1032 followed by a single-cycle cache access 1034.
  • General arithmetic operations 1040, such as floating-point and integer multiply and addition instructions, are executed in four stages X1 1042, X2 1044, X3 1046, and X4 1048.
  • Extended operations 1050 are long instructions such as floating-point divides, reciprocal square roots, 16-bit fixed-point calculations, 32-bit floating-point calculations, and parallel power instructions, that last for six cycles, but are not pipelined.
  • the two terminating stages include a trap-handling stage 1060 and a write-back stage 1062 during which result data is written-back to the split register file 816.
  • Computational instructions have fundamental importance in defining the architecture and the instruction set of the processor 700. Computational instructions are only semantically separated into integer and floating-point categories since the categories operate on the same set of registers.
  • the general functional unit 822 executes a fixed-point power computation instruction ppower.
  • the power instruction has the form ppower r[rs1],r[rs2],r[rd] and computes "r[rs1]**r[rs2]" where each of the sources is operated upon as a pair of independent 16-bit S2.13 format fixed-point quantities.
  • the result is a pair of independent 16-bit S2.13 format fixed-point powers placed in the register r[rd].
  • Zero to any power is defined to give a zero result.
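  • Reading S2.13 as a 16-bit two's-complement format with two integer bits and thirteen fraction bits (an interpretation of the notation, not spelled out in this extract), a conversion helper might look like the following Python sketch:

        FRAC_BITS = 13

        def to_s2_13(value):
            # Encode a real value as a 16-bit S2.13 word, saturating to the
            # representable range [-4.0, 4.0 - 2**-13].
            scaled = int(round(value * (1 << FRAC_BITS)))
            scaled = max(-(1 << 15), min((1 << 15) - 1, scaled))
            return scaled & 0xFFFF

        def from_s2_13(word):
            # Decode a 16-bit S2.13 word back to a real value.
            if word & 0x8000:
                word -= 1 << 16
            return word / (1 << FRAC_BITS)

        print(from_s2_13(to_s2_13(1.5)))   # 1.5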
  • the general functional unit 822 includes functional units that execute a floating-point division fdiv instruction, a floating-point reciprocal frecip instruction, a floating-point square root fsqrt instruction, and a floating-point reciprocal square root frecsqrt instruction, each for single-precision numbers.
  • the floating-point division instruction has the form fdiv rs1,rs2,rd and computes a single-precision floating-point division "r[rs1]/r[rs2]" and delivers the result in r[rd].
  • the floating-point reciprocal instruction has the form frecip rs1,rd and executes a single-precision floating-point reciprocal with a latency of eight cycles.
  • the floating-point square root instruction has the form fsqrt rs1,rd and executes a single-precision floating-point square root operation.
  • the floating-point reciprocal square root instruction has the form frecsqrt rs1,rd and executes a single-precision floating-point reciprocal of the square root operation on the quantity in r[rs1] and places the result in r[rd].
  • the general functional unit 822 also supports a fixed-point parallel reciprocal square root precsqrt instruction.
  • the fixed-point reciprocal square root instruction has the form precsqrt rs1,rd.
  • Precsqrt computes a pair of S2.13 format fixed-point reciprocal square roots of the pair of S2.13 format values in register r[rs1]. Results are delivered in register r[rd]. The result for a source operand that is less than or equal to zero is undefined.
  • the general functional unit 822 executes an integer divide idiv instruction that computes either "r[rs1]/r[rs2]" or "r[rs1]/sign_ext(imm14)" and places the result in r[rd].

Abstract

A computation unit computes a division operation Y/X by determining the value of a divisor reciprocal 1/X and multiplying the reciprocal by a numerator Y. The reciprocal 1/X value is determined using a quadratic approximation having the form Ax²+Bx+C, where coefficients A, B, and C are constants that are stored in a storage or memory such as a read-only memory (ROM). The bit length of the coefficients determines the error in a final result. Storage size is reduced through use of 'least mean square error' techniques in the determination of the coefficients that are stored in the coefficient storage. During the generation of partial products x², Ax², and Bx, the process of rounding is eliminated, thereby reducing the computational logic to implement the division functionality.

Description

DIVISION UNIT IN A PROCESSOR USING A PIECE-WISE QUADRATIC APPROXIMATION TECHNIQUE
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to computational and calculation functional units of computers, controllers and processors. More specifically, the present invention relates to functional units that perform division operations.
Description of the Related Art
Computer systems have evolved into versatile systems with a vast range of utility including demanding applications such as multimedia, network communications of a large data bandwidth, signal processing, and the like. Accordingly, general-purpose computers are called upon to rapidly handle large volumes of data. Much of the data handling, particularly for video playback, voice recognition, speech processing, three-dimensional graphics, and the like, involves computations that must be executed quickly and with a short latency.
One technique for executing computations rapidly while handling the large data volumes is to include multiple computation paths in a processor. Each of the data paths includes hardware for performing computations so that multiple computations may be performed in parallel. However, including multiple computation units greatly increases the size of the integrated circuits implementing the processor. What are needed in a computation functional unit are computation techniques and computation integrated circuits that operate with high speed while consuming only a small amount of integrated circuit area.
Execution time in processors and computers is naturally enhanced through high speed data computations, therefore the computer industry constantly strives to improve the speed efficiency of mathematical function processing execution units. Computational operations are typically performed through iterative processing techniques, look-up of information in large-capacity tables, or a combination of table accesses and iterative processing. In conventional systems, a mathematical function of one or more variables is executed by using a part of a value relating to a particular variable as an address to retrieve either an initial value of a function or a numeric value used in the computation from a large-capacity table information storage unit. A high-speed computation is executed by operations using the retrieved value. Table look-up techniques advantageously increase the execution speed of computational functional units. However, the increase in speed gained through table accessing is achieved at the expense of a large consumption of integrated circuit area and power.
A division instruction is highly burdensome and difficult to implement in silicon, typically utilizing many clock cycles and consuming a large integrated circuit area.
What is needed is a method for implementing division in a computing circuit that is simple, fast, and reduces the amount of computation circuitry.
SUMMARY OF THE INVENTION
A computation unit computes a division operation Y/X by determining the value of a divisor reciprocal 1/X and multiplying the reciprocal by a numerator Y. The reciprocal 1/X value is determined using a quadratic approximation having a form:
Ax² + Bx + C, where coefficients A, B, and C are constants that are stored in a storage or memory such as a read-only memory (ROM). The bit length of the coefficients determines the error in a final result. Storage size is reduced through use of "least mean square error" techniques in the determination of the coefficients that are stored in the coefficient storage. During the generation of partial products x², Ax², and Bx, the process of rounding is eliminated, thereby reducing the computational logic to implement the division functionality.
A method of computing a floating point division operation uses a piece-wise quadratic approximation to determine a value 1/X where X is a floating point number having a numerical format including a sign bit, an exponent field, and a mantissa field. A floating point division Y/X is executed by computing the value 1/X and multiplying the result by a value Y. The value 1/X is computed in a computing device using a piece-wise quadratic approximation in the form: 1/X = Ax² + Bx + C.
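As an informal illustration of the lookup-and-evaluate flow (a Python sketch, not the patent's hardware datapath: it works in double precision, ignores special operands, and reuses the fit_reciprocal_coefficients helper sketched earlier), the reciprocal of a positive normal single-precision value can be approximated as follows:

    import struct

    def approx_reciprocal(x_value, coeffs):
        # coeffs: 256 (A, B, C) tuples, e.g. from fit_reciprocal_coefficients().
        bits = struct.unpack('>I', struct.pack('>f', x_value))[0]
        exponent = (bits >> 23) & 0xFF
        mantissa = bits & 0x7FFFFF
        index = mantissa >> 15                # leading 8 explicit mantissa bits
        x = (mantissa & 0x7FFF) / (1 << 23)   # low-order 15 bits at their natural weight
        a, b, c = coeffs[index]
        recip_mantissa = a * x * x + b * x + c     # approximates 1/(1.m), in (0.5, 1]
        # The exponent of X is negated and the bias re-applied; the quadratic above
        # already accounts for the reciprocal of the 1.m significand.
        return recip_mantissa * 2.0 ** (127 - exponent)

    # coeffs = fit_reciprocal_coefficients()        # see the earlier sketch
    # print(approx_reciprocal(3.0, coeffs), 1.0 / 3.0)

A division Y/X then follows by multiplying the approximated reciprocal by Y, as the surrounding text describes.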
The value x is defined as a plurality of lower order bits of the mantissa. Coefficients A, B, and C are derived for the division operation to reduce the least mean square error using a least squares approximation of a plurality of equally-spaced points within an interval. In one embodiment, an interval includes 256 equally-spaced points. The coefficients are stored in a storage and accessed during execution of the division computation instruction.
In some embodiments, a lookup table in storage is indexed using the leading or higher order bits of the mantissa. Since the most significant bit of the mantissa is always 1, some embodiments use a plurality of higher order bits but not including the most significant bit to index into the lookup table storage.
The method produces a "pre-rounded" Y/X result that is rounded to the nearest value. The pre-rounded result is truncated at a round bit position and incremented at the round bit position to generate an incremented quotient that is within one LSB of a correct solution. The incremented quotient multiplied by the divisor is compared with the dividend by subtraction. If the remainder is negative, then the pre-rounded result is more than half an LSB below the correct value and is incremented. If the remainder is positive, then the prerounded result is less than half an LSB below the correct value and is not incremented. If the remainder is zero, the result is selected based on the LSB of the pre-rounded result.
BRIEF DESCRIPTION OF THE DRAWINGS
The features of the described embodiments are specifically set forth in the appended claims. However, embodiments of the invention relating to both structure and method of operation, may best be understood by referring to the following description and accompanying drawings.
FIGURES 1A and 1B are respectively a schematic block diagram showing an embodiment of a general functional unit and a simplified schematic timing diagram showing timing of a general functional unit pipeline.
FIGURE 2 is a schematic block diagram that illustrates an embodiment of a long-latency pipeline used in the general functional unit.
FIGURE 3 is a graphic that shows the format of a single-precision floating point number. FIGURES 4A and 4B are graphs showing exponential functions that describe a technique utilized to perform a single-precision floating-point division operation.
FIGURE 5 is a table showing a data flow for the floating point division operation.
FIGURE 6 is a table showing different cases for rounding to the nearest even scheme.
FIGURE 7 is a schematic block diagram illustrating a single integrated circuit chip implementation of a processor in accordance with an embodiment of the present invention.
FIGURE 8 is a schematic block diagram showing the core of the processor.
FIGURE 9 is a schematic block diagram that shows a logical view of the register file and functional units in the processor.
FIGURE 10 is a schematic timing diagram that illustrates timing of the processor pipeline.
The use of the same reference symbols in different drawings indicates similar or identical items.
DESCRIPTION OF THE EMBODIMENT(S)
Referring to FIGURES 1A and 1B respectively, a schematic block diagram shows an embodiment of a general functional unit 822 (illustrated more generally as part of a processor in FIGURE 8), a simplified schematic timing diagram illustrating timing of general functional unit pipelines 100, and a bypass diagram showing possible bypasses for the general functional unit 822. The general functional unit 822 supports instructions that execute in several different pipelines. Instructions include single-cycle ALU operations, four-cycle getir instructions, and five-cycle setir instructions. Long-latency instructions are not fully pipelined. The general functional unit 822 supports six-cycle and 34-cycle long operations and includes a dedicated pipeline for load/store operations.
The general functional unit 822 and a pipeline control unit 826 (also shown generally in FIGURE 8), in combination, include four pipelines, Gpipel 150, Gpipe2 152, Gpipe3 154, and a load/store pipeline 156. The load/store pipeline 156 and the Gpipel 150 are included in the pipeline control unit 826. The Gpipe2 152 and Gpipe3 154 are located in the general functional unit 822. The general functional unit 822 includes a controller 160 that supplies control signals for the pipelines Gpipel 150, Gpipe2 152, and Gpipe3 154. The pipelines include execution stages (En) and annex stages (An).
Referring to FIGURE 1B, the general functional unit pipelines 100 include a load pipeline 110, a 1-cycle pipeline 112, a 6-cycle pipeline 114, and a 34-cycle pipeline 116. Pipeline stages include execution stages (E and En), annex stages (An), trap-handling stages (T), and write-back stages (WB). Stages An and En are prioritized with smaller priority numbers n having a higher priority.
The processor 700 supports precise traps. Precise exceptions are detected by E4/A3 stages of media functional unit and general functional unit operations. One-cycle operations are staged in the annex and trap stages (A1, A2, A3, T) until all exceptions in one VLIW group are detected. Traps are generated in the trap-generating stages (T). When the general functional unit 822 detects a trap in a VLIW group, all instructions in the VLIW group are canceled.
When a long-latency operation is in the final execute stage (E6 stage for the 6-cycle pipeline 114 or E34 stage for the 34-cycle pipeline 116), and a valid instruction is under execution in the A3-stage of the annex, then the long-latency instruction is held in a register, called an A4-stage register, inside the annex and is broadcast to the register file segments 824 only when the VLIW group under execution does not include a one-cycle GFU instruction that is to be broadcast.
Results of long-latency instructions are bypassed to more recently issued GFU and MFU instructions as soon as the results are available. For example, results of a long-latency instruction are bypassed from the E6-stage of a 6-cycle instruction to any GFU and MFU instruction in the decoding (D) stage. If a long-latency instruction is stalled by another instruction in the VLIW group, results of the stalled long-latency instruction are bypassed from the annex (A4) stage to all instructions in the general functional unit 822 and all media functional units 820 in the decoding (D) stage.
Data from the T-stage of the pipelines are broadcast to all the register file segments 824, which latch the data in the writeback (WB) stage before writing the data to the storage cells.
Referring to FIGURE 2, a schematic block diagram illustrates an embodiment of a long-latency pipeline 120 used in the general functional unit (GFU) 822. The long-latency pipeline 120 executes six-cycle instructions. In the illustrative embodiment, the six-cycle instructions include a single-precision floating point division (fdiv) instruction, a single-precision floating point reciprocal square root (frecsqrt) instruction, a fixed-point power computation (ppower) instruction, and a fixed-point reciprocal square root (precsqrt) instruction.
The single-precision floating point division (fdiv) instruction has the form: fdiv rs1, rs2, rd where rs1 and rs2 designate a numerator source operand and a denominator source operand, respectively. The rd operand designates a destination register for holding the result.
The single-precision floating point reciprocal square root (frecsqrt) instruction has the form: frecsqrt rs1, rd where rs1 designates a source operand and the rd operand identifies the destination register that holds the reciprocal square root result.
The fixed-point power computation (ppower) instruction has the form: ppower rs1, rs2, rd where rs1 and rs2 designate source operands and rd identifies a destination register operand. The ppower instruction computes rs1**rs2 for each half of the source registers.
The fixed-point reciprocal square root (precsqrt) instruction has the form: precsqrt rs1, rd where rs1 designates a source operand and the rd operand identifies the destination register that holds the reciprocal square root result. The precsqrt instruction computes the reciprocal square root for each half of rs1.
The illustrative long-latency pipeline 120 has eight megacell circuits including a 16-bit normalization megacell 210, a 24-bit compare megacell 212, a 16-bit by 16-bit multiplier megacell 214, an exponent add megacell 216, a 16-bit barrel shifter megacell 218, a 25-by-24 multiplier megacell 220, a compressor and adder megacell 222, and a multiplexer and incrementer megacell 224. The 16-bit normalization megacell 210 contains a leading zero detector and a shifter that shifts a sixteen bit value according to the status of the leading zero detection. The 16-bit normalization megacell 210 also includes two 4-bit registers that store the shift count values.
The 24-bit compare megacell 212 compares two 24-bit mantissa values. The 24-bit compare megacell 212 generates only equal and less-than signals.
The 16-bit by 16-bit multiplier megacell 214 multiplies two 16-bit values. The actual datapath of the 16-bit by 16-bit multiplier megacell 214 is 18 bit cells wide and includes eight 18-bit rows. The 16-bit by 16-bit multiplier megacell 214 is a radix 4 booth recoded multiplier that generates an output signal in the form of a 32-bit product in binary form. The booth recoders in the 16-bit by 16-bit multiplier megacell 214 are recoded off the binary format in contrast to a carry-save format.
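By way of illustration only, and not as a description of the megacell circuitry, the following Python sketch models radix-4 Booth recoding of a 16-bit multiplier in software; the function names and the treatment of the multiplier as a signed 16-bit quantity are assumptions made for the sketch. Recoding a 16-bit multiplier produces eight signed digits, which is consistent with the eight partial-product rows noted above.

def booth_radix4_digits(multiplier, bits=16):
    # Scan overlapping 3-bit groups of the multiplier (an implicit 0 sits below
    # the LSB) and map each group to a signed digit in {-2, -1, 0, +1, +2}.
    # Values with the top bit set are interpreted as negative (signed 16-bit).
    padded = multiplier << 1                    # append the implicit 0 below the LSB
    digits = []
    for i in range(0, bits, 2):
        group = (padded >> i) & 0b111
        high, mid, low = (group >> 2) & 1, (group >> 1) & 1, group & 1
        digits.append(-2 * high + mid + low)    # value of the recoded digit
    return digits

def booth_multiply(a, b, bits=16):
    # Sum the partial products a * digit * 4**i implied by the recoded digits.
    return sum(d * a * (4 ** i) for i, d in enumerate(booth_radix4_digits(b, bits)))

assert booth_multiply(12345, 6789) == 12345 * 6789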
The exponent add megacell 216 subtracts the exponent for a floating point divide operation. The exponent add megacell 216 also performs shifting for execution of a square root operation.
The 16-bit barrel shifter megacell 218 is a 16-bit barrel shifter. The 16-bit barrel shifter megacell 218 is a subset of a 32-bit shifter.
The 25-by-24 multiplier megacell 220 is a 25-bit by 24-bit multiplier. The 25-by-24 multiplier megacell 220 has an actual datapath of 27 bit cells with twelve rows of the 27 bit cells. The 25-by-24 multiplier megacell 220 is a radix 4 booth recoded multiplier that generates an output signal in the form of a 28-bit product in a carry-save format. The booth recoders are recoded from the carry-save format in contrast to a binary format.
The compressor and adder megacell 222 includes a 4:2 compressor followed by a 28-bit adder. The 28-bit adder uses a Kogge-Stone algorithm with Ling's modification.
The multiplexer and incrementer megacell 224 produces two 24-bit products, a sum of two 28-bit numbers in the carry-save format and the increment of the sum. The final multiplexer selects a correct answer based on the sign of the result from the compressor and adder megacell 222. The adder of the multiplexer and incrementer megacell 224 uses conditional sum adders.
Referring to FIGURE 3, a graphic shows the format of a single-precision floating point number 300. The single-precision floating point format 300 has three fields including one bit for the sign 302, eight bits for the exponent 304, and 23 bits for the mantissa 306. The sign bit 302 equal to zero designates a positive number. The sign bit 302 equal to one designates a negative number. The value of the exponent 304 ranges from 0 to 255. The bias of the exponent 304 is +127. Of the 256 values in the range 0 to 255, only the values of 0 and 255 are reserved for special values. The maximum positive exponent is +127. The minimum negative exponent is -126. The lower order 23 bits designate the mantissa 306, which is an unsigned fractional number. An implicit value of 1 is included prior to the unsigned fraction. The range of values of the mantissa 306 is from 1.0 to (2 - 2^-23). The mantissa range is defined only for normal numbers.
The value of a floating point number is given by the equation, as follows:
F = (-1)^S * 1.M * 2^(E-127)
for the sign bit 302 (S), the mantissa 306 (M), and the exponent 304 (E).
Several special cases are represented differently from the general equation, as follows:
(1) If the exponent 304 is 255 and the mantissa 306 is zero then the floating point number represents +/- infinity where the sign of infinity is defined by the sign bit 302.
(2) If the exponent 304 is equal to 255 and M is not equal to zero, then the floating point number is defined as not-a-number (NaN).
(3) If the exponent 304 is equal to zero and the mantissa 306 is equal to zero then the floating point number represents +/- 0. The sign of zero is defined by the sign bit 302.
(4) If the exponent 304 is equal to zero and the mantissa 306 is not equal to zero, then the floating point number represents a denormal number. The value of the denormal number is given by the equation, as follows:
F = (-1)^S * 0.M * 2^(E-126)
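Purely for illustration, the following Python sketch decodes a 32-bit pattern according to the field layout and special cases described above; the function name is an assumption and the sketch is not part of the disclosed hardware.

def decode_float32(bits):
    sign = (bits >> 31) & 0x1            # 1-bit sign 302
    exponent = (bits >> 23) & 0xFF       # 8-bit exponent 304, bias +127
    mantissa = bits & 0x7FFFFF           # 23-bit unsigned fraction 306
    if exponent == 255:
        if mantissa == 0:
            return float('-inf') if sign else float('inf')
        return float('nan')              # exponent 255 with nonzero mantissa: NaN
    if exponent == 0:
        # Zero or denormal: F = (-1)^S * 0.M * 2^(E-126)
        return (-1.0) ** sign * (mantissa / 2**23) * 2.0 ** -126
    # Normal number: F = (-1)^S * 1.M * 2^(E-127)
    return (-1.0) ** sign * (1.0 + mantissa / 2**23) * 2.0 ** (exponent - 127)

assert decode_float32(0x3FC00000) == 1.5     # sign 0, exponent 127, mantissa 0.5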
Referring to FIGURE 4A, a graph of an exponential function is shown that describes a technique utilized to perform a single-precision floating-point division operation. The fdiv instruction is implemented using a "piece-wise quadratic approximation to 1/X" operation. The floating point division (Y/X) is executed by calculating the value of the reciprocal term 1/X and then multiplying the resultant value by Y to obtain the division result. The piece-wise quadratic approximation for computing 1/X uses the equation, as follows: 1/X = Ax^2 + Bx + C, where X is defined as the mantissa of the floating point number and x is the lower-order 15 bits of the mantissa 306. A, B, and C are constant coefficients that are stored in a 256-word ROM. The A, B, and C coefficients are generated using a "generalized-inverse" method for least-squares approximation of 256 equally spaced points within each interval. The ROM lookup table is indexed using the leading 9 bits of the mantissa 306. Since the MSB is always 1, only the next eight bits are used to index into the ROM lookup table.
The coefficients A, B, and C are 11, 18, and 28 bits wide, respectively. The coefficients have sufficient precision to give a final result for single-precision accuracy. Since x has eight leading zeros, the MSB of Ax^2 affects only the 17th or lesser significant bits of the approximation. Similarly, the MSB of Bx affects only the 9th or lesser significant bits of the approximation. Coefficients are computed to minimize the least mean square error.
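As a behavioral sketch only (the coefficient scaling and the table contents here are assumptions rather than the disclosed fixed-point datapath), the table lookup and polynomial evaluation can be modeled in Python as follows, with the 23-bit mantissa field split into an 8-bit index and the 15 lower-order bits x:

def approx_reciprocal_mantissa(mantissa23, table):
    # table is a hypothetical list of 256 (A, B, C) tuples, one per interval,
    # for example as produced by the least-squares sketch given later.
    index = (mantissa23 >> 15) & 0xFF     # leading 8 bits (implicit MSB not stored)
    x = mantissa23 & 0x7FFF               # lower-order 15 bits
    A, B, C = table[index]
    xf = x / 2**23                        # weight of the low-order bits in the mantissa
    # Piece-wise quadratic approximation: 1/X is approximately A*x^2 + B*x + C
    return A * xf * xf + B * xf + C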
A pre-rounded result for the Y/X division is computed by multiplying the approximated value of 1/X and Y. The result is "rounded" correctly in a later cycle. The rounding mode is a "round to nearest" operation.
To determine coefficients Ai, Bi, and Ci for the floating point division function 1/X, for each interval i, where i is an integer from 0 to 255, 256 equally-spaced points are selected. At each of the 256 points, an equation, as follows:
Ai*xj^2 + Bi*xj + Ci = 1/Xj
is formed at xj, for a range of integers j from 0 to 255. The values of xj are the lower-order bits of the mantissa X from x0 = 0x0 to x255 = 0x00000ff, as is shown in FIGURE 4B. Forming the equation at each xj produces 256 equations that are solved for the coefficients Ai, Bi, and Ci using a singular-value decomposition method. Ai, Bi, and Ci are computed for all 256 intervals.
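The construction of the coefficient table can be modeled with the following numpy sketch; the placement and spacing of the sample points and the use of numpy.linalg.lstsq (an SVD-based least-squares solver) are assumptions made for illustration, not the disclosed generalized-inverse computation.

import numpy as np

def fit_interval_coefficients():
    table = []
    for i in range(256):                          # one interval per 8-bit index
        # 256 equally-spaced sample points of the low-order bits within interval i
        x = np.linspace(0, 0x7FFF, 256)
        xf = x / 2**23                            # weight of the low-order bits
        X = 1.0 + i / 256.0 + xf                  # mantissa value 1.M in this interval
        # Solve [xf^2, xf, 1] * [A, B, C]^T = 1/X in the least-squares sense
        M = np.column_stack((xf**2, xf, np.ones_like(xf)))
        (A, B, C), *_ = np.linalg.lstsq(M, 1.0 / X, rcond=None)
        table.append((A, B, C))
    return table

Combined with the evaluation sketch above, table = fit_interval_coefficients() followed by approx_reciprocal_mantissa(0x400000, table) models the lookup for a mantissa of 1.5 and returns a value close to 1/1.5.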
Referring to FIGURE 5, a table shows a data flow for the floating point division operation. In a functional unit, a processor, and the like, the data flow is directed by control logic such as microcode and a sequencer, or logic circuits. In the first cycle, the value of x^2 is calculated and the coefficients A, B, and C are accessed from the ROM lookup table. The result of the 16-bit by 16-bit multiplication is in the final binary form. In the second cycle, coefficient A is multiplied by x^2 and the coefficient B is multiplied by x. The Bx result is in a carry-save format. In the third cycle, the value 1/X is approximated by adding the values Ax^2, Bx, and C using the 28-bit adder.
In the fourth cycle, a pre-rounded result of Y/X is determined with the result in a carry-save format. In the fifth cycle, candidate rounded values are precalculated from the pre-rounded result. In the sixth cycle, a suitable rounded value is selected.
The sign of the result is obtained by performing an exclusive-OR operation on the sign of the value X and the value Y. The exponent of the result is computed by subtracting the exponent of X from the exponent of Y. The exponent of the result may have to be decremented by one if the mantissa of Y is less than the mantissa of X. The bias of the exponents is taken into account while subtracting the exponent.
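A minimal sketch of the sign and exponent handling just described (the field names are illustrative):

def divide_sign_exponent(sign_y, exp_y, mant_y, sign_x, exp_x, mant_x):
    # Sign of the quotient: exclusive-OR of the operand signs.
    sign = sign_y ^ sign_x
    # Exponent: subtract the biased exponents and restore the +127 bias.
    exponent = exp_y - exp_x + 127
    # Decrement when the mantissa of Y is smaller than the mantissa of X,
    # since the quotient of the mantissas is then below 1.0 and is renormalized.
    if mant_y < mant_x:
        exponent -= 1
    return sign, exponent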
Referring to FIGURE 6, a table shows different cases for rounding to the nearest even scheme. The rounding operation takes a number regarded as infinitely precise and modifies the number to fit the format of the destination. The IEEE standard for binary floating point arithmetic defines four possible rounding schemes: round to the nearest, round towards +infinity, round towards -infinity, and round to 0. The most difficult rounding scheme to implement is round to nearest. In the round to nearest mode, the representable value nearest to the infinitely precise result is delivered and if the two nearest representable values are equally near, the result with a least significant bit of zero is delivered. The described round to nearest technique attains the smallest error and therefore produces a best numerical result. However, the described round to nearest mode utilizes an extra addition and the carry may propagate fully across the number. FIGURE 6 shows different cases of rounding.
The result obtained by a multiplication of the approximation of 1/X and Y (for example, a value Z1) may have an error of 1 in the least significant bit. To determine a correct result (for example, Z) and perform a correct rounding, the following operations are performed. First the precision of Z1 is increased by one bit. A value 1 is then added to the next least significant bit, increasing the value to 25 bits. Then the remainder is computed via the equation, as follows: Rem = (Z1 * X) - Y.
If the remainder is positive, the originally-approximated value is correct. If the remainder is negative, another half is added to Z1 to attain a correct result. If the remainder is zero, one-half is either added or subtracted, depending on the value of the LSB. Thus referring again to FIGURE 5, to compute the correct result in the case of rounding to the nearest value, Q1 is truncated at the round bit position and incremented at the round bit position (Q11) in cycle 5.
In cycle 6, the incremented quotient Q11 is then within a least-significant bit (LSB) of the correct solution. The incremented quotient multiplied by the divisor is compared with the dividend by subtraction. If the remainder is negative, the quotient Q1 is more than half an LSB below the correct value, and is thus incremented. If the remainder is positive, the quotient Q1 is less than half an LSB below the correct answer, and the quotient is not to be incremented. If the remainder is equal to zero, the final value is selected based on the LSB of Q1. To compute the correct result in the case of other rounding modes, the quotient is merely incremented or truncated depending on the operation code.
Pseudocode that describes an example of a suitable technique for rounding to the nearest number is, as follows:
Q11 = (Q1 << 1) + 1;
Remainder = (Q11 * D) - Dividend;
IF (Remainder < 0) THEN Quotient = Q1 + 1;
ELSE IF (Remainder > 0) THEN Quotient = Q1;
ELSE IF ((Remainder = 0) & (LSB of Q1 = 0)) THEN Quotient = Q1;
ELSE IF ((Remainder = 0) & (LSB of Q1 = 1)) THEN Quotient = Q1 + 1.
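Restated as an executable Python sketch of the same decision (the integer quotient, divisor, and dividend values stand in for the hardware's carry-save datapath, and the dividend is assumed to be pre-scaled to the same bit position as Q11 times the divisor):

def round_to_nearest(q1, divisor, dividend):
    # Extend precision by one bit and add 1 at the new round-bit position.
    q11 = (q1 << 1) + 1
    remainder = q11 * divisor - dividend
    if remainder < 0:
        return q1 + 1          # pre-rounded quotient is more than half an LSB low
    if remainder > 0:
        return q1              # pre-rounded quotient is less than half an LSB low
    return q1 + (q1 & 1)       # exact tie: force the LSB to zero (round to even)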
Referring to FIGURE 7, a schematic block diagram illustrates a single integrated circuit chip implementation of a processor 700 that includes a memory interface 702, a geometry decompressor 704, two media processing units 710 and 712, a shared data cache 706, and several interface controllers. The interface controllers support an interactive graphics environment with real-time constraints by integrating fundamental components of memory, graphics, and input output bridge functionality on a single die. The components are mutually linked and closely linked to the processor core with high bandwidth, low-latency communication channels to manage multiple high-bandwidth data streams efficiently and with a low response time. The interface controllers include an UltraPort Architecture Interconnect (UPA) controller 716 and a peripheral component interconnect (PCI) controller 720. The illustrative memory interface 702 is a direct Rambus dynamic RAM (DRDRAM) controller. The shared data cache 706 is a dual-ported storage that is shared among the media processing units 710 and 712 with one port allocated to each media processing unit. The data cache 706 is four-way set associative, follows a write-back protocol, and supports hits in the fill buffer (not shown). The data cache 706 allows fast data sharing and eliminates the need for a complex, error-prone cache coherency protocol between the media processing units 710 and 712.
The UPA controller 716 is a custom interface that attains a suitable balance between high-performance computational and graphic subsystems. The UPA is a cache-coherent, processor-memory interconnect. The UPA attains several advantageous characteristics including a scaleable bandwidth through support of multiple bused interconnects for data and addresses, packets that are switched for improved bus utilization, higher bandwidth, and precise interrupt processing. The UPA performs low latency memory accesses with high throughput paths to memory. The UPA includes a buffered cross-bar memory interface for increased bandwidth and improved scaleability. The UPA supports high-performance graphics with two-cycle single-word writes on the 64-bit UPA interconnect. The UPA interconnect architecture utilizes point-to-point packet switched messages from a centralized system controller to maintain cache coherence. Packet switching improves bus bandwidth utilization by removing the latencies commonly associated with transaction-based designs.
The PCI controller 720 is used as the primary system I/O interface for connecting standard, high-volume, low-cost peripheral devices, although other standard interfaces may also be used. The PCI bus effectively transfers data among high bandwidth peripherals and low bandwidth peripherals, such as CD-ROM players, DVD players, and digital cameras.
Two media processing units 710 and 712 are included in a single integrated circuit chip to support an execution environment exploiting thread level parallelism in which two independent threads can execute simultaneously. The threads may arise from any sources such as the same application, different applications, the operating system, or the runtime environment. Parallelism is exploited at the thread level since parallelism is rare beyond four, or even two, instructions per cycle in general purpose code. For example, the illustrative processor 700 is an eight-wide machine with eight execution units for executing instructions. A typical "general-purpose" processing code has an instruction level parallelism of about two so that, on average, most (about six) of the eight execution units would be idle at any time. The illustrative processor 700 employs thread level parallelism and operates on two independent threads, possibly attaining twice the performance of a processor having the same resources and clock rate but utilizing traditional non-thread parallelism.
Thread level parallelism is particularly useful for JavaTM applications which are bound to have multiple threads of execution. JavaTM methods including "suspend", "resume", "sleep", and the like include effective support for threaded program code. In addition, JavaTM class libraries are thread-safe to promote parallelism. Furthermore, the thread model of the processor 700 supports a dynamic compiler which runs as a separate thread using one media processing unit 710 while the second media processing unit 712 is used by the current application. In the illustrative system, the compiler applies optimizations based on "on-the-fly" profile feedback information while dynamically modifying the executing code to improve execution on each subsequent run. For example, a "garbage collector" may be executed on a first media processing unit 710, copying objects or gathering pointer information, while the application is executing on the other media processing unit 712.
Although the processor 700 shown in FIGURE 7 includes two processing units on an integrated circuit chip, the architecture is highly scaleable so that one to several closely-coupled processors may be formed in a message-based coherent architecture and resident on the same die to process multiple threads of execution. Thus, in the processor 700, a limitation on the number of processors formed on a single die arises from capacity constraints of integrated circuit technology rather than from architectural constraints relating to the interactions and interconnections between processors.
Referring to FIGURE 8, a schematic block diagram shows the core of the processor 700. The media processing units 710 and 712 each include an instruction cache 810, an instruction aligner 812, an instruction buffer 814, a pipeline control unit 826, a split register file 816, a plurality of execution units, and a load/store unit 818. In the illustrative processor 700, the media processing units 710 and 712 use a plurality of execution units for executing instructions. The execution units for a media processing unit 710 include three media functional units (MFU) 820 and one general functional unit (GFU) 822. The media functional units 820 are multiple single-instruction-multiple-datapath (MSIMD) media functional units. Each of the media functional units 820 is capable of processing parallel 16-bit components. Various parallel 16-bit operations supply the single-instruction-multiple-datapath capability for the processor 700 including add, multiply-add, shift, compare, and the like. The media functional units 820 operate in combination as tightly-coupled digital signal processors (DSPs). Each media functional unit 820 has a separate and individual sub-instruction stream, but all three media functional units 820 execute synchronously so that the subinstructions progress lock-step through pipeline stages. The general functional unit 822 is a RISC processor capable of executing arithmetic logic unit (ALU) operations, loads and stores, branches, and various specialized and esoteric functions such as parallel power operations, reciprocal square root operations, and many others. The general functional unit 822 supports less common parallel operations such as the parallel reciprocal square root instruction.
The illustrative instruction cache 810 has a 16 Kbyte capacity and includes hardware support to maintain coherence, allowing dynamic optimizations through self-modifying code. Software is used to indicate that the instruction storage is being modified when modifications occur. The 16K capacity is suitable for performing graphic loops, other multimedia tasks or processes, and general-purpose JavaTM code. Coherency is maintained by hardware that supports write-through, non-allocating caching. Self-modifying code is supported through explicit use of "store-to-instruction-space" instructions store2i. Software uses the store2i instruction to maintain coherency with the instruction cache 810 so that the instruction caches 810 do not have to be snooped on every single store operation issued by the media processing unit 710.
The pipeline control unit 826 is connected between the instruction buffer 814 and the functional units and schedules the transfer of instructions to the functional units. The pipeline control unit 826 also receives status signals from the functional units and the load/store unit 818 and uses the status signals to perform several control functions. The pipeline control unit 826 maintains a scoreboard, generates stalls and bypass controls. The pipeline control unit 826 also generates traps and maintains special registers.
Each media processing unit 710 and 712 includes a split register file 816, a single logical register file including 128 thirty-two bit registers. The split register file 816 is split into a plurality of register file segments 824 to form a multi-ported structure that is replicated to reduce the integrated circuit die area and to reduce access time. A separate register file segment 824 is allocated to each of the media functional units 820 and the general functional unit 822. In the illustrative embodiment, each register file segment 824 has 128 32-bit registers. The first 96 registers (0-95) in the register file segment 824 are global registers. All functional units can write to the 96 global registers. The global registers are coherent across all functional units (MFU and GFU) so that any write operation to a global register by any functional unit is broadcast to all register file segments 824. Registers 96-127 in the register file segments 824 are local registers. Local registers allocated to a functional unit are not accessible or "visible" to other functional units. The media processing units 710 and 712 are highly structured computation blocks that execute software-scheduled data computation operations with fixed, deterministic and relatively short instruction latencies, operational characteristics yielding simplification in both function and cycle time. The operational characteristics support multiple instruction issue through a pragmatic very large instruction word (VLIW) approach that avoids hardware interlocks to account for software that does not schedule operations properly. Such hardware interlocks are typically complex, error-prone, and create multiple critical paths. A VLIW instruction word always includes one instruction that executes in the general functional unit (GFU) 822 and from zero to three instructions that execute in the media functional units (MFU) 820. A MFU instruction field within the VLIW instruction word includes an operation code (opcode) field, three source register (or immediate) fields, and one destination register field.
Instructions are executed in-order in the processor 700 but loads can finish out-of-order with respect to other instructions and with respect to other loads, allowing loads to be moved up in the instruction stream so that data can be streamed from main memory. The execution model eliminates the usage and overhead resources of an instruction window, reservation stations, a re-order buffer, or other blocks for handling instruction ordering. Elimination of the instruction ordering structures and overhead resources is highly advantageous since the eliminated blocks typically consume a large portion of an integrated circuit die. For example, the eliminated blocks consume about 30% of the die area of a Pentium II processor.
To avoid software scheduling errors, the media processing units 710 and 712 are high-performance but simplified with respect to both compilation and execution. The media processing units 710 and 712 are most generally classified as a simple 2-scalar execution engine with full bypassing and hardware interlocks on load operations. The instructions include loads, stores, arithmetic and logic (ALU) instructions, and branch instructions so that scheduling for the processor 700 is essentially equivalent to scheduling for a simple 2-scalar execution engine for each of the two media processing units 710 and 712.
The processor 700 supports full bypasses between the first two execution units within the media processing unit 710 and 712 and has a scoreboard in the general functional unit 822 for load operations so that the compiler does not need to handle nondeterministic latencies due to cache misses. The processor 700 scoreboards long latency operations that are executed in the general functional unit 822, for example a reciprocal square-root operation, to simplify scheduling across execution units. The scoreboard (not shown) operates by tracking a record of an instruction packet or group from the time the instruction enters a functional unit until the instruction is finished and the result becomes available. A VLIW instruction packet contains one GFU instruction and from zero to three MFU instructions. The source and destination registers of all instructions in an incoming VLIW instruction packet are checked against the scoreboard. Any true dependencies or output dependencies stall the entire packet until the result is ready. Use of a scoreboarded result as an operand causes instruction issue to stall for a sufficient number of cycles to allow the result to become available. If the referencing instruction that provokes the stall executes on the general functional unit 822 or the first media functional unit 820, then the stall only endures until the result is available for intra-unit bypass. For the case of a load instruction that hits in the data cache 106, the stall may last only one cycle. If the referencing instruction is on the second or third media functional units 820, then the stall endures until the result reaches the writeback stage in the pipeline where the result is bypassed in transmission to the split register file 816.
The scoreboard automatically manages load delays that occur during a load hit. In an illustrative embodiment, all loads enter the scoreboard to simplify software scheduling and eliminate NOPs in the instruction stream.
The scoreboard is used to manage most interlocks between the general functional unit 822 and the media functional units 820. All loads and non-pipelined long-latency operations of the general functional unit 822 are scoreboarded. The long-latency operations include division idiv, fdiv instructions, reciprocal square root frecsqrt, precsqrt instructions, and power ppower instructions. None of the results of the media functional units 820 is scoreboarded. Non-scoreboarded results are available to subsequent operations on the functional unit that produces the results following the latency of the instruction.
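As a purely conceptual model of the dependency check described above (the packet and register-set representations are assumptions, not the hardware scoreboard):

def packet_must_stall(packet, pending_dest_regs):
    # packet: list of instructions, each a dict with 'sources' and 'dest' register numbers.
    # pending_dest_regs: destination registers of scoreboarded, still-unfinished operations.
    for instr in packet:
        # True dependency: a source operand is produced by an in-flight operation.
        if any(src in pending_dest_regs for src in instr['sources']):
            return True
        # Output dependency: the destination is still owned by an earlier operation.
        if instr['dest'] in pending_dest_regs:
            return True
    return False

# Example: a packet reading register 5 stalls while a scoreboarded load to register 5 is in flight.
assert packet_must_stall([{'sources': [5, 6], 'dest': 7}], {5})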
The illustrative processor 700 has a rendering rate of over fifty million triangles per second without accounting for operating system overhead. Therefore, data feeding specifications of the processor 700 are far beyond the capabilities of cost-effective memory systems. Sufficient data bandwidth is achieved by rendering of compressed geometry using the geometry decompressor 104, an on-chip real-time geometry decompression engine. Data geometry is stored in main memory in a compressed format. At render time, the data geometry is fetched and decompressed in real-time on the integrated circuit of the processor 700. The geometry decompressor 104 advantageously saves memory space and memory transfer bandwidth. The compressed geometry uses an optimized generalized mesh structure that explicitly calls out most shared vertices between triangles, allowing the processor 700 to transform and light most vertices only once. In a typical compressed mesh, the triangle throughput of the transform-and-light stage is increased by a factor of four or more over the throughput for isolated triangles. For example, during processing of triangles, multiple vertices are operated upon in parallel so that the utilization rate of resources is high, achieving effective spatial software pipelining. Thus operations are overlapped in time by operating on several vertices simultaneously, rather than overlapping several loop iterations in time. For other types of applications with high instruction level parallelism, high trip count loops are software-pipelined so that most media functional units 820 are fully utilized.
Referring to FIGURE 9, a schematic block diagram shows a logical view of the register file 816 and functional units in the processor 700. The physical implementation of the core processor 700 is simplified by replicating a single functional unit to form the three media processing units 710. The media processing units 710 include circuits that execute various arithmetic and logical operations including general-purpose code, graphics code, and video-image-speech (VIS) processing. VIS processing includes video processing, image processing, digital signal processing (DSP) loops, speech processing, and voice recognition algorithms, for example.
A media processing unit 710 includes a 32-bit floating-point multiplier-adder to perform signal transform operations, clipping, facedness operations, sorting, triangle set-up operations, and the like. The media processing unit 710 similarly includes a 16X16-bit integer multiplier-adder to perform operations such as lighting, transform normal lighting, computation and normalization of vertex view vectors, and specular light source operations. The media processing unit 710 supports clipping operations and 1/square root operations for lighting tasks, and reciprocal operations for screen space dividing, clipping, set-up, and the like. For VIS operations, the media processing unit 710 supports 16/32-bit integer add operations, 16X16-bit integer multiplication operations, parallel shifting, and pack, unpack, and merge operations. For general-purpose code, the media processing unit 710 supports 32-bit integer addition and subtraction, and 32-bit shift operations. The media processing unit 710 supports a group load operation for unit stride code, a bit extract operation for alignment and multimedia functionality, a pdist operation for data compression and averaging, and a byte shuffle operation for multimedia functionality.
The media processing unit 710 supports the operations by combining functionality and forming a plurality of media functional units 820 and a general functional unit 822. The media functional units 820 support a 32-bit floating-point multiply and add operation, a 16X16-bit integer multiplication and addition operation, and an 8/16/32-bit parallel add operation. The media functional units 820 also support a clip operation, a bit extract operation, a pdist operation, and a byte shuffle operation. Other functional units that are in some way incompatible with the media functional unit 820 or consume too much die area for a replicated structure are included in the general functional unit 822. The general functional unit 822 therefore includes a load/store unit, a reciprocal unit, a 1/square root unit, a pack, unpack and merge unit, a normal and parallel shifter, and a 32-bit adder.
Computation instructions perform the real work of the processor 700 while load and store instructions may be considered mere overhead for supplying and storing computational data to and from the computational functional units. To reduce the number of load and store instructions in proportion to the number of computation instructions, the processor 700 supports group load (ldg) and store long (stl) instructions. A single load group loads eight consecutive 32-bit words into the split register file 816. A single store long sends the contents of two 32-bit registers to a next level of memory hierarchy. The group load and store long instructions are used to transfer data among the media processing units 710, the UPA controller 716, and the geometry decompressor 704.
Referring to FIGURE 10, a simplified schematic timing diagram illustrates timing of the processor pipeline 7000. The pipeline 7000 includes nine stages including three initiating stages, a plurality of execution phases, and two terminating stages. The three initiating stages are optimized to include only those operations necessary for decoding instructions so that jump and call instructions, which are pervasive in the JavaTM language, execute quickly. Optimization of the initiating stages advantageously facilitates branch prediction since branches, jumps, and calls execute quickly and do not introduce many bubbles.
The first of the initiating stages is a fetch stage 1010 during which the processor 700 fetches instructions from the 16Kbyte two-way set-associative instruction cache 810. The fetched instructions are aligned in the instruction aligner 812 and forwarded to the instruction buffer 814 in an align stage 1012, a second stage of the initiating stages. The aligning operation properly positions the instructions for storage in a particular segment of the four register file segments and for execution in an associated functional unit of the three media functional units 820 and one general functional unit 822. In a third stage, a decoding stage 1014 of the initiating stages, the fetched and aligned VLIW instruction packet is decoded and the scoreboard (not shown) is read and updated in parallel. The four register file segments each hold either floating-point data or integer data.
Following the decoding stage 1014, the execution stages are performed. The particular stages that are performed within the execution stages vary depending on the particular instruction to be executed. A single execution stage 1022 is performed for critical single-cycle operations 1020 such as add, logical, compare, and clip instructions. Address-cycle type operations 1030, such as load instructions, are executed in two execution cycles including an address computation stage 1032 followed by a single-cycle cache access 1034. General arithmetic operations 1040, such as floating-point and integer multiply and addition instructions, are executed in four stages X1 1042, X2 1044, X3 1046, and X4 1048.
Extended operations 1050 are long instructions, such as floating-point divides, reciprocal square roots, 16-bit fixed-point calculations, 32-bit floating-point calculations, and parallel power instructions, that last for six cycles but are not pipelined.
The two terminating stages include a trap-handling stage 1060 and a write-back stage 1062 during which result data is written-back to the split register file 816.
Computational instructions have fundamental importance in defining the architecture and the instruction set of the processor 700. Computational instructions are only semantically separated into integer and floating-point categories since the categories operate on the same set of registers.
The general functional unit 822 executes a fixed-point power computation instruction ppower. The power instruction has the form ppower r[rs1],r[rs2],r[rd] and computes "r[rs1]**r[rs2]" where each of the sources is operated upon as a pair of independent 16-bit S2.13 format fixed-point quantities. The result is a pair of independent 16-bit S2.13 format fixed-point powers placed in the register r[rd]. Zero to any power is defined to give a zero result.
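As an arithmetic illustration only (the S2.13 conversion helpers, the wrap-around on overflow, and the use of floating point are assumptions made for the sketch; the hardware computes the fixed-point power directly):

def s2_13_to_float(v):
    # Interpret a 16-bit value as S2.13: sign, 2 integer bits, 13 fraction bits.
    if v & 0x8000:
        v -= 0x10000                     # two's-complement sign extension
    return v / 2**13

def float_to_s2_13(f):
    return int(round(f * 2**13)) & 0xFFFF    # wrap into 16 bits for the sketch

def ppower_half(rs1_half, rs2_half):
    base = s2_13_to_float(rs1_half)
    exponent = s2_13_to_float(rs2_half)
    if base == 0.0:
        return 0                          # zero to any power is defined to give zero
    # Sketch assumes a positive base; other cases are not modeled here.
    return float_to_s2_13(base ** exponent)

# Example: 2.0 ** 0.5 in S2.13 (2.0 -> 0x4000, 0.5 -> 0x1000).
assert abs(s2_13_to_float(ppower_half(0x4000, 0x1000)) - 2 ** 0.5) < 1e-3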
The general functional unit 822 includes functional units that execute a floating-point division fdiv instruction, a floating-point reciprocal frecip instruction, a floating-point square root fsqrt instruction, and a floating-point reciprocal square root frecsqrt instruction, each for single-precision numbers. The floating-point division instruction has the form fdiv rs1,rs2,rd and computes a single-precision floating-point division "r[rs1]/r[rs2]" and delivers the result in r[rd]. The floating-point reciprocal instruction has the form frecip rs1,rd and executes a single-precision floating-point reciprocal with a latency of eight cycles. The floating-point square root instruction has the form fsqrt rs1,rd and executes a single-precision floating-point square root operation. The floating-point reciprocal square root instruction has the form frecsqrt rs1,rd and executes a single-precision floating-point reciprocal of the square root operation on the quantity in r[rs1] and places the result in r[rd].
The general functional unit 822 also supports a fixed-point parallel reciprocal square root precsqrt instruction. The fixed-point reciprocal square root instruction has the form precsqrt rs1,rd. Precsqrt computes a pair of S2.13 format fixed-point reciprocal square roots of the pair of S2.13 format values on register r[rs1]. Results are delivered in register r[rd]. The result for a source operand that is less than or equal to zero is undefined.
The general functional unit 822 executes an integer divide idiv instruction that computes either "r[rs1]/r[rs2]" or "r[rs1]/sign_ext(imm14)" and places the result in r[rd].
While the invention has been described with reference to various embodiments, it will be understood that these embodiments are illustrative and that the scope of the invention is not limited to them. Many variations, modifications, additions and improvements of the embodiments described are possible. For example, those skilled in the art will readily implement the steps necessary to provide the structures and methods disclosed herein, and will understand that the process parameters, materials, and dimensions are given by way of example only and can be varied to achieve the desired structure as well as modifications which are within the scope of the invention. Variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein, without departing from the scope and spirit of the invention as set forth in the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method of computing a floating point division operation in a computing device comprising: computing a piece-wise quadratic approximation of the number X using an equation of the form: 1/X = Ax2 + Bx + C and a multiplication of (1/X)*Y, the number X having a mantissa and an exponent, the computing operation including: accessing the A, B, and C coefficients from a storage; computing the value Ax2+Bx+C result, the result having a mantissa and an exponent; multiplying the computed value Ax2 + Bx + C times a multiplier Y to generate a pre-rounded result; and rounding the pre-rounded result to the nearest value.
2. A method according to Claim 1 wherein: the number X has a mantissa and an exponent in a plurality of parallel data paths; and the operation of computing the value Ax2+Bx+C further comprises: squaring the x term of the number X to obtain an x2 term; multiplying the x2 term times the coefficient A to obtain an Ax2 term; multiplying the x term times the coefficient B to obtain a Bx term; and summing the Ax2 term, the Bx term, and the C term to form a reciprocal term 1/X that is multiplied by the multiplier Y to determine a pre-rounded result.
3. A method according to either Claim 1 or Claim 2 further comprising: rounding the pre-rounded result according to IEEE-754 specification including: selecting a round bit position; truncating the pre-rounded result at the round bit position; incrementing the truncated pre-rounded result; multiplying the incremented and truncated pre-rounded result times the multiplier Y to generate a rounding test result; comparing the pre-rounded result to the rounding test result; if the rounding test result is larger, incrementing the pre-rounded result to determine a rounded result; if the pre-rounded result is larger, setting the value of the rounded result equal to the pre-rounded result value; and if the pre-rounded result is equal to the rounding test result, setting the rounded result value according to the LSB of the pre-rounded result value.
4. A method according to any of Claims 1 through 3 further comprising: deriving the coefficients A, B, and C to reduce least mean square error using a least squares approximation of a plurality of equally-spaced points within an interval.
5. A method according to any of Claims 1 through 4 wherein: the number X is a floating point number in which the value X designates the mantissa and x designates lower order bits of the floating point number X.
6. A method according to any of Claims 1 through 5 further comprising: computing the value Ax2+Bx+C result including: computing partial products x2, Ax2, and Bx without rounding.
7. A method according to any of Claims 1 through 6 further comprising: accessing the A, B, and C coefficients from a storage including: indexing the storage using higher order bits of the mantissa.
8. A method according to any of Claims 1 through 7 further comprising: accessing the A, B, and C coefficients from a storage including: indexing the storage using higher order bits of the mantissa excluding the most significant bit.
9. A method according to any of Claims 1 through 8 further comprising: truncating the pre-rounded result at the round bit position and incrementing the truncated pre-rounded result in a single clock cycle.
10. A method according to any of Claims 1 through 9 further comprising: accessing the A, B, and C coefficients from a storage and squaring the x term of the number X to obtain an x2 term in a single clock cycle.
11. A method according to any of Claims 1 through 10 further comprising: multiplying the x2 term times the coefficient A to obtain an Ax2 term and multiplying the x term times the coefficient B to obtain a Bx term in a single clock cycle.
12. A method according to any of Claims 1 through 11 further comprising: summing the Ax2 term, the Bx term, and the C term to form an approximation result and shifting the exponent right in a single clock cycle.
13. An integrated circuit including: a multiplier; an adder coupled to the multiplier; and a control logic coupled to the multiplier and the adder, the control logic performing the method according to any of Claims 1 through 12.
14. A processor comprising: an instruction storage; a register file coupled to the instruction storage; a functional unit including: a multiplier; an adder coupled to the multiplier; and a control logic coupled to the multiplier and the adder, the control logic performing the method according to any of Claims 1 through 12.
15. An integrated circuit including: a storage; a first multiplier and a second multiplier coupled to the storage; an adder coupled to the storage, the first multiplier, and the second multiplier; an incrementer coupled to the storage, the first multiplier, and the second multiplier; a control logic coupled to the storage, the first multiplier, the second multiplier, the adder, and the incrementer, the control logic that computes a piece- wise quadratic approximation of the number X having a mantissa and an exponent in a plurality of parallel data paths using an equation of the form 1/X = Ax2 + Bx + C and a multiplication of (1/X)*Y, the computing operation including: accessing the A, B, and C coefficients from a storage; squaring the x term of the number X to obtain an x2 term; multiplying the x2 term times the coefficient A to obtain an Ax2 term; multiplying the x term times the coefficient B to obtain a Bx term; summing the Ax2 term, the Bx term, and the C term to form a reciprocal term 1/X; multiplying the reciprocal term 1/X by a multiplier Y to determine a pre-rounded result; and rounding the pre-rounded result to the nearest value.
16. An integrated circuit according to Claim 15 wherein: the first multiplier is a 16-bit by 16-bit multiplier; the second multiplier is a 28X24 multiplier; and the adder is a 28-bit adder.
PCT/US2000/001780 1999-01-29 2000-01-24 Division unit in a processor using a piece-wise quadratic approximation technique WO2000045253A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/240,312 US6351760B1 (en) 1999-01-29 1999-01-29 Division unit in a processor using a piece-wise quadratic approximation technique
US09/240,312 1999-01-29

Publications (2)

Publication Number Publication Date
WO2000045253A1 WO2000045253A1 (en) 2000-08-03
WO2000045253A9 true WO2000045253A9 (en) 2002-05-02

Family

ID=22906047

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/001780 WO2000045253A1 (en) 1999-01-29 2000-01-24 Division unit in a processor using a piece-wise quadratic approximation technique

Country Status (2)

Country Link
US (1) US6351760B1 (en)
WO (1) WO2000045253A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6646639B1 (en) 1998-07-22 2003-11-11 Nvidia Corporation Modified method and apparatus for improved occlusion culling in graphics systems
US7509486B1 (en) * 1999-07-08 2009-03-24 Broadcom Corporation Encryption processor for performing accelerated computations to establish secure network sessions connections
US6844880B1 (en) * 1999-12-06 2005-01-18 Nvidia Corporation System, method and computer program product for an improved programmable vertex processing model with instruction set
US6954214B2 (en) * 2000-10-30 2005-10-11 Microsoft Corporation Efficient perceptual/physical color space conversion
US7456845B2 (en) * 2000-10-30 2008-11-25 Microsoft Corporation Efficient perceptual/physical color space conversion
US6678710B1 (en) 2000-11-03 2004-01-13 Sun Microsystems, Inc. Logarithmic number system for performing calculations in a processor
US7006101B1 (en) 2001-06-08 2006-02-28 Nvidia Corporation Graphics API with branching capabilities
US7456838B1 (en) 2001-06-08 2008-11-25 Nvidia Corporation System and method for converting a vertex program to a binary format capable of being executed by a hardware graphics pipeline
US7281140B2 (en) * 2001-12-28 2007-10-09 Intel Corporation Digital throttle for multiple operating points
JP3886870B2 (en) * 2002-09-06 2007-02-28 株式会社ルネサステクノロジ Data processing device
US7539720B2 (en) * 2004-12-15 2009-05-26 Sun Microsystems, Inc. Low latency integer divider and integration with floating point divider and method
WO2006107996A2 (en) 2005-04-05 2006-10-12 Sunfish Studio, Llc Modal interval processor
US8254700B1 (en) 2006-10-03 2012-08-28 Adobe Systems Incorporated Optimized method and system for entropy coding
US8452831B2 (en) * 2009-03-31 2013-05-28 Oracle America, Inc. Apparatus and method for implementing hardware support for denormalized operands for floating-point divide operations
US9086890B2 (en) 2012-01-06 2015-07-21 Oracle International Corporation Division unit with normalization circuit and plural divide engines for receiving instructions when divide engine availability is indicated
US9158498B2 (en) * 2013-02-05 2015-10-13 Intel Corporation Optimizing fixed point divide
US9454345B1 (en) * 2013-12-02 2016-09-27 Google Inc. Apparatus for faster division

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4336599A (en) 1980-06-09 1982-06-22 Sperry Corporation Circuit for performing a square root calculation
US4477879A (en) 1981-12-28 1984-10-16 Sperry Corporation Floating point processor architecture which performs square root by hardware
US4682302A (en) 1984-12-14 1987-07-21 Motorola, Inc. Logarithmic arithmetic logic unit
US4852038A (en) 1985-07-02 1989-07-25 Vlsi Techology, Inc. Logarithmic calculating apparatus
US4857882A (en) 1985-07-02 1989-08-15 Vlsi Technology, Inc. Comparator array logic
US4737925A (en) 1985-12-06 1988-04-12 Motorola, Inc. Method and apparatus for minimizing a memory table for use with nonlinear monotonic arithmetic functions
US4734876A (en) 1985-12-18 1988-03-29 Motorola, Inc. Circuit for selecting one of a plurality of exponential values to a predetermined base to provide a maximum value
JPH0720263B2 (en) 1986-01-17 1995-03-06 ソニー株式会社 Carrier color signal processing circuit
US4757467A (en) 1986-05-15 1988-07-12 Rca Licensing Corporation Apparatus for estimating the square root of digital samples
US4823301A (en) * 1987-10-22 1989-04-18 Tektronix, Inc. Method and circuit for computing reciprocals
US4991132A (en) * 1987-12-17 1991-02-05 Matsushita Electric Industrial Co., Ltd. Apparatus for executing division by high-speed convergence processing
US5337266A (en) 1987-12-21 1994-08-09 Arnold Mark G Method and apparatus for fast logarithmic addition and subtraction
US4949296A (en) 1988-05-18 1990-08-14 Harris Corporation Method and apparatus for computing square roots of binary numbers
US5359551A (en) 1989-06-14 1994-10-25 Log Point Technologies, Inc. High speed logarithmic function generating apparatus
US5184317A (en) 1989-06-14 1993-02-02 Pickett Lester C Method and apparatus for generating mathematical functions
US5212661A (en) 1989-10-16 1993-05-18 Matsushita Electric Industrial Co., Ltd. Apparatus for performing floating point arithmetic operation and rounding the result thereof
US5097434A (en) 1990-10-03 1992-03-17 The Ohio State University Research Foundation Hybrid signed-digit/logarithmic number system processor
US5245564A (en) * 1991-05-10 1993-09-14 Weitek Corporation Apparatus for multiplying operands
US5268857A (en) 1992-01-08 1993-12-07 Ncr Corporation Device and method for approximating the square root of a number
US5305248A (en) 1993-04-23 1994-04-19 International Business Machines Corporation Fast IEEE double precision reciprocals and square roots
US5537345A (en) 1993-10-14 1996-07-16 Matsushita Electrical Industrial Co. Ltd. Mathematical function processor utilizing table information
US5563818A (en) 1994-12-12 1996-10-08 International Business Machines Corporation Method and system for performing floating-point division using selected approximation values
US5600581A (en) 1995-02-22 1997-02-04 Motorola, Inc. Logarithm/inverse-logarithm converter utilizing linear interpolation and method of using same
US5729481A (en) 1995-03-31 1998-03-17 International Business Machines Corporation Method and system of rounding for quadratically converging division or square root
US5675822A (en) 1995-04-07 1997-10-07 Motorola Inc. Method and apparatus for a digital signal processor having a multiplierless computation block
US5737257A (en) 1995-09-13 1998-04-07 Holtek Microelectronics, Inc. Method and apparatus for compression of integer multiplication table
US5751619A (en) 1996-01-22 1998-05-12 International Business Machines Corporation Recurrent adrithmetical computation using carry-save arithmetic
US5764555A (en) 1996-03-13 1998-06-09 International Business Machines Corporation Method and system of rounding for division or square root: eliminating remainder calculation
US5764990A (en) 1996-03-18 1998-06-09 Hewlett-Packard Company Compact encoding for storing integer multiplication Sequences
US5768170A (en) 1996-07-25 1998-06-16 Motorola Inc. Method and apparatus for performing microprocessor integer division operations using floating point hardware
US6115733A (en) * 1997-10-23 2000-09-05 Advanced Micro Devices, Inc. Method and apparatus for calculating reciprocals and reciprocal square roots

Also Published As

Publication number Publication date
WO2000045253A1 (en) 2000-08-03
US6351760B1 (en) 2002-02-26

Similar Documents

Publication Publication Date Title
US6349319B1 (en) Floating point square root and reciprocal square root computation unit in a processor
US6671796B1 (en) Converting an arbitrary fixed point value to a floating point value
US6341300B1 (en) Parallel fixed point square root and reciprocal square root computation unit in a processor
US9891887B2 (en) Subdivision of a fused compound arithmetic operation
US6279100B1 (en) Local stall control method and structure in a microprocessor
US6351760B1 (en) Division unit in a processor using a piece-wise quadratic approximation technique
US5923871A (en) Multifunctional execution unit having independently operable adder and multiplier
US6487575B1 (en) Early completion of iterative division
KR101009095B1 (en) Graphics processor having multipurpose double precision functional unit
US6490607B1 (en) Shared FP and SIMD 3D multiplier
US5418736A (en) Optimized binary adders and comparators for inputs having different widths
US20060041610A1 (en) Processor having parallel vector multiply and reduce operations with sequential semantics
JP2001501330A (en) Digital signal processing integrated circuit architecture
WO2000033183A9 (en) Method and structure for local stall control in a microprocessor
TW201812571A (en) Vector multiply-ADD instruction
JPH09507592A (en) Unified floating point and integer data path for RISC processor
US20010042187A1 (en) Variable issue-width vliw processor
US7117342B2 (en) Implicitly derived register specifiers in a processor
US20030005261A1 (en) Method and apparatus for attaching accelerator hardware containing internal state to a processing core
US5590351A (en) Superscalar execution unit for sequential instruction pointer updates and segment limit checks
US6615338B1 (en) Clustered architecture in a VLIW processor
US6678710B1 (en) Logarithmic number system for performing calculations in a processor
Arakawa et al. SH4 RISC multimedia microprocessor
Rampogna et al. MACGIC, a low-power, re-configurable DSP
Yu et al. An energy-efficient mobile vertex processor with multithread expanded VLIW architecture and vertex caches

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
COP Corrected version of pamphlet

Free format text: PAGES 1-20, DESCRIPTION, REPLACED BY NEW PAGES 1-20; PAGES 21-24, CLAIMS, REPLACED BY NEW PAGES 21-24; PAGES 1/11-11/11, DRAWINGS, REPLACED BY NEW PAGES 1/10-10/10; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 Ep: pct application non-entry in european phase