USRE35794E - System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache

Info

Publication number
USRE35794E
Authority
US
United States
Prior art keywords
branch
instruction
address
Prior art date
Legal status
Expired - Lifetime
Application number
US08/285,520
Inventor
William M. Johnson
Current Assignee
GlobalFoundries Inc
Original Assignee
Advanced Micro Devices Inc
Priority date
Filing date
Publication date
Application filed by Advanced Micro Devices Inc
Priority to US08/285,520
Application granted
Publication of USRE35794E
Anticipated expiration
Assigned to GLOBALFOUNDRIES INC.: Affirmation of patent assignment (Assignor: Advanced Micro Devices, Inc.)
Assigned to GLOBALFOUNDRIES U.S. INC.: Release by secured party (Assignor: Wilmington Trust, National Association)
Status: Expired - Lifetime

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 - Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F 9/3885 - Concurrent instruction execution using a plurality of independent parallel functional units
    • G06F 9/3802 - Instruction prefetching
    • G06F 9/3804 - Instruction prefetching for branches, e.g. hedging, branch folding
    • G06F 9/3806 - Instruction prefetching for branches using address prediction, e.g. return stack, branch history buffer
    • G06F 9/3836 - Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F 9/3854 - Instruction completion, e.g. retiring, committing or graduating
    • G06F 9/3858 - Result writeback, i.e. updating the architectural state or memory

Abstract

A super-scalar processor is disclosed wherein branch-prediction information is provided within an instruction cache memory. Each instruction cache block stored in the instruction cache memory includes branch-prediction information fields in addition to instruction fields, which indicate the address of the instruction block's successor and the location of a branch instruction within the instruction block. Thus, the next cache block can be easily fetched without waiting on a decoder or execution unit to indicate the proper fetch action to be taken for correctly predicted branching.

Description

BACKGROUND OF THE INVENTION
The present invention relates to a method and apparatus for improving processor performance by reducing processing delays associated with branch instructions. In particular, the present invention provides an instruction cache for a super-scalar processor wherein branch-prediction information is provided within the instruction cache.
The time taken by a computing system to perform a particular application is determined by three basic factors, namely, the processor cycle time, the number of processor instructions required to perform the application, and the average number of processor cycles required to execute an instruction. Overall system performance can be improved by reducing one or more of these factors. For example, the average number of cycles required to perform an application can be significantly reduced by employing a multi-processor architecture, i.e., providing more than one processor to execute separate instructions concurrently.
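Expressed as the standard performance identity (the notation here is added for illustration and is not part of the original text), these three factors multiply to give the execution time:
$$T_{\text{exec}} = N_{\text{instructions}} \times \text{CPI} \times t_{\text{cycle}}$$
Reducing any one of the three right-hand factors reduces the overall execution time.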
There are disadvantages, however, associated with the implementation of a multi-processor architecture. In order to be effective, multi-processing requires an application that can be easily segmented into independent tasks to be performed concurrently by the different processors. The requirement for a readily segmented task limits the effective applicability of multi-processing. Further, the increase in processing performance attained via multi-processing in many circumstances may not offset the additional expense incurred by requiring multiple processors.
Single-processor hardware architectures that avoid the disadvantages associated with multi-processing have been proposed. These so-called "super-scalar" processors permit a sustained execution rate of more than one instruction per processor cycle, as opposed to conventional scalar processors which--while capable of handling multiple instructions in different pipeline stages in one cycle--are limited to a maximum pipeline capacity of one instruction per cycle. In contrast, a super-scalar pipeline architecture achieves concurrency between instructions both in different pipeline stages and within the same pipeline stage.
A super-scalar processor that executes more than one instruction per cycle, however, can only be effective when instructions can be supplied at a sufficient rate. It is readily apparent that instruction fetching can be a limiting factor in overall system performance if the average rate of instruction fetching is less than the average rate of instruction execution. Providing the necessary instruction bandwidth for sequential instructions is relatively easy, as the instruction fetcher can simply fetch several instructions per cycle. It is much more difficult, however, to provide sufficient instruction bandwidth in the presence of non-sequential fetches caused by branches, as the branches make the instruction fetching dependent on the results of instruction execution. Thus, the instruction fetcher can either stall or fetch incorrect instructions when the outcome of a branch is not known.
For example, FIG. 1 illustrates two instruction runs consisting of a number of instructions occupying four instruction-cache blocks (assuming a four-word cache block) in an instruction cache memory. The first instruction run consists of instructions S1-S5 that contain a branch to a second instruction run T1-T4. FIG. 2 illustrates how these instruction runs are sequenced through a four-instruction decoder and a two-instruction decoder, assuming for purposes of illustration that two cycles are required to determine the outcome of a branch. As would be expected, the four-instruction decoder provides a higher instruction bandwidth than the two-instruction decoder, but neither provides sufficient instruction bandwidth for a super-scalar processor. As illustrated in FIG. 3, the instruction bandwidth improves dramatically if the branch delays are reduced to zero.
The dependency between the instruction fetcher and the execution unit caused by branches can be reduced by predicting the outcome of the branch during an instruction fetch without waiting for the execution unit to indicate whether or not the branch should be taken. Branch prediction relies heavily on the fact that the outcome of a branch does not change frequently over a given period of time. The instruction fetcher can predict future branch executions using information collected on the outcome of the previous branch executions performed by the execution unit.
A conventional method for hardware branch prediction uses a branch target buffer to collect information about the most recently executed branches. See, for example, "Branch Prediction Strategies and Branch Target Buffer Design", by J.K.F. Lee and A.J. Smith, IEEE Computer, Vol. 17, pp. 6-22, January 1984. Typically, the branch target buffer is accessed using an instruction address, and indicates whether or not the instruction at that address is a branch instruction. If the instruction is a branch instruction, the branch target buffer indicates the predicted outcome and the target address.
The hit ratio of a branch target buffer, i.e., the probability that a branch is found in the branch target buffer at the time it is fetched, increases as the size of the branch target buffer increases. FIG. 4 is a graph of the hit ratio for a target branch buffer for selected sample benchmark programs, and illustrates the necessity of a relatively large branch target buffer in order to obtain an acceptable prediction accuracy. Accordingly, it would be desirable to provide an improved hardware branch prediction architecture that would require less hardware support as compared with a conventional branch target buffer.
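For illustration, a minimal Python sketch of the kind of branch target buffer lookup described above; the direct-mapped organization, entry count, and field names are assumptions made for the example, not details taken from the cited paper.
```python
class BranchTargetBuffer:
    """Toy direct-mapped BTB: each entry holds a tag, a predicted outcome and a target."""

    def __init__(self, num_entries=1024):
        self.num_entries = num_entries
        self.entries = [None] * num_entries   # each entry: (tag, predict_taken, target)

    def lookup(self, pc):
        """Return (hit, predict_taken, target) for the instruction fetched at pc."""
        tag, idx = divmod(pc, self.num_entries)
        entry = self.entries[idx]
        if entry is not None and entry[0] == tag:
            return True, entry[1], entry[2]   # hit: use the stored prediction and target
        return False, False, None             # miss: fall through to sequential fetch

    def update(self, pc, taken, target):
        """Record the outcome of an executed branch."""
        tag, idx = divmod(pc, self.num_entries)
        self.entries[idx] = (tag, taken, target)
```
A larger table raises the hit ratio, which is why the buffer must be relatively large to reach acceptable accuracy.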
SUMMARY OF THE INVENTION
The present invention provides a super-scalar processor wherein branch-prediction information is provided within an instruction cache memory. Each instruction cache block stored in the instruction cache memory includes branch-prediction information fields in addition to instruction fields, which indicate the address of the instruction block's successor and information indicating the location of a branch instruction within the instruction block. Thus, the next cache block can be easily fetched without waiting on a decoder or execution unit to indicate the proper fetch action to be taken for correctly predicted branching.
More specifically, branch prediction is accomplished in accordance with the present invention by loading a plurality of instruction blocks into the instruction cache memory, wherein each of the instruction blocks includes a plurality of instructions and instruction fetch information. The instruction fetch information includes an address tag, a branch block index and a successor index that includes a successor valid bit. A fetch program counter is used to generate and supply a fetch program counter value to the instruction cache memory in order to prefetch one of the plurality of instruction blocks stored in the instruction cache memory. The processor determines whether the successor valid bit of the prefetched instruction block is set to a predetermined condition which indicates that a branch instruction within the prefetched instruction block is predicted as taken. If the successor valid bit is not set to the predetermined condition, the fetch program counter value is incremented and supplied to the instruction cache memory to prefetch a succeeding instruction block. If the successor valid bit is set to the predetermined condition, a predicted target branch address is generated by the instruction cache memory based on information contained in the instruction fetch information field associated with the instruction block. The predicted target branch address and the branch location of the branch instruction within the instruction cache memory are then stored in a branch prediction memory. The branch instruction is subsequently executed with a branch execution unit which generates an actual branch location address and a target branch address for the executed branch instruction. The actual branch location and the target branch address are then respectively compared with the branch location and predicted target branch address stored in the branch prediction memory. A misprediction signal is generated if the compared values are not equal, and the successor valid bit and instruction fetch information are updated for the instruction block in response to the misprediction signal.
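The fetch-side portion of this method can be summarized in a short Python sketch; the dictionary keys and the 16-byte block size are illustrative assumptions, not the embodiment's actual signal names.
```python
def prefetch_step(block, fetch_pc, prediction_fifo, block_bytes=16):
    """One prefetch decision for the instruction block just read at fetch_pc.

    `block` is a dict carrying the fetch information named in the text
    (successor valid bit, branch location, reconstructed target); all names
    here are illustrative.  Returns the next fetch program counter value."""
    if not block["successor_valid"]:
        return fetch_pc + block_bytes            # predict not taken: fetch the next sequential block
    # Predict taken: record the branch location and predicted target for later checking,
    # then redirect fetching to the predicted target.
    prediction_fifo.append({"location": block["branch_location"],
                            "target": block["predicted_target"]})
    return block["predicted_target"]
```
The verification against the executed branch, and the update of the fetch information on a misprediction, are described in the detailed description below.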
The utilization of the instruction cache and branch prediction memory as described above provides branch prediction accuracy substantially identical to that of a target branch buffer without requiring as much hardware support.
BRIEF DESCRIPTION OF THE DRAWINGS
With the above as background, reference should now be made to the following detailed description of the preferred embodiments in conjunction with the drawings, in which:
FIG. 1 shows a sequence of two instruction runs to illustrate decoder behavior;
FIG. 2 illustrates the sequencing of the instruction runs shown in FIG. 1 through a two-instruction and four-instruction decoder;
FIG. 3 illustrates the improvements in instruction bandwidth for the instruction runs illustrated in FIG. 2 if branch delays are avoided;
FIG. 4 is a graph of the hit ratio of a target branch buffer;
FIG. 5 illustrates a preferred layout for an instruction-cache entry in accordance with the present invention;
FIG. 6 is an example of instruction-cache entries for the code sequence illustrated in FIG. 3;
FIG. 7 is a block diagram of a super-scalar processor according to the present invention;
FIG. 8 is a block diagram of an instruction cache employed in the super-scalar processor illustrated in FIG. 7;
FIG. 9 is a block diagram of a branch prediction FIFO employed in the super-scalar processor illustrated in FIG. 7; and
FIG. 10 is a block diagram of a branch execution unit employed in the super-scalar processor illustrated in FIG. 7.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The basic operation of an instruction cache for a super-scalar processor in accordance with the present invention will be discussed with reference to FIG. 5, which illustrates a preferred layout for an instruction-cache entry required by the super-scalar processor. In the example illustrated, the cache entry holds four instructions and instruction fetch information which is shown in expanded form to include a conventional address tag field and two additional fields: a successor index field which indicates both the next entry predicted to be fetched and the first instruction within the next entry predicted to be executed, and a branch block index field which indicates the location of a branch point within the instruction block. The successor index field does not specify a full instruction address, but is of sufficient size to select any instruction address within the instruction cache. The successor index field includes a successor valid bit that indicates a branch is predicted to be taken when set, and that a branch is not predicted to be taken when cleared.
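For illustration, the cache-entry layout of FIG. 5 can be modeled as a small record; the Python field names and defaults below are illustrative assumptions rather than the embodiment's actual field encodings.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ICacheEntry:
    """One instruction-cache block in the style of FIG. 5 (illustrative sketch)."""
    address_tag: int                     # conventional address tag for the block
    instructions: List[int] = field(default_factory=lambda: [0, 0, 0, 0])  # four instructions
    successor_valid: bool = False        # set -> a branch in this block is predicted taken
    successor_index: int = 0             # cache index (plus word offset) of the predicted successor
    branch_block_index: int = 0          # position of the predicted-taken branch within this block
```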
FIG. 6 illustrates instruction-cache entries for the code sequence shown in FIG. 3, assuming a 64 Kbyte direct-mapped cache and the indicated instruction address. When a cache entry is first loaded, the address tag is set and the successor valid bit is cleared. The default for a newly-loaded entry, therefore, is to predict that a branch is not taken and the next sequential instruction block is to be fetched. FIG. 6 also illustrates that a branch target program counter can be constructed at branch points by concatenating the successor index field of the instruction block where the branch occurs to the address tag of the successor instruction block.
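A sketch of that concatenation is shown below; the bit widths are treated as parameters because they depend on the cache size assumed (64 Kbyte in the FIG. 6 example, 8 Kbyte in the FIG. 8 embodiment), and the defaults here are illustrative assumptions.
```python
def reconstruct_target_pc(successor_tag, successor_index,
                          index_bits=11, byte_in_word_bits=2):
    """Rebuild the predicted branch-target program counter by concatenating the
    successor block's address tag (high-order bits) with the successor index
    (block number plus word-within-block).  Widths here match an 8 Kbyte cache
    with 4-byte words and are assumptions for the example."""
    return (successor_tag << (index_bits + byte_in_word_bits)) | (successor_index << byte_in_word_bits)
```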
The validity of instructions at the beginning of a current instruction block is preferably determined by the low-order bits of the successor index field in the preceding instruction block. The successor index of the preceding instruction block may point to any instruction within the current instruction block, and instructions up to this point in the current instruction block are not executed by the processor. The validity of instructions at the end of the block is determined by the branch block index, which indicates the point where a branch is predicted to be taken. The branch block index is required by an instruction decoder to determine valid instructions, while cache entries are retrieved based on the successor index fields alone.
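As a sketch, the decoder's valid-instruction window for one block can be computed from these two pieces of fetch information; the function and argument names, and the four-word block size, are illustrative assumptions.
```python
def valid_instruction_slots(entry_word, successor_valid, branch_block_index,
                            words_per_block=4):
    """Mark which instructions of a block are presented to the decoder as valid.

    entry_word: low-order bits of the preceding block's successor index, i.e. the
    first instruction predicted to execute in this block.
    branch_block_index: last instruction predicted to execute when a branch in
    this block is predicted taken (successor_valid set)."""
    last = branch_block_index if successor_valid else words_per_block - 1
    return [entry_word <= i <= last for i in range(words_per_block)]
```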
To check branch predictions, the processor keeps a list of predicted branches, stored in the order in which the branches are predicted, in a branch prediction FIFO associated with the instruction cache. Each entry on the list indicates the location of the branch in the instruction cache, which is identified by concatenating the successor index of the entry preceding the branching entry with the branch location index field. Each entry also contains a complete program-counter value for the target of the branch.
The processor executes all branches in their original program sequence with a branch execution unit, and compares information resulting from the execution of the branches with information at the head of the list of predicted branches. The following conditions must hold for a successful branch prediction. First, if the branch is taken, its location in the instruction cache must match the location of the next branch on the list contained in the branch prediction FIFO. This condition is required to detect a taken branch that was predicted to be not taken. Secondly, the predicted target address of the branch at the head of the list must match the next instruction address determined by executing the branch.
The second comparison is relevant only if the locations match, and is required primarily to detect a branch which was not taken that was predicted to be taken. However, as the predicted target address is based on the address tag of the successor block, this comparison also detects that cache replacement during execution has removed the original target entry. In addition, comparing program-counter values checks that indirect branches were properly predicted.
The branch is mispredicted if either of the above-described conditions does not hold. When a misprediction occurs, the appropriate cache entry must be fetched using the location of the branch determined by the execution unit. The successor valid bit and instruction fetch information for the incorrect instruction block must also be updated based on the misprediction to reflect the actual result of the execution of the branch. For example, the successor valid bit is cleared if a branch had been predicted as taken but was not taken, so that on the next fetch of the instruction block the branch will be predicted as not taken. Thus, the successor valid bit and instruction fetch information always reflect the actual result of the previous execution of the branch instruction.
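The two checking conditions can be summarized in a short Python sketch; the dictionary keys and argument names are illustrative, not the embodiment's signal names.
```python
def check_branch(predicted_head, taken, actual_location, actual_next_pc):
    """Apply the two prediction-validity conditions described above.

    predicted_head: the oldest entry of the branch prediction FIFO as a dict
    with 'location' and 'target' keys, or None if no branch was predicted taken.
    Returns True when the executed branch was mispredicted."""
    on_list = predicted_head is not None and actual_location == predicted_head["location"]
    if taken and not on_list:
        return True    # condition 1 fails: a taken branch was predicted not taken
    if on_list and actual_next_pc != predicted_head["target"]:
        return True    # condition 2 fails: predicted-taken branch fell through, stale target, or bad indirect target
    return False       # both the prediction and the predicted target were correct
```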
With the above as background, reference should now be made to FIG. 7 for a detailed description of a preferred embodiment of the invention. FIG. 7 illustrates a block diagram of a super-scalar processor that includes a bus interface unit (BIU) 10, an instruction cache 12, a branch prediction FIFO 14, an instruction decoder 16, a register file 18, a reorder buffer 20, a branch execution unit 22, an arithmetic logic unit (ALU) 24, a shifter unit 30, a load unit 32, a store unit 33, and a data cache 34.
The reorder buffer 20 is managed as a FIFO. When an instruction is decoded by the instruction decoder 16, a corresponding entry is allocated in the reorder buffer 20. The result value of the decoded instruction is written into the allocated entry when the execution of the instruction is completed. The result value is then written into the register file 18 if there are no exceptions associated with the instruction. If the instruction is not complete when its associated entry reaches the head of the reorder buffer 20, the advancement of the reorder buffer 20 is halted until the instruction is completed; additional entries, however, can continue to be allocated. If there is an exception or branch misprediction, the entire contents of the reorder buffer 20 are discarded.
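A minimal Python sketch of this FIFO management follows; the entry fields and the use of a dictionary for the register file are illustrative assumptions.
```python
from collections import deque

class ReorderBuffer:
    """FIFO-managed reorder buffer, simplified to the behavior described above."""

    def __init__(self):
        self.fifo = deque()                       # entries in decode (allocation) order

    def allocate(self, dest):
        entry = {"dest": dest, "result": None, "done": False, "exception": False}
        self.fifo.append(entry)                   # allocation continues even while retirement stalls
        return entry

    def complete(self, entry, result, exception=False):
        entry.update(result=result, done=True, exception=exception)

    def retire(self, register_file):
        """Write completed results to the register file in program order."""
        while self.fifo and self.fifo[0]["done"]:
            head = self.fifo[0]
            if head["exception"]:
                self.flush()                      # exception (or misprediction): discard all entries
                return
            register_file[head["dest"]] = head["result"]
            self.fifo.popleft()

    def flush(self):
        self.fifo.clear()
```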
As illustrated in FIG. 8, the instruction cache 12 includes an instruction store array 36 which is a direct mapped instruction cache organized as 512 instruction blocks of four words each, a tag array 38 having 512 entries composed of a 19 bit tag and a single valid bit for the entire block, a dual ported successor array 40 having 512 entries composed of an 11 bit successor index and a successor valid bit which indicates when set that the successor index stored in the successor array 40 should be used to access the instruction store array 36, and indicates when cleared that no branch is predicted within the instruction block, a dual ported block status array 42 that contains a branch block indicator for each instruction block in the instruction cache 12 which indicates the last instruction predicted to be executed within a block, a fetch program counter (PC) 44 (including a PC latch 46, a MUX unit 48 and an incrementer (INC) 50) that generates a PC value that is used for prefetching the instruction stream from the instruction cache 12, an instruction fetch control unit 52 that controls the fetching of instructions from the instruction cache 12, the replacement of cache blocks on misses, and the reformatting of the successor array 40 and branch block array 42 on branches that are mispredicted, and an instruction register latch 54 which is loaded with the instructions to be provided to the instruction decoder 16.
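Assuming 32-bit byte addresses (an assumption; the 512-block, four-word and 19-bit-tag figures are from the text), the stated organization implies the following address breakdown, shown here as a Python sketch.
```python
def split_fetch_address(byte_address):
    """Decompose a 32-bit byte address for the FIG. 8 cache: 512 blocks of four
    4-byte words give a 2-bit byte offset, a 2-bit word select, a 9-bit block
    index and a 19-bit tag; the 11-bit successor index is block index + word."""
    byte_in_word  = byte_address & 0x3
    word_in_block = (byte_address >> 2) & 0x3
    block_index   = (byte_address >> 4) & 0x1FF
    tag           = (byte_address >> 13) & 0x7FFFF
    successor_style_index = (block_index << 2) | word_in_block   # 11 bits
    return tag, block_index, word_in_block, byte_in_word, successor_style_index
```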
The branch prediction FIFO 14 is used to maintain information related to every predicted branch within an instruction block. Specifically, the location in the cache where the branch is predicted to occur (i.e. the branch location) as well as the predicted branch target PC of the branch are stored within the branch prediction FIFO 14. As illustrated in FIG. 9, the branch prediction FIFO 14 is preferably implemented as a fixed array with a target PC FIFO and a branch location FIFO, incrementing read/write pointers 56 and 58, and also includes a target PC comparator 60 and a branch location comparator 62 which are respectively coupled to a branch location data bus (CPC) and a target PC data bus (TPC). The output signals generated by the target PC comparator 60 and the branch location comparator 62 are provided to a branch FIFO control circuit 63. The FIFO 14 could alternatively be implemented as a shiftable array or a circular FIFO.
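A behavioral Python sketch of this structure follows; the FIFO depth is an illustrative assumption, and the sketch simplifies the hardware by checking every executed branch against the head entry.
```python
class BranchPredictionFIFO:
    """Sketch of the FIG. 9 structure: parallel branch-location and target-PC
    queues compared against the CPC and TPC buses (depth is an assumption)."""

    def __init__(self, depth=4):
        self.depth = depth
        self.locations, self.targets = [], []

    def push(self, branch_location, target_pc):
        if len(self.locations) < self.depth:
            self.locations.append(branch_location)
            self.targets.append(target_pc)

    def compare_and_pop(self, actual_location_cpc, actual_target_tpc):
        """Compare an executed branch (CPC/TPC bus values) against the head entry."""
        if (not self.locations
                or actual_location_cpc != self.locations[0]
                or actual_target_tpc != self.targets[0]):
            self.locations.clear()      # misprediction: the FIFO resets and signals
            self.targets.clear()        # the instruction fetch control unit
            return True
        self.locations.pop(0)
        self.targets.pop(0)
        return False
```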
The branch execution unit 22 contains the hardware that actually executes the branch instructions and writes the branch results back to the reorder buffer 20. As shown in FIG. 10, the branch execution unit 22 includes a branch reservation station 62, a branch computation unit 64 and a result bus interface 66. The reservation station 62 is a FIFO array which receives decoded instructions from the instruction decoder 16 and operand information from the register file 18 and reorder buffer 20 and holds this information until the decoded instruction is free from dependencies and the branch computation unit 64 is free to execute the instruction. The result bus interface 66 couples the branch execution unit 22 to the CPC bus and TPC bus, which in turn are coupled to the branch location comparator 62 and the target PC comparator 60 of the branch prediction FIFO 14 as illustrated in FIG. 9.
In operation, the instruction cache 12 is loaded with instructions from an instruction memory via the BIU 10. The fetch PC 44 supplies a predicted fetch PC value to the instruction cache 12 in order to prefetch an instruction stream. As previously stated, the successor valid bit for each instruction block is cleared when the instruction block is first loaded into the instruction cache 12. Thus, when a given instruction block is first fetched from the instruction cache 12, any branch in the block is predicted as not taken. The prefetched instruction block is supplied to the instruction decoder 16 via the instruction decode latch 54. The predicted fetch PC is then incremented via the incrementer 50 and loaded back into the fetch PC latch 46 via the MUX unit 48. The resulting fetch PC is then supplied to the instruction cache 12 in order to fetch the next sequential instruction block in the instruction store.
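The resulting fetch-PC update through the MUX unit 48 can be sketched as a single selection step; the priority ordering shown for the MUX inputs is an assumption consistent with the behavior described in this section, and the names are illustrative.
```python
def next_fetch_pc(fetch_pc, successor_valid, reconstructed_target,
                  mispredict, correct_target, block_bytes=16):
    """One update of the fetch PC latch 46 through the MUX of FIG. 8."""
    if mispredict:
        return correct_target           # redirect: target PC from the branch execution unit
    if successor_valid:
        return reconstructed_target     # predicted taken: tag + successor index read from the cache
    return fetch_pc + block_bytes       # incrementer path: next sequential instruction block
```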
The branch execution unit 22 processes any branch instruction contained in the first prefetched instruction block, and generates an actual PC value and target PC value for the executed branch instruction. Note that if the branch is not taken on execution, the target PC value generated by the branch execution unit 22 will be the next sequential value after the actual PC value, i.e., the term "target PC" in this sense does not necessarily mean the target of an executed branch, but instead indicates the address of the next instruction block to be executed regardless of the branch results. The actual PC value and the target PC value are respectively supplied to the CPC bus and the TPC bus and loaded into the branch location comparator and the target PC comparator in the branch prediction FIFO.
Where a branch was predicted as not taken but was taken on execution, the comparison of the actual PC value supplied by the branch execution unit 22 with the branch location value supplied from the branch location FIFO of the branch prediction FIFO 14 will fail. The branch prediction FIFO 14 resets and generates a branch misprediction signal which is supplied to the instruction fetch control unit of the instruction cache 12. The target PC from the branch execution unit 22 is then loaded into the fetch PC latch 46 via the MUX unit 48 and the successor array is updated to set the successor valid bit under control of the instruction fetch control circuit 52. Thus, the branch will be predicted as taken on subsequent fetches of the instruction block.
When the successor valid bit is set indicating a branch is predicted as taken, the value of the fetch PC latch is loaded into the next available entry in the branch prediction FIFO. A reconstructed predicted fetch PC formed from the successor index and the tag field read out of the tag array is loaded via the MUX 48 into the fetch PC latch 46. This reconstructed fetch PC is supplied to the instruction store array 36 to fetch the next instruction and to the branch prediction FIFO. Thus, the branch prediction FIFO entry contains the branch location of the branch as well as the predicted target of the branch.
The branch execution unit 22 subsequently executes the branch instruction and generates an actual PC value and a target PC value which are supplied to the branch location comparator and the target PC comparator in the branch prediction FIFO. If the branch was predicted to be taken, the PC value generated by the branch execution unit 22 will always match the branch location loaded from the branch location FIFO. Three possible conditions, however, will result in the target PC value generated by the branch execution unit 22 not matching the target PC stored in the branch prediction FIFO 14: the branch was predicted as taken but was not taken, in which case the successor valid bit must be cleared; the branch executed a subroutine return to an address which did not match the predicted address, thereby requiring the successor index to be updated; or cache replacement occurred prior to the execution of the branch instruction, requiring the reloading of the instruction cache.
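A compact sketch mapping each of these three cases to the repair it requires is given below; the function and argument names are illustrative only.
```python
def repair_for_target_mismatch(actually_taken, return_address_mismatch, target_block_evicted):
    """Choose the fetch-information repair for a predicted-taken branch whose
    target-PC comparison failed (the three cases described above)."""
    if not actually_taken:
        return "clear the successor valid bit"                       # predicted taken, not taken
    if return_address_mismatch:
        return "update the successor index with the actual return address"
    if target_block_evicted:
        return "reload the replaced target block into the instruction cache"
    return "no repair needed"
```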
The principal hardware cost of the above-described branch prediction scheme is the increase in the cache size caused by the successor index and branch block index fields associated with each entry in the instruction cache. This increase is minimal when compared with other hardware prediction schemes, however, as the present invention saves storage space by predicting only one taken branch per cache block, and predicting non-taken branches by not storing any branch information associated with the instruction block into the successor index. For an 8 Kbyte direct mapped cache, the additional fields add about 8% to the cache storage required. The increase in overall system performance due to branch prediction, however, justifies the increased size requirement for the instruction cache.
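A back-of-the-envelope check of that figure, under one plausible accounting of the per-block bits, is shown below; the 1-bit successor valid and 2-bit branch block index widths are assumptions, since the text does not itemize them.
```python
# Rough accounting of the added fetch-information storage per cache block.
instruction_bits = 4 * 32                 # four 32-bit instructions per block
tag_bits         = 19 + 1                 # address tag plus block valid bit (FIG. 8)
added_bits       = 11 + 1 + 2             # successor index + successor valid + branch block index
overhead = added_bits / (instruction_bits + tag_bits)
print(f"fetch-information overhead is approximately {overhead:.1%}")  # about 9.5%, the same order as the stated ~8%
```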
The requirement for updating the cache entry when a branch is mispredicted does conflict with the requirement to fetch the correct branch target, i.e., unless it is possible to read and write the fetch information for two different entries simultaneously, the updating of the fetch information on a mispredicted branch takes a cycle away from instruction fetching. The requirement for an additional cycle causes only a small degradation in performance, however, as mispredicted branches occur infrequently and the increase in performance associated with branch prediction easily outweighs any degradation in performance due to the additional cycles required by mispredicted branches.
The invention has been described with particular reference to certain preferred embodiments thereof. The invention is not limited to these disclosed embodiments and modifications and variations may be made within the scope of the appended claims.

Claims (18)

What is claimed is:
Matter enclosed in brackets [ ] appeared in the original patent and is deleted by this reissue.
1. A branch prediction method, comprising the steps of:
a. loading a plurality of instruction blocks into an instruction cache memory, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises an address tag, a predicted target branch address, a branch block index and a successor index that includes a successor valid bit;
b. generating and supplying a fetch program counter value to said instruction cache memory in order to prefetch one of said plurality of instruction blocks [and store] stored in said instruction cache memory;
c. determining whether said successor valid bit of said prefetched instruction block is set to a predetermined condition which indicates that a branch instruction within said prefetched instruction block is predicted as taken;
d. [incrementing said fetch program counter and supplying the incremented fetch program counter value to said instruction cache memory to prefetch a succeeding instruction block if said successor valid bit is not set to said predetermined condition, and] generating a branch location address indicative of the location of said branch instruction within said instruction [memory] cache memory and a predicted target branch address if said successor valid bit is set to said predetermined condition;
e. storing said predicted target branch address and said branch location address in a branch prediction memory if said successor valid bit is set to said predetermined condition;
f. incrementing said fetch program counter value and supplying the incremented fetch program counter value to said instruction cache memory to prefetch a succeeding instruction block if said successor valid bit is not set to said predetermined condition;
g. executing said branch instruction with an execution unit and generating an actual branch address and a target branch address for the executed branch instruction;
[g.] h. comparing said actual branch address generated by said execution unit with said branch location address stored in said branch prediction memory and generating a first misprediction signal if a branch corresponding to said branch instruction was taken on execution and either said actual branch address is not equal to said branch location address or said executed target branch address is not equal to said predicted target branch address stored in said branch prediction memory;
[h.] i. comparing [the executed target] said actual branch address with [the predicted] said branch location address stored in said branch prediction memory and generating a second misprediction signal if [the executed target] said branch corresponding to said branch instruction was not taken on execution and said actual branch address is [not] equal to [the predicted target] said branch location address;
[i.] j. updating the successor valid bit and instruction fetch information for said instruction block in response to said first or second misprediction signal; and
[j.] k. updating said fetch program counter value with the target branch address in response to said first or second misprediction signal.
2. A method as set forth in claim 1, wherein said predicted target branch address is generated by concatenating said successor index of said prefetched instruction block to an address tag of a successor instruction block.
3. A method as set forth in claim 2, wherein said branch location .Iadd.address .Iaddend.is generated by concatenating a successor index from a preceding instruction block . .with the branch location address.!. .Iadd.to an address tag .Iaddend.of said prefetched instruction block.
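For illustration only: claims 1 through 3 describe forming the predicted target branch address and the branch location address by concatenating a successor index with an address tag. The C sketch below models that address formation under assumed field widths and an assumed block size of four instructions; the structure name fetch_info and every constant are hypothetical and are not taken from the patent.

#include <stdint.h>
#include <stdbool.h>

/* Assumed geometry: a fetch address splits into a block tag and a
 * low-order index within the block.  All widths are illustrative.   */
#define BLOCK_WORDS 4u                    /* instructions per block (assumed) */
#define INDEX_BITS  2u                    /* log2(BLOCK_WORDS)                */
#define INDEX_MASK  (BLOCK_WORDS - 1u)

/* Instruction fetch information stored with each cache block (claim 1, step a). */
struct fetch_info {
    uint32_t address_tag;        /* identifies the block                      */
    uint32_t successor_index;    /* points at the predicted successor block   */
    uint32_t branch_block_index; /* position of the branch within the block   */
    bool     successor_valid;    /* set when a branch here is predicted taken */
};

/* Predicted target: successor index of the prefetched block concatenated
 * to the address tag of the successor block (cf. claim 2).                 */
static uint32_t predicted_target(uint32_t successor_tag, uint32_t successor_index)
{
    return (successor_tag << INDEX_BITS) | (successor_index & INDEX_MASK);
}

/* Branch location: successor index from the preceding block concatenated
 * to the address tag of the prefetched block (cf. claim 3).                */
static uint32_t branch_location(uint32_t prefetched_tag, uint32_t preceding_successor_index)
{
    return (prefetched_tag << INDEX_BITS) | (preceding_successor_index & INDEX_MASK);
}

The shift-and-OR form of the concatenation is only one plausible encoding; the claims do not fix the bit positions.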
4. An apparatus comprising:
a. first means for storing a plurality of instruction blocks, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises an address tag, a predicted target branch address, a branch block index and a successor index that includes a successor valid bit;
b. second means for generating and supplying a fetch program counter value to said first means in order to prefetch one of said plurality of instruction blocks . .and store.!. .Iadd.stored .Iaddend.in said first means;
c. third means for determining whether said successor valid bit of said prefetched instruction block is set to a predetermined condition which indicates that a branch instruction within said prefetched instruction block is predicted as taken;
d. fourth means for . .incrementing said fetch program counter and supplying the incremented fetch program counter value to said first means to prefetch a succeeding instruction block if said successor valid bit is not set to said predetermined condition; e. fifth means for.!. generating a branch location address and a predicted target branch address if said successor valid bit is set to said predetermined condition;
. .f. sixth.!. .Iadd.e. fifth .Iaddend.means for storing said predicted target branch address and said branch location address .Iadd.if said successor valid bit is set to said predetermined condition.Iaddend.;
. .g. seventh.!. .Iadd.f. sixth .Iaddend.means for .Iadd.incrementing said fetch program counter value and supplying the incremented fetch program counter value to said instruction cache memory to prefetch a succeeding instruction block if said successor valid bit is not set to said predetermined condition;
g. seventh means for .Iaddend.executing said branch instruction and generating an actual branch address and a target branch address for the executed branch instruction;
h. eighth means for comparing said actual .Iadd.branch .Iaddend.address generated by said seventh means with said branch location .Iadd.address .Iaddend.stored in said sixth means and .Iadd.generating a first misprediction signal if a branch corresponding to said branch instruction was taken on execution and either said actual branch address is not equal to said branch location address or said executed target branch address is not equal to said predicted branch address stored in said sixth means;
i. ninth means .Iaddend.for comparing . .the executed target.!. .Iadd.said actual .Iaddend.branch address with . .the predicted.!. .Iadd.said .Iaddend.branch .Iadd.location .Iaddend.address stored in said . .branch prediction memory.!. .Iadd.sixth means .Iaddend.and generating a .Iadd.second .Iaddend.misprediction signal . .based on the result of said comparisons.!. .Iadd.if said branch corresponding to said branch instruction was not taken on execution and said actual branch address is equal to said branch location address.Iaddend.;
. .i. ninth means.!. .Iadd.j. tenth means .Iaddend.for updating the successor valid bit and instruction fetch information for said instruction block in response to said .Iadd.first or second .Iaddend.misprediction signal; and
. .j..!. .Iadd.k. eleventh means for .Iaddend.updating said .Iadd.fetch .Iaddend.program counter value with the target branch address .Iadd.in response to said first or second misprediction signal.Iaddend..
5. An apparatus as claimed in claim 4, wherein said . .seventh.!. .Iadd.fourth .Iaddend.means generates said predicted target branch address by concatenating said successor index of said prefetched instruction block to an address tag of a successor instruction block.
6. A method as set forth in claim 4, wherein said . .seventh.!. .Iadd.fourth .Iaddend.means generates said branch location .Iadd.address .Iaddend.by concatenating a successor index from a preceding instruction block . .with the branch location address.!. .Iadd.to an address tag .Iaddend.of said prefetched instruction block.
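Claims 4 through 6 recast the same sequence in means-plus-function form. A minimal sketch of the per-block prefetch decision performed by the second through sixth means (test the successor valid bit, then either record a prediction and redirect the fetch or fall through to the next sequential block) follows; it reuses the hypothetical fetch_info structure and address helpers from the sketch after claim 3 and is not a prescribed implementation.

/* One prefetch decision, reusing fetch_info, predicted_target() and
 * branch_location() from the sketch after claim 3 (all assumed names). */
struct prefetch_result {
    uint32_t next_fetch_pc;          /* block address to fetch next             */
    bool     predicted_taken;        /* true when a prediction should be stored */
    uint32_t predicted_target_addr;
    uint32_t branch_location_addr;
};

static struct prefetch_result prefetch_step(uint32_t fetch_pc,
                                            const struct fetch_info *blk,
                                            uint32_t successor_tag,
                                            uint32_t preceding_successor_index)
{
    struct prefetch_result r = { fetch_pc + 1u, false, 0u, 0u };

    if (blk->successor_valid) {
        /* Predicted-taken branch in this block: compute both addresses so the
         * caller can place them in the branch prediction memory, then redirect. */
        r.predicted_taken       = true;
        r.predicted_target_addr = predicted_target(successor_tag, blk->successor_index);
        r.branch_location_addr  = branch_location(blk->address_tag, preceding_successor_index);
        r.next_fetch_pc         = r.predicted_target_addr;
    }
    return r;
}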
7. An apparatus comprising:
a bus interface unit, an instruction cache memory coupled to said bus interface unit and configured to receive a plurality of instruction blocks, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises an address tag, a branch block index and a successor index that includes a successor valid bit;
a branch prediction memory coupled to said instruction cache memory;
an instruction decoder coupled to said instruction cache memory, . .an instruction branch memory coupled to said instruction cache memory,.!. wherein when said successor valid bit is not set to a predetermined condition.Iadd., .Iaddend.a fetch program counter value is incremented and supplied to said instruction cache memory for prefetching a succeeding instruction block, and when said successor valid bit is set to the predetermined condition, a predicted target branch address is generated by said instruction cache memory based on information contained in said instruction fetch information and said predicted target branch address within the instruction cache memory is stored in said branch prediction . .said.!. memory; and
a processing unit including a branch execution unit coupled to said instruction decoder and a register file, wherein said branch instruction is subsequently executed with said branch execution unit which generates an actual branch location address and a target branch address for said executed branch instruction and said actual branch location .Iadd.address .Iaddend.and the target branch address are respectively compared with the branch location .Iadd.address .Iaddend.and said predicted target branch address stored in the branch prediction memory, generating a misprediction signal if .Iadd.said branch instruction was taken on execution and .Iaddend.the compared values are not equal, and said successor valid bit and said instruction fetch information being updated for the instruction block in response to the misprediction signal and updating said .Iadd.fetch .Iaddend.program counter value with the target branch address .Iadd.in response to the misprediction signal.Iaddend..
8. An apparatus as claimed in claim 7, wherein said instruction cache memory includes an instruction store array coupled to said bus interface unit, a tag array coupled to said instruction store array, a successor array coupled to said tag array, and a block status array coupled to said successor array.
9. An apparatus as claimed in claim 8, wherein said instruction cache memory further comprises a fetch program counter that includes a PC latch, an incrementer, and a MUX unit.
10. An apparatus as claimed in claim 9, wherein said instruction cache memory further comprises an instruction fetch control circuit coupled to said fetch program counter, wherein said instruction fetch control circuit controls the operation of said MUX unit to selectively load the PC latch with a value generated by said incrementer, a value supplied by said branch . .control.!. .Iadd.execution .Iaddend.unit, or a reconstructed fetch PC value.
11. An apparatus as claimed in claim 7, wherein said branch prediction memory comprises a branch target FIFO and a branch location FIFO.
12. An apparatus as claimed in claim 11, wherein said branch prediction memory further comprises a target PC comparator coupled to said branch target FIFO and a bus that is coupled to said branch execution unit, and a branch location comparator coupled to said branch location FIFO and a bus that is coupled to said branch execution unit, wherein the output of said target PC comparator and said branch location comparator are coupled to a control circuit.
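Claims 9 through 12 recite the fetch program counter (PC latch, incrementer, MUX) and a branch prediction memory built from a branch target FIFO and a branch location FIFO with comparators fed by the branch execution unit. The sketch below models that datapath in software; the FIFO depth, the select encoding, and all function names are assumptions made for illustration.

#include <stdint.h>
#include <stdbool.h>

/* Fetch program counter datapath: PC latch, incrementer, MUX (cf. claims 9-10). */
enum pc_select { SEL_INCREMENTER, SEL_BRANCH_EXECUTION, SEL_RECONSTRUCTED };

struct fetch_pc {
    uint32_t pc_latch;    /* holds the current fetch block address */
};

static void fetch_pc_update(struct fetch_pc *fpc, enum pc_select sel,
                            uint32_t branch_execution_value, uint32_t reconstructed_value)
{
    switch (sel) {
    case SEL_INCREMENTER:      fpc->pc_latch += 1u;                    break; /* next sequential block  */
    case SEL_BRANCH_EXECUTION: fpc->pc_latch = branch_execution_value; break; /* redirect on resolution */
    case SEL_RECONSTRUCTED:    fpc->pc_latch = reconstructed_value;    break; /* reconstructed fetch PC */
    }
}

/* Branch prediction memory: parallel target and location FIFOs (cf. claim 11).
 * Depth is arbitrary; overflow handling is omitted for brevity.               */
#define BP_DEPTH 8u
struct branch_prediction_mem {
    uint32_t target_fifo[BP_DEPTH];
    uint32_t location_fifo[BP_DEPTH];
    unsigned head, tail, count;
};

static void bp_push(struct branch_prediction_mem *bp, uint32_t target, uint32_t location)
{
    bp->target_fifo[bp->tail]   = target;
    bp->location_fifo[bp->tail] = location;
    bp->tail = (bp->tail + 1u) % BP_DEPTH;
    bp->count++;
}

/* Comparators against the addresses produced by the branch execution unit
 * (cf. claim 12); a control circuit would consume the two match flags.       */
static bool bp_compare_head(const struct branch_prediction_mem *bp,
                            uint32_t actual_location, uint32_t actual_target,
                            bool *location_match, bool *target_match)
{
    if (bp->count == 0u)
        return false;                       /* nothing outstanding */
    *location_match = (bp->location_fifo[bp->head] == actual_location);
    *target_match   = (bp->target_fifo[bp->head]   == actual_target);
    return true;
}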
.Iadd.13. A branch prediction method comprising the steps of:
a. loading a plurality of instruction blocks into an instruction cache memory, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises a successor index indicative of a predicted target branch address and a successor valid bit;
b. generating and supplying a fetch program counter value to said instruction cache memory in order to prefetch one of said plurality of instruction blocks stored in said instruction cache memory;
c. determining whether said successor valid bit of said prefetched instruction block is set to a predetermined condition which indicates that a branch instruction within said prefetched instruction block is predicted as taken;
d. generating a branch location address indicative of the location of said branch instruction within said instruction cache memory and a predicted target branch address if said successor valid bit is set to said predetermined condition;
e. storing said predicted target branch address and said branch location address in a branch prediction memory if said successor valid bit is set to said predetermined condition;
f. incrementing said fetch program counter value and supplying the incremented fetch program counter value to said instruction cache memory to prefetch a succeeding instruction block if said successor valid bit is not set to said predetermined condition;
g. executing said branch instruction with an execution unit and generating an actual branch address and a target branch address for the executed branch instruction;
h. comparing said actual branch address generated by said execution unit with said branch location address stored in said branch prediction memory and generating a first misprediction signal if said branch instruction was taken on execution and either said actual branch address is not equal to said branch location address or said executed target branch address is not equal to said predicted target branch address stored in said branch prediction memory;
i. comparing said actual branch address with said branch location address stored in said branch prediction memory and generating a second misprediction signal if said branch instruction was not taken and said actual branch address is equal to said branch location address;
j. updating the successor valid bit and instruction fetch information for said instruction block in response to said first or second misprediction signal; and
k. updating said fetch program counter value with the target branch address in response to said first or second misprediction signal..Iaddend.
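Claim 13 makes the two misprediction conditions explicit: a first signal when the branch was taken but either its address or its target disagrees with the stored prediction, and a second signal when a branch at the recorded branch location was not taken. A compact C sketch of just those checks follows, with the same caveat that the interface is assumed rather than specified by the patent.

#include <stdint.h>
#include <stdbool.h>

/* Misprediction detection at branch resolution (cf. claim 13, steps h and i).
 * All names and types are illustrative assumptions.                          */
struct misprediction {
    bool first;   /* taken, but wrong branch address or wrong target */
    bool second;  /* not taken, yet at the recorded branch location  */
};

static struct misprediction
check_prediction(bool taken,
                 uint32_t actual_branch_addr, uint32_t actual_target_addr,
                 uint32_t stored_branch_location, uint32_t stored_predicted_target)
{
    struct misprediction m = { false, false };

    if (taken) {
        /* Step h: first misprediction signal. */
        m.first = (actual_branch_addr != stored_branch_location) ||
                  (actual_target_addr != stored_predicted_target);
    } else {
        /* Step i: second misprediction signal. */
        m.second = (actual_branch_addr == stored_branch_location);
    }
    return m;
}

Either signal would then drive steps j and k: rewriting the block's instruction fetch information and reloading the fetch program counter with the target branch address.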
.Iadd.14. A method as set forth in claim 13, wherein said instruction fetch information further comprises an address tag and wherein said predicted target branch address is generated by concatenating said successor index of said prefetched instruction block to an address tag of a successor instruction block..Iaddend.
.Iadd.15. A method as set forth in claim 14, wherein said branch location address is generated by concatenating a successor index from a preceding instruction block to an address tag of said prefetched instruction block..Iaddend.
.Iadd.16. An apparatus comprising:
a. first means for storing a plurality of instruction blocks, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises a successor index indicative of a predicted target branch address and a successor valid bit;
b. second means for generating and supplying a fetch program counter value to said first means in order to prefetch one of said plurality of instruction blocks stored in said first means;
c. third means for determining whether said successor valid bit of said prefetched instruction block is set to a predetermined condition which indicates that a branch instruction within said prefetched instruction block is predicted as taken;
d. fourth means for generating a branch location address and a predicted target branch address if said successor valid bit is set to said predetermined condition;
e. fifth means for storing said predicted target branch address and said branch location address if said successor valid bit is set to said predetermined condition;
f. sixth means for incrementing said fetch program counter value and supplying the incremented fetch program counter value to said first means to prefetch a succeeding instruction block if said successor valid bit is not set to said predetermined condition;
g. seventh means for executing said branch instruction and generating an actual branch address and a target branch address for the executed branch instruction;
h. eighth means for comparing said actual branch address generated by said seventh means with said branch location address stored in said sixth means and generating a first misprediction signal if a branch corresponding to said branch instruction was taken on execution and either said actual branch address is not equal to said branch location address or said executed target branch address is not equal to said predicted target branch address stored in said fifth means;
i. ninth means for comparing said actual branch address with said branch location address stored in said sixth means and generating a second misprediction signal if said branch instruction was not taken on execution and said actual branch address is equal to said branch location address;
j. tenth means for updating the successor valid bit and instruction fetch information for said instruction block in response to said first or second misprediction signal; and
k. eleventh means for updating said fetch program counter value with the target branch address in response to said first or second misprediction signal..Iaddend.
.Iadd.17. An apparatus as claimed in claim 16, wherein said instruction fetch information further comprises an address tag and wherein said fourth means generates said predicted target branch address by concatenating said successor index of said prefetched instruction block to an address tag of a successor instruction block..Iaddend.
.Iadd.18. A method as set forth in claim 16, wherein said instruction fetch information further comprises an address tag and wherein said fourth means generates said branch location address by concatenating a successor index from a preceding instruction block to an address tag of said prefetched instruction block..Iaddend.
.Iadd.19. An apparatus comprising:
an instruction cache memory configured to receive a plurality of instruction blocks, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises a successor index indicative of a predicted target branch address and a successor valid bit;
a branch prediction memory coupled to said instruction cache memory;
an instruction decoder coupled to said instruction cache memory, wherein when said successor valid bit is not set to a predetermined condition, a fetch program counter value is incremented and supplied to said instruction cache memory for prefetching a succeeding instruction block, and when said successor valid bit is set to the predetermined condition, a predicted target branch address is generated for a branch location address by said instruction cache memory based on information contained in said instruction fetch information, and wherein said predicted target branch address and said branch location address are stored in said branch prediction memory; and
a processing unit including a branch execution unit coupled to said instruction decoder, wherein said branch instruction is subsequently executed by said branch execution unit which generates an actual branch location address and a target branch address for said executed branch instruction and said actual branch location address and the target branch address are respectively compared with the branch location address and said predicted target branch address stored in the branch prediction memory, generating a misprediction signal if a branch corresponding to said branch instruction was taken on execution and the compared values are not equal, and said successor index being updated for the instruction block in said instruction cache memory in response to the misprediction signal and updating said fetch program counter value with the target branch address in response to said misprediction signal..Iaddend.
.Iadd.20. An apparatus as claimed in claim 19, wherein said instruction cache memory includes an instruction store array, a tag array coupled to said instruction store array, a successor array coupled to said tag array, and a block status array coupled to said successor array..Iaddend.
.Iadd.21. An apparatus as claimed in claim 20, wherein said instruction cache memory further comprises a fetch program counter that includes a PC latch, an incrementer, and a MUX unit..Iaddend.
.Iadd.22. An apparatus as claimed in claim 21, wherein said instruction cache memory further comprises an instruction fetch control circuit coupled to said fetch program counter, wherein said instruction fetch control circuit controls the operation of said MUX unit to selectively load the PC latch with a value generated by said incrementer, a value supplied by said branch control unit, or a reconstructed fetch PC value..Iaddend.
.Iadd.23. An apparatus as claimed in claim 19, wherein said branch prediction memory comprises a branch target FIFO and a branch location FIFO..Iaddend.
.Iadd.24. An apparatus as claimed in claim 23, wherein said branch prediction memory further comprises a target PC comparator coupled to said branch target FIFO and a bus that is coupled to said branch execution unit, and a branch location comparator coupled to said branch location FIFO and a bus that is coupled to said branch execution unit, wherein the output of said target PC comparator and said branch location comparator are coupled to a control circuit..Iaddend.
.Iadd.25. An apparatus for prefetching branch instructions for a processor, comprising:
a. first means for storing a plurality of instruction blocks, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises an index field indicating a succeeding instruction block predicted to be fetched and a branch/no branch prediction;
b. second means for generating and supplying a fetch program counter value to said first means in order to prefetch one of said plurality of instruction blocks stored in said first means as a prefetched instruction block;
c. third means for reading said instruction fetch information of said prefetched instruction block and incrementing said fetch program counter value and supplying said incremented fetch program counter value to said first means if said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates a no branch condition, and updating said fetch program counter value with said succeeding instruction block stored in said instruction fetch information of said prefetched instruction block if said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates a branch condition;
d. fourth means for storing a branch location address and a corresponding predicted target branch address if said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates said branch condition;
e. fifth means for executing a branch instruction contained in said prefetched instruction block and generating an actual target branch address as a result of said execution of said branch instruction;
f. sixth means for comparing said actual target branch address with said predicted target branch address corresponding to said branch instruction stored in said fourth means, wherein when a branch corresponding to said branch instruction was taken on execution and said comparison result indicates that said branch location address stored in said fourth means corresponds to said branch instruction executed by said fifth means and said predicted target branch address is not equivalent to said actual target branch address, sending a first update signal to said first means to replace said index field with said actual target branch address; and
g. seventh means for comparing said branch location address stored in said fourth means with an address of said branch instruction executed by said fifth means and for sending a second update signal to said first means to update said branch/no branch prediction to said no branch condition if said branch corresponding to said branch instruction was not taken on execution and said comparison result indicates that said address of said branch instruction is equal to said branch location address stored in said fourth means..Iaddend.
.Iadd.26. A method of prefetching branch instructions for a processor, comprising the steps of:
a. loading a plurality of instruction blocks into an instruction cache memory, wherein each of said instruction blocks comprises a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises an index field indicating a succeeding instruction block predicted to be fetched and a branch/no branch prediction;
b. generating and supplying a fetch program counter value to said instruction cache memory in order to prefetch one of said plurality of instruction blocks as a prefetched instruction block;
c. reading said instruction fetch information of said prefetched instruction block and incrementing said fetch program counter value if said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates a no branch condition, and updating said fetch program counter value with said succeeding instruction block stored in said instruction fetch information of said prefetched instruction block if said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates a branch condition;
d. storing a branch location address and a corresponding predicted target branch address in a branch prediction memory if said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates said branch condition;
e. executing a branch instruction contained in said prefetched instruction block and generating an actual target branch address as a result of said execution of said branch instruction;
f. comparing said actual target branch address with said predicted target branch address corresponding to said branch instruction stored in said branch prediction memory, wherein when a branch corresponding to said branch instruction was taken on execution and said comparison result indicates that said branch location address stored in said branch prediction memory corresponds to said executed branch instruction and said predicted target branch address is not equivalent to said actual target branch address, sending a first update signal to said instruction cache memory to replace said index field with said actual target branch address for said corresponding branch instruction; and
g. comparing said branch location address stored in said branch prediction memory with an address of said executed branch instruction and for sending a second update signal to said instruction cache memory to update said branch/no branch prediction to said no branch condition if said branch corresponding to said branch instruction was not taken on execution and said comparison result indicates that said address of said branch instruction is equal to said branch location address stored in said branch prediction memory..Iaddend.
.Iadd.27. An apparatus for prefetching instructions for a processor, comprising:
a. an instruction cache memory configured to receive a plurality of instruction blocks, each of said instruction blocks comprising a plurality of instructions and instruction fetch information, wherein said instruction fetch information comprises an index field indicating a succeeding instruction block predicted to be fetched and a branch/no branch prediction;
b. a fetch program counter operatively connected to said instruction cache memory to prefetch one of said plurality of instruction blocks stored in said instruction cache memory as a prefetched instruction block based on a fetch program counter value supplied to said instruction cache memory;
c. an instruction fetch control unit operatively connected to said fetch program counter and said instruction cache memory for reading said instruction fetch information of said prefetched instruction block, wherein said instruction fetch control unit sends a signal to said fetch program counter to increment and supply said fetch program counter value to said instruction cache memory if said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates a no branch condition, and wherein said instruction fetch control unit sends a signal to said fetch program counter to update said fetch program counter value with said succeeding instruction block stored in said instruction fetch information of said prefetched instruction block if said data representing said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates a branch condition;
d. a branch prediction memory coupled to said instruction cache memory for storing a branch location address and a corresponding predicted target branch address if said data representing said branch/no branch prediction stored within said instruction fetch information of said prefetched instruction block indicates said branch condition;
e. an execution unit coupled to said branch prediction memory, wherein when said branch instruction is executed by said execution unit, an actual target branch address is generated, and when a branch corresponding to said branch instruction is taken on execution, said actual target branch address is compared to said predicted target branch address stored within said branch prediction memory and said branch location address is compared with an address of said branch instruction executed by said execution unit, and wherein said index field of said instruction cache memory is updated with said actual target branch address if said actual target branch address is not equivalent to said predicted target branch address or if said branch location address is not equivalent to said address of said branch instruction executed by said execution unit, and
wherein when execution of said branch instruction by said execution unit results in said branch corresponding to said branch instruction not being taken, said address of said branch instruction executed by said execution unit is compared with said branch location address stored in said branch prediction memory and said branch/no branch prediction stored in said instruction cache memory is updated to indicate a no branch condition if said address of said branch instruction executed by said execution unit is equivalent to said branch location address stored in said branch prediction memory..Iaddend.
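Claims 25 through 27 describe the corresponding cache update: a first update signal replaces the block's index field with the actual target when a taken branch was mispredicted, and a second update signal demotes the branch/no branch prediction to the no-branch condition when a branch at the recorded location fell through. Below is a self-contained, hypothetical C sketch of that update; the field encoding is assumed, not taken from the patent.

#include <stdint.h>
#include <stdbool.h>

/* Assumed per-block prediction state held in the instruction cache. */
struct block_prediction {
    uint32_t index_field;     /* succeeding instruction block predicted to be fetched */
    bool     predict_branch;  /* branch/no branch prediction                           */
};

static void apply_update(struct block_prediction *p,
                         bool taken,
                         uint32_t actual_target_addr,
                         uint32_t predicted_target_addr,
                         uint32_t branch_addr,
                         uint32_t stored_branch_location)
{
    if (taken &&
        (actual_target_addr != predicted_target_addr ||
         branch_addr != stored_branch_location)) {
        /* First update signal: retrain the index field with the actual
         * target so the next prefetch of this block follows the branch.  */
        p->index_field    = actual_target_addr;
        p->predict_branch = true;
    } else if (!taken && branch_addr == stored_branch_location) {
        /* Second update signal: the branch fell through, so record the
         * no-branch condition for future prefetches of this block.       */
        p->predict_branch = false;
    }
}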
US08/285,520 1989-06-06 1994-08-04 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache Expired - Lifetime USRE35794E (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/285,520 USRE35794E (en) 1989-06-06 1994-08-04 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/361,870 US5136697A (en) 1989-06-06 1989-06-06 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache
US08/285,520 USRE35794E (en) 1989-06-06 1994-08-04 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US07/361,870 Reissue US5136697A (en) 1989-06-06 1989-06-06 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache

Publications (1)

Publication Number Publication Date
USRE35794E true USRE35794E (en) 1998-05-12

Family

ID=23423749

Family Applications (2)

Application Number Title Priority Date Filing Date
US07/361,870 Ceased US5136697A (en) 1989-06-06 1989-06-06 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache
US08/285,520 Expired - Lifetime USRE35794E (en) 1989-06-06 1994-08-04 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US07/361,870 Ceased US5136697A (en) 1989-06-06 1989-06-06 System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache

Country Status (6)

Country Link
US (2) US5136697A (en)
EP (1) EP0401992B1 (en)
JP (1) JP2889955B2 (en)
AT (1) ATE162897T1 (en)
DE (1) DE69031991T2 (en)
ES (1) ES2111528T3 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6205544B1 (en) * 1998-12-21 2001-03-20 Intel Corporation Decomposition of instructions into branch and sequential code sections
US6516462B1 (en) * 1999-02-17 2003-02-04 Elbrus International Cache miss saving for speculation load operation
US20060036836A1 (en) * 1998-12-31 2006-02-16 Metaflow Technologies, Inc. Block-based branch target buffer
US20080040592A1 (en) * 2006-08-10 2008-02-14 Arm Limited Control of a branch target cache within a data processing system

Families Citing this family (224)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0404068A3 (en) * 1989-06-20 1991-12-27 Fujitsu Limited Branch instruction executing device
US5230068A (en) * 1990-02-26 1993-07-20 Nexgen Microsystems Cache memory system for dynamically altering single cache memory line as either branch target entry or pre-fetch instruction queue based upon instruction sequence
US5163140A (en) * 1990-02-26 1992-11-10 Nexgen Microsystems Two-level branch prediction cache
US5226130A (en) * 1990-02-26 1993-07-06 Nexgen Microsystems Method and apparatus for store-into-instruction-stream detection and maintaining branch prediction cache consistency
US5317718A (en) * 1990-03-27 1994-05-31 Digital Equipment Corporation Data processing system and method with prefetch buffers
CA2045791A1 (en) * 1990-06-29 1991-12-30 Richard Lee Sites Branch performance in high speed processor
US5530941A (en) * 1990-08-06 1996-06-25 Ncr Corporation System and method for prefetching data from a main computer memory into a cache memory
WO1992006426A1 (en) * 1990-10-09 1992-04-16 Nexgen Microsystems Method and apparatus for parallel decoding of instructions with branch prediction look-up
JP2535252B2 (en) * 1990-10-17 1996-09-18 三菱電機株式会社 Parallel processor
JP2532300B2 (en) * 1990-10-17 1996-09-11 三菱電機株式会社 Instruction supply device in parallel processing device
US5446849A (en) * 1990-11-30 1995-08-29 Kabushiki Kaisha Toshiba Electronic computer which executes squash branching
US5265213A (en) * 1990-12-10 1993-11-23 Intel Corporation Pipeline system for executing predicted branch target instruction in a cycle concurrently with the execution of branch instruction
US5414822A (en) * 1991-04-05 1995-05-09 Kabushiki Kaisha Toshiba Method and apparatus for branch prediction using branch prediction table with improved branch prediction effectiveness
US5454089A (en) * 1991-04-17 1995-09-26 Intel Corporation Branch look ahead adder for use in an instruction pipeline sequencer with multiple instruction decoding
US5630157A (en) * 1991-06-13 1997-05-13 International Business Machines Corporation Computer organization for multiple and out-of-order execution of condition code testing and setting instructions
US5493687A (en) 1991-07-08 1996-02-20 Seiko Epson Corporation RISC microprocessor architecture implementing multiple typed register sets
US5539911A (en) 1991-07-08 1996-07-23 Seiko Epson Corporation High-performance, superscalar-based computer system with out-of-order instruction execution
ATE200357T1 (en) 1991-07-08 2001-04-15 Seiko Epson Corp RISC PROCESSOR WITH STRETCHABLE ARCHITECTURE
JP2875909B2 (en) * 1991-07-12 1999-03-31 三菱電機株式会社 Parallel processing unit
JP2773471B2 (en) * 1991-07-24 1998-07-09 日本電気株式会社 Information processing device
US5649097A (en) * 1991-10-25 1997-07-15 International Business Machines Corporation Synchronizing a prediction RAM
WO1993018459A1 (en) * 1992-03-06 1993-09-16 Rambus Inc. Prefetching into a cache to minimize main memory access time and cache size in a computer system
DE69311330T2 (en) 1992-03-31 1997-09-25 Seiko Epson Corp COMMAND SEQUENCE PLANNING FROM A RISC SUPER SCALAR PROCESSOR
KR100309566B1 (en) * 1992-04-29 2001-12-15 리패치 Method and apparatus for grouping multiple instructions, issuing grouped instructions concurrently, and executing grouped instructions in a pipeline processor
WO1993022722A1 (en) 1992-05-01 1993-11-11 Seiko Epson Corporation A system and method for retiring instructions in a superscalar microprocessor
EP0583089B1 (en) * 1992-08-12 2000-01-26 Advanced Micro Devices, Inc. Instruction decoder
EP0586057B1 (en) * 1992-08-31 2000-03-01 Sun Microsystems, Inc. Rapid instruction (pre)fetching and dispatching using prior (pre)fetch predictive annotations
US5337415A (en) * 1992-12-04 1994-08-09 Hewlett-Packard Company Predecoding instructions for supercalar dependency indicating simultaneous execution for increased operating frequency
US5628021A (en) 1992-12-31 1997-05-06 Seiko Epson Corporation System and method for assigning tags to control instruction processing in a superscalar processor
JP3531166B2 (en) 1992-12-31 2004-05-24 セイコーエプソン株式会社 Register renaming system and method
US5898882A (en) * 1993-01-08 1999-04-27 International Business Machines Corporation Method and system for enhanced instruction dispatch in a superscalar processor system utilizing independently accessed intermediate storage
US5467473A (en) * 1993-01-08 1995-11-14 International Business Machines Corporation Out of order instruction load and store comparison
US5367703A (en) * 1993-01-08 1994-11-22 International Business Machines Corporation Method and system for enhanced branch history prediction accuracy in a superscalar processor system
US5696958A (en) * 1993-01-11 1997-12-09 Silicon Graphics, Inc. Method and apparatus for reducing delays following the execution of a branch instruction in an instruction pipeline
US5537573A (en) * 1993-05-28 1996-07-16 Rambus, Inc. Cache system and method for prefetching of data
US5826069A (en) * 1993-09-27 1998-10-20 Intel Corporation Having write merge and data override capability for a superscalar processing device
US5826094A (en) * 1993-09-30 1998-10-20 Intel Corporation Register alias table update to indicate architecturally visible state
US5740393A (en) * 1993-10-15 1998-04-14 Intel Corporation Instruction pointer limits in processor that performs speculative out-of-order instruction execution
US5748976A (en) * 1993-10-18 1998-05-05 Amdahl Corporation Mechanism for maintaining data coherency in a branch history instruction cache
US5878245A (en) 1993-10-29 1999-03-02 Advanced Micro Devices, Inc. High performance load/store functional unit and data cache
US5689672A (en) * 1993-10-29 1997-11-18 Advanced Micro Devices, Inc. Pre-decoded instruction cache and method therefor particularly suitable for variable byte-length instructions
DE69429061T2 (en) * 1993-10-29 2002-07-18 Advanced Micro Devices Inc Superskalarmikroprozessoren
DE69427265T2 (en) 1993-10-29 2002-05-02 Advanced Micro Devices Inc Superskalarbefehlsdekoder
DE69434669T2 (en) * 1993-10-29 2006-10-12 Advanced Micro Devices, Inc., Sunnyvale Speculative command queue for variable byte length commands
EP0651332B1 (en) * 1993-10-29 2001-07-18 Advanced Micro Devices, Inc. Linearly addressable microprocessor cache
US5630082A (en) * 1993-10-29 1997-05-13 Advanced Micro Devices, Inc. Apparatus and method for instruction queue scanning
US5574928A (en) * 1993-10-29 1996-11-12 Advanced Micro Devices, Inc. Mixed integer/floating point processor core for a superscalar microprocessor with a plurality of operand buses for transferring operand segments
US5500943A (en) * 1993-11-02 1996-03-19 Motorola, Inc. Data processor with rename buffer and FIFO buffer for in-order instruction completion
US6128721A (en) * 1993-11-17 2000-10-03 Sun Microsystems, Inc. Temporary pipeline register file for a superpipelined superscalar processor
US6079014A (en) * 1993-12-02 2000-06-20 Intel Corporation Processor that redirects an instruction fetch pipeline immediately upon detection of a mispredicted branch while committing prior instructions to an architectural state
US5604909A (en) 1993-12-15 1997-02-18 Silicon Graphics Computer Systems, Inc. Apparatus for processing instructions in a computing system
IE940855A1 (en) * 1993-12-20 1995-06-28 Motorola Inc Data processor with speculative instruction fetching and¹method of operation
US5517651A (en) * 1993-12-29 1996-05-14 Intel Corporation Method and apparatus for loading a segment register in a microprocessor capable of operating in multiple modes
US6393550B1 (en) * 1993-12-30 2002-05-21 Intel Corporation Method and apparatus for pipeline streamlining where resources are immediate or certainly retired
US5680565A (en) * 1993-12-30 1997-10-21 Intel Corporation Method and apparatus for performing page table walks in a microprocessor capable of processing speculative instructions
US6101597A (en) * 1993-12-30 2000-08-08 Intel Corporation Method and apparatus for maximum throughput scheduling of dependent operations in a pipelined processor
US5721857A (en) * 1993-12-30 1998-02-24 Intel Corporation Method and apparatus for saving the effective address of floating point memory operations in an out-of-order microprocessor
TW253946B (en) * 1994-02-04 1995-08-11 Ibm Data processor with branch prediction and method of operation
US5553256A (en) * 1994-02-28 1996-09-03 Intel Corporation Apparatus for pipeline streamlining where resources are immediate or certainly retired
US5687338A (en) * 1994-03-01 1997-11-11 Intel Corporation Method and apparatus for maintaining a macro instruction for refetching in a pipelined processor
TW260765B (en) * 1994-03-31 1995-10-21 Ibm
TW353732B (en) * 1994-03-31 1999-03-01 Ibm Processing system and method of operation
US5559976A (en) * 1994-03-31 1996-09-24 International Business Machines Corporation System for instruction completion independent of result write-back responsive to both exception free completion of execution and completion of all logically prior instructions
US5546599A (en) * 1994-03-31 1996-08-13 International Business Machines Corporation Processing system and method of operation for processing dispatched instructions with detected exceptions
JPH07281893A (en) * 1994-04-15 1995-10-27 Internatl Business Mach Corp <Ibm> Processing system and arithmetic method
US5644779A (en) * 1994-04-15 1997-07-01 International Business Machines Corporation Processing system and method of operation for concurrent processing of branch instructions with cancelling of processing of a branch instruction
US5590352A (en) * 1994-04-26 1996-12-31 Advanced Micro Devices, Inc. Dependency checking and forwarding of variable width operands
US5689693A (en) * 1994-04-26 1997-11-18 Advanced Micro Devices, Inc. Range finding circuit for selecting a consecutive sequence of reorder buffer entries using circular carry lookahead
US5696955A (en) 1994-06-01 1997-12-09 Advanced Micro Devices, Inc. Floating point stack and exchange instruction
US5632023A (en) * 1994-06-01 1997-05-20 Advanced Micro Devices, Inc. Superscalar microprocessor including flag operand renaming and forwarding apparatus
US5649225A (en) * 1994-06-01 1997-07-15 Advanced Micro Devices, Inc. Resynchronization of a superscalar processor
US5559975A (en) * 1994-06-01 1996-09-24 Advanced Micro Devices, Inc. Program counter update mechanism
US5561782A (en) * 1994-06-30 1996-10-01 Intel Corporation Pipelined cache system having low effective latency for nonsequential accesses
US5535346A (en) * 1994-07-05 1996-07-09 Motorola, Inc. Data processor with future file with parallel update and method of operation
US5623615A (en) * 1994-08-04 1997-04-22 International Business Machines Corporation Circuit and method for reducing prefetch cycles on microprocessors
EP0698884A1 (en) * 1994-08-24 1996-02-28 Advanced Micro Devices, Inc. Memory array for microprocessor cache
US5928357A (en) * 1994-09-15 1999-07-27 Intel Corporation Circuitry and method for performing branching without pipeline delay
US5721695A (en) * 1994-10-17 1998-02-24 Advanced Micro Devices, Inc. Simulation by emulating level sensitive latches with edge trigger latches
US5603045A (en) * 1994-12-16 1997-02-11 Vlsi Technology, Inc. Microprocessor system having instruction cache with reserved branch target section
US5878244A (en) * 1995-01-25 1999-03-02 Advanced Micro Devices, Inc. Reorder buffer configured to allocate storage capable of storing results corresponding to a maximum number of concurrently receivable instructions regardless of a number of instructions received
US5901302A (en) * 1995-01-25 1999-05-04 Advanced Micro Devices, Inc. Superscalar microprocessor having symmetrical, fixed issue positions each configured to execute a particular subset of instructions
US6237082B1 (en) 1995-01-25 2001-05-22 Advanced Micro Devices, Inc. Reorder buffer configured to allocate storage for instruction results corresponding to predefined maximum number of concurrently receivable instructions independent of a number of instructions received
US5903741A (en) * 1995-01-25 1999-05-11 Advanced Micro Devices, Inc. Method of allocating a fixed reorder buffer storage line for execution results regardless of a number of concurrently dispatched instructions
US5933860A (en) * 1995-02-10 1999-08-03 Digital Equipment Corporation Multiprobe instruction cache with instruction-based probe hint generation and training whereby the cache bank or way to be accessed next is predicted
US5708788A (en) * 1995-03-03 1998-01-13 Fujitsu, Ltd. Method for adjusting fetch program counter in response to the number of instructions fetched and issued
US5737550A (en) * 1995-03-28 1998-04-07 Advanced Micro Devices, Inc. Cache memory to processor bus interface and method thereof
US5822574A (en) * 1995-04-12 1998-10-13 Advanced Micro Devices, Inc. Functional unit with a pointer for mispredicted resolution, and a superscalar microprocessor employing the same
US5764946A (en) * 1995-04-12 1998-06-09 Advanced Micro Devices Superscalar microprocessor employing a way prediction unit to predict the way of an instruction fetch address and to concurrently provide a branch prediction address corresponding to the fetch address
US5875324A (en) * 1995-06-07 1999-02-23 Advanced Micro Devices, Inc. Superscalar microprocessor which delays update of branch prediction information in response to branch misprediction until a subsequent idle clock
US5968169A (en) * 1995-06-07 1999-10-19 Advanced Micro Devices, Inc. Superscalar microprocessor stack structure for judging validity of predicted subroutine return addresses
US5878255A (en) * 1995-06-07 1999-03-02 Advanced Micro Devices, Inc. Update unit for providing a delayed update to a branch prediction array
US6112019A (en) * 1995-06-12 2000-08-29 Georgia Tech Research Corp. Distributed instruction queue
US5828895A (en) * 1995-09-20 1998-10-27 International Business Machines Corporation Methods and system for predecoding instructions in a superscalar data processing system
KR100384213B1 (en) * 1995-10-06 2003-08-19 어드밴스트 마이크로 디바이시즈 인코퍼레이티드 A processing system, selection circuit and methods for identifying first or second objects of selected types in a sequential list
EP0870228B1 (en) 1995-10-06 2003-08-13 Advanced Micro Devices, Inc. Unified multi-function operation scheduler for out-of-order execution in a superscalar processor
US5884059A (en) * 1996-01-26 1999-03-16 Advanced Micro Devices, Inc. Unified multi-function operation scheduler for out-of-order execution in a superscalar processor
GB9521980D0 (en) * 1995-10-26 1996-01-03 Sgs Thomson Microelectronics Branch target buffer
US5881278A (en) * 1995-10-30 1999-03-09 Advanced Micro Devices, Inc. Return address prediction system which adjusts the contents of return stack storage to enable continued prediction after a mispredicted branch
US5796974A (en) * 1995-11-07 1998-08-18 Advanced Micro Devices, Inc. Microcode patching apparatus and method
US5835744A (en) * 1995-11-20 1998-11-10 Advanced Micro Devices, Inc. Microprocessor configured to swap operands in order to minimize dependency checking logic
US5713039A (en) * 1995-12-05 1998-01-27 Advanced Micro Devices, Inc. Register file having multiple register storages for storing data from multiple data streams
US5864707A (en) * 1995-12-11 1999-01-26 Advanced Micro Devices, Inc. Superscalar microprocessor configured to predict return addresses from a return stack storage
US5732235A (en) * 1996-01-25 1998-03-24 International Business Machines Corporation Method and system for minimizing the number of cycles required to execute semantic routines
US5742805A (en) * 1996-02-15 1998-04-21 Fujitsu Ltd. Method and apparatus for a single history register based branch predictor in a superscalar microprocessor
US5752259A (en) * 1996-03-26 1998-05-12 Advanced Micro Devices, Inc. Instruction cache configured to provide instructions to a microprocessor having a clock cycle time less than a cache access time of said instruction cache
US6108769A (en) * 1996-05-17 2000-08-22 Advanced Micro Devices, Inc. Dependency table for reducing dependency checking hardware
US5835511A (en) * 1996-05-17 1998-11-10 Advanced Micro Devices, Inc. Method and mechanism for checking integrity of byte enable signals
US5918056A (en) * 1996-05-17 1999-06-29 Advanced Micro Devices, Inc. Segmentation suspend mode for real-time interrupt support
US5761490A (en) * 1996-05-28 1998-06-02 Hewlett-Packard Company Changing the meaning of a pre-decode bit in a cache memory depending on branch prediction mode
US5903740A (en) * 1996-07-24 1999-05-11 Advanced Micro Devices, Inc. Apparatus and method for retiring instructions in excess of the number of accessible write ports
US6049863A (en) * 1996-07-24 2000-04-11 Advanced Micro Devices, Inc. Predecoding technique for indicating locations of opcode bytes in variable byte-length instructions within a superscalar microprocessor
US5900013A (en) * 1996-07-26 1999-05-04 Advanced Micro Devices, Inc. Dual comparator scheme for detecting a wrap-around condition and generating a cancel signal for removing wrap-around buffer entries
US5946468A (en) * 1996-07-26 1999-08-31 Advanced Micro Devices, Inc. Reorder buffer having an improved future file for storing speculative instruction execution results
US5915110A (en) * 1996-07-26 1999-06-22 Advanced Micro Devices, Inc. Branch misprediction recovery in a reorder buffer having a future file
US5872943A (en) * 1996-07-26 1999-02-16 Advanced Micro Devices, Inc. Apparatus for aligning instructions using predecoded shift amounts
US5872951A (en) * 1996-07-26 1999-02-16 Advanced Micro Design, Inc. Reorder buffer having a future file for storing speculative instruction execution results
US5765016A (en) * 1996-09-12 1998-06-09 Advanced Micro Devices, Inc. Reorder buffer configured to store both speculative and committed register states
US5822575A (en) * 1996-09-12 1998-10-13 Advanced Micro Devices, Inc. Branch prediction storage for storing branch prediction information such that a corresponding tag may be routed with the branch instruction
US5794028A (en) * 1996-10-17 1998-08-11 Advanced Micro Devices, Inc. Shared branch prediction structure
US5850543A (en) * 1996-10-30 1998-12-15 Texas Instruments Incorporated Microprocessor with speculative instruction pipelining storing a speculative register value within branch target buffer for use in speculatively executing instructions after a return
US5918044A (en) * 1996-10-31 1999-06-29 International Business Machines Corporation Apparatus and method for instruction fetching using a multi-port instruction cache directory
US5920710A (en) * 1996-11-18 1999-07-06 Advanced Micro Devices, Inc. Apparatus and method for modifying status bits in a reorder buffer with a large speculative state
US5870579A (en) * 1996-11-18 1999-02-09 Advanced Micro Devices, Inc. Reorder buffer including a circuit for selecting a designated mask corresponding to an instruction that results in an exception
US5995749A (en) 1996-11-19 1999-11-30 Advanced Micro Devices, Inc. Branch prediction mechanism employing branch selectors to select a branch prediction
US5954816A (en) * 1996-11-19 1999-09-21 Advanced Micro Devices, Inc. Branch selector prediction
US5978906A (en) * 1996-11-19 1999-11-02 Advanced Micro Devices, Inc. Branch selectors associated with byte ranges within an instruction cache for rapidly identifying branch predictions
US6175906B1 (en) 1996-12-06 2001-01-16 Advanced Micro Devices, Inc. Mechanism for fast revalidation of virtual tags
US5881305A (en) * 1996-12-13 1999-03-09 Advanced Micro Devices, Inc. Register rename stack for a microprocessor
US5870580A (en) * 1996-12-13 1999-02-09 Advanced Micro Devices, Inc. Decoupled forwarding reorder buffer configured to allocate storage in chunks for instructions having unresolved dependencies
US5983321A (en) * 1997-03-12 1999-11-09 Advanced Micro Devices, Inc. Cache holding register for receiving instruction packets and for providing the instruction packets to a predecode unit and instruction cache
US5862065A (en) * 1997-02-13 1999-01-19 Advanced Micro Devices, Inc. Method and circuit for fast generation of zero flag condition code in a microprocessor-based computer
US5768555A (en) 1997-02-20 1998-06-16 Advanced Micro Devices, Inc. Reorder buffer employing last in buffer and last in line bits
US5974538A (en) * 1997-02-21 1999-10-26 Wilmot, Ii; Richard Byron Method and apparatus for annotating operands in a computer system with source instruction identifiers
US6141740A (en) * 1997-03-03 2000-10-31 Advanced Micro Devices, Inc. Apparatus and method for microcode patching for generating a next address
US6233672B1 (en) 1997-03-06 2001-05-15 Advanced Micro Devices, Inc. Piping rounding mode bits with floating point instructions to eliminate serialization
US5852727A (en) * 1997-03-10 1998-12-22 Advanced Micro Devices, Inc. Instruction scanning unit for locating instructions via parallel scanning of start and end byte information
US5968163A (en) 1997-03-10 1999-10-19 Advanced Micro Devices, Inc. Microcode scan unit for scanning microcode instructions using predecode data
US5850532A (en) * 1997-03-10 1998-12-15 Advanced Micro Devices, Inc. Invalid instruction scan unit for detecting invalid predecode data corresponding to instructions being fetched
US5859992A (en) * 1997-03-12 1999-01-12 Advanced Micro Devices, Inc. Instruction alignment using a dispatch list and a latch list
US5859998A (en) * 1997-03-19 1999-01-12 Advanced Micro Devices, Inc. Hierarchical microcode implementation of floating point instructions for a microprocessor
US5930492A (en) * 1997-03-19 1999-07-27 Advanced Micro Devices, Inc. Rapid pipeline control using a control word and a steering word
US5828873A (en) * 1997-03-19 1998-10-27 Advanced Micro Devices, Inc. Assembly queue for a floating point unit
US5887185A (en) * 1997-03-19 1999-03-23 Advanced Micro Devices, Inc. Interface for coupling a floating point unit to a reorder buffer
US5987599A (en) * 1997-03-28 1999-11-16 Intel Corporation Target instructions prefetch cache
US5987235A (en) * 1997-04-04 1999-11-16 Advanced Micro Devices, Inc. Method and apparatus for predecoding variable byte length instructions for fast scanning of instructions
US5901076A (en) * 1997-04-16 1999-05-04 Advanced Micro Designs, Inc. Ripple carry shifter in a floating point arithmetic unit of a microprocessor
US6003128A (en) * 1997-05-01 1999-12-14 Advanced Micro Devices, Inc. Number of pipeline stages and loop length related counter differential based end-loop prediction
US6122729A (en) * 1997-05-13 2000-09-19 Advanced Micro Devices, Inc. Prefetch buffer which stores a pointer indicating an initial predecode position
US5845101A (en) * 1997-05-13 1998-12-01 Advanced Micro Devices, Inc. Prefetch buffer for storing instructions prior to placing the instructions in an instruction cache
US6016543A (en) * 1997-05-14 2000-01-18 Mitsubishi Denki Kabushiki Kaisha Microprocessor for controlling the conditional execution of instructions
US5940602A (en) * 1997-06-11 1999-08-17 Advanced Micro Devices, Inc. Method and apparatus for predecoding variable byte length instructions for scanning of a number of RISC operations
US6073230A (en) * 1997-06-11 2000-06-06 Advanced Micro Devices, Inc. Instruction fetch unit configured to provide sequential way prediction for sequential instruction fetches
US5872946A (en) * 1997-06-11 1999-02-16 Advanced Micro Devices, Inc. Instruction alignment unit employing dual instruction queues for high frequency instruction dispatch
US6009511A (en) * 1997-06-11 1999-12-28 Advanced Micro Devices, Inc. Apparatus and method for tagging floating point operands and results for rapid detection of special floating point numbers
US5983337A (en) * 1997-06-12 1999-11-09 Advanced Micro Devices, Inc. Apparatus and method for patching an instruction by providing a substitute instruction or instructions from an external memory responsive to detecting an opcode of the instruction
US5933629A (en) * 1997-06-12 1999-08-03 Advanced Micro Devices, Inc. Apparatus and method for detecting microbranches early
US5898865A (en) * 1997-06-12 1999-04-27 Advanced Micro Devices, Inc. Apparatus and method for predicting an end of loop for string instructions
US5933626A (en) * 1997-06-12 1999-08-03 Advanced Micro Devices, Inc. Apparatus and method for tracing microprocessor instructions
US6012125A (en) * 1997-06-20 2000-01-04 Advanced Micro Devices, Inc. Superscalar microprocessor including a decoded instruction cache configured to receive partially decoded instructions
US5978901A (en) * 1997-08-21 1999-11-02 Advanced Micro Devices, Inc. Floating point and multimedia unit with data type reclassification capability
US6101577A (en) * 1997-09-15 2000-08-08 Advanced Micro Devices, Inc. Pipelined instruction cache and branch prediction mechanism therefor
US6185676B1 (en) * 1997-09-30 2001-02-06 Intel Corporation Method and apparatus for performing early branch prediction in a microprocessor
US5931943A (en) * 1997-10-21 1999-08-03 Advanced Micro Devices, Inc. Floating point NaN comparison
US6032252A (en) * 1997-10-28 2000-02-29 Advanced Micro Devices, Inc. Apparatus and method for efficient loop control in a superscalar microprocessor
US5974542A (en) * 1997-10-30 1999-10-26 Advanced Micro Devices, Inc. Branch prediction unit which approximates a larger number of branch predictions using a smaller number of branch predictions and an alternate target indication
US6230259B1 (en) 1997-10-31 2001-05-08 Advanced Micro Devices, Inc. Transparent extended state save
US6157996A (en) * 1997-11-13 2000-12-05 Advanced Micro Devices, Inc. Processor programably configurable to execute enhanced variable byte length instructions including predicated execution, three operand addressing, and increased register space
US6199154B1 (en) 1997-11-17 2001-03-06 Advanced Micro Devices, Inc. Selecting cache to fetch in multi-level cache system based on fetch address source and pre-fetching additional data to the cache for future access
US6256728B1 (en) 1997-11-17 2001-07-03 Advanced Micro Devices, Inc. Processor configured to selectively cancel instructions from its pipeline responsive to a predicted-taken short forward branch instruction
US6079005A (en) * 1997-11-20 2000-06-20 Advanced Micro Devices, Inc. Microprocessor including virtual address branch prediction and current page register to provide page portion of virtual and physical fetch address
US6154818A (en) * 1997-11-20 2000-11-28 Advanced Micro Devices, Inc. System and method of controlling access to privilege partitioned address space for a model specific register file
US6079003A (en) 1997-11-20 2000-06-20 Advanced Micro Devices, Inc. Reverse TLB for providing branch target address in a microprocessor having a physically-tagged cache
US6516395B1 (en) 1997-11-20 2003-02-04 Advanced Micro Devices, Inc. System and method for controlling access to a privilege-partitioned address space with a fixed set of attributes
US5974432A (en) * 1997-12-05 1999-10-26 Advanced Micro Devices, Inc. On-the-fly one-hot encoding of leading zero count
US5870578A (en) * 1997-12-09 1999-02-09 Advanced Micro Devices, Inc. Workload balancing in a microprocessor for reduced instruction dispatch stalling
US6016545A (en) * 1997-12-16 2000-01-18 Advanced Micro Devices, Inc. Reduced size storage apparatus for storing cache-line-related data in a high frequency microprocessor
US6157986A (en) * 1997-12-16 2000-12-05 Advanced Micro Devices, Inc. Fast linear tag validation unit for use in microprocessor
US6016533A (en) * 1997-12-16 2000-01-18 Advanced Micro Devices, Inc. Way prediction logic for cache array
US6112296A (en) * 1997-12-18 2000-08-29 Advanced Micro Devices, Inc. Floating point stack manipulation using a register map and speculative top of stack values
US6112018A (en) * 1997-12-18 2000-08-29 Advanced Micro Devices, Inc. Apparatus for exchanging two stack registers
US6018798A (en) * 1997-12-18 2000-01-25 Advanced Micro Devices, Inc. Floating point unit using a central window for storing instructions capable of executing multiple instructions in a single clock cycle
US5954814A (en) * 1997-12-19 1999-09-21 Intel Corporation System for using a branch prediction unit to achieve serialization by forcing a branch misprediction to flush a pipeline
US6175908B1 (en) 1998-04-30 2001-01-16 Advanced Micro Devices, Inc. Variable byte-length instructions using state of function bit of second byte of plurality of instructions bytes as indicative of whether first byte is a prefix byte
US6141745A (en) * 1998-04-30 2000-10-31 Advanced Micro Devices, Inc. Functional bit identifying a prefix byte via a particular state regardless of type of instruction
US6119223A (en) * 1998-07-31 2000-09-12 Advanced Micro Devices, Inc. Map unit having rapid misprediction recovery
US6122656A (en) 1998-07-31 2000-09-19 Advanced Micro Devices, Inc. Processor configured to map logical register numbers to physical register numbers using virtual register numbers
US6230262B1 (en) 1998-07-31 2001-05-08 Advanced Micro Devices, Inc. Processor configured to selectively free physical registers upon retirement of instructions
US6230260B1 (en) 1998-09-01 2001-05-08 International Business Machines Corporation Circuit arrangement and method of speculative instruction execution utilizing instruction history caching
US6442681B1 (en) * 1998-12-28 2002-08-27 Bull Hn Information Systems Inc. Pipelined central processor managing the execution of instructions with proximate successive branches in a cache-based data processing system while performing block mode transfer predictions
US6247097B1 (en) * 1999-01-22 2001-06-12 International Business Machines Corporation Aligned instruction cache handling of instruction fetches across multiple predicted branch instructions
DE19945940C2 (en) * 1999-09-24 2002-01-17 Infineon Technologies Ag Method and device for processing conditional jump instructions in a processor with pipeline computer architecture
US6438664B1 (en) 1999-10-27 2002-08-20 Advanced Micro Devices, Inc. Microcode patch device and method for patching microcode using match registers and patch routines
US6442707B1 (en) 1999-10-29 2002-08-27 Advanced Micro Devices, Inc. Alternate fault handler
US6877084B1 (en) 2000-08-09 2005-04-05 Advanced Micro Devices, Inc. Central processing unit (CPU) accessing an extended register set in an extended register mode
US6981132B2 (en) 2000-08-09 2005-12-27 Advanced Micro Devices, Inc. Uniform register addressing using prefix byte
US6732253B1 (en) * 2000-11-13 2004-05-04 Chipwrights Design, Inc. Loop handling for single instruction multiple datapath processor architectures
US6931518B1 (en) 2000-11-28 2005-08-16 Chipwrights Design, Inc. Branching around conditional processing if states of all single instruction multiple datapaths are disabled and the computer program is non-deterministic
US6970985B2 (en) 2002-07-09 2005-11-29 Bluerisc Inc. Statically speculative memory accessing
US7117290B2 (en) * 2003-09-03 2006-10-03 Advanced Micro Devices, Inc. MicroTLB and micro tag for reducing power in a processor
US20050050278A1 (en) * 2003-09-03 2005-03-03 Advanced Micro Devices, Inc. Low power way-predicted cache
US20050114850A1 (en) 2003-10-29 2005-05-26 Saurabh Chheda Energy-focused re-compilation of executables and hardware mechanisms based on compiler-architecture interaction and compiler-inserted control
US7996671B2 (en) 2003-11-17 2011-08-09 Bluerisc Inc. Security of program executables and microprocessors based on compiler-architecture interaction
US8607209B2 (en) 2004-02-04 2013-12-10 Bluerisc Inc. Energy-focused compiler-assisted branch prediction
US8484441B2 (en) * 2004-03-31 2013-07-09 Icera Inc. Apparatus and method for separate asymmetric control processing and data path processing in a configurable dual path processor that supports instructions having different bit widths
US7949856B2 (en) * 2004-03-31 2011-05-24 Icera Inc. Method and apparatus for separate control processing and data path processing in a dual path processor with a shared load/store unit
US9047094B2 (en) 2004-03-31 2015-06-02 Icera Inc. Apparatus and method for separate asymmetric control processing and data path processing in a dual path processor
JP3926809B2 (en) * 2004-07-27 2007-06-06 Fujitsu Ltd. Branch instruction control device and control method
WO2006026510A2 (en) 2004-08-30 2006-03-09 Texas Instruments Incorporated Methods and apparatus for branch prediction and processing of microprocessor instructions and the like
US8578134B1 (en) * 2005-04-04 2013-11-05 Globalfoundries Inc. System and method for aligning change-of-flow instructions in an instruction buffer
US8161252B1 (en) * 2005-08-01 2012-04-17 Nvidia Corporation Memory interface with dynamic selection among mirrored storage locations
US7640422B2 (en) * 2006-08-16 2009-12-29 Qualcomm Incorporated System for reducing number of lookups in a branch target address cache by storing retrieved BTAC addresses into instruction cache
US20080126766A1 (en) 2006-11-03 2008-05-29 Saurabh Chheda Securing microprocessors against information leakage and physical tampering
US20080154379A1 (en) * 2006-12-22 2008-06-26 Musculoskeletal Transplant Foundation Interbody fusion hybrid graft
EP2243098A2 (en) * 2008-02-11 2010-10-27 Nxp B.V. Method of program obfuscation and processing device for executing obfuscated programs
US8181005B2 (en) * 2008-09-05 2012-05-15 Advanced Micro Devices, Inc. Hybrid branch prediction device with sparse and dense prediction caches
US20110093658A1 (en) * 2009-10-19 2011-04-21 Zuraski Jr Gerald D Classifying and segregating branch targets
US9916252B2 (en) 2015-05-19 2018-03-13 Linear Algebra Technologies Limited Systems and methods for addressing a cache with split-indexes
US10719321B2 (en) 2015-09-19 2020-07-21 Microsoft Technology Licensing, LLC Prefetching instruction blocks
US20180210734A1 (en) * 2017-01-26 2018-07-26 Alibaba Group Holding Limited Methods and apparatus for processing self-modifying codes
CN116881180A (en) 2017-05-19 2023-10-13 Movidius Ltd. Methods, systems, and apparatus for reducing memory latency when fetching pixel cores
US10540181B2 (en) 2018-01-19 2020-01-21 Marvell World Trade Ltd. Managing branch prediction information for different contexts
US10599437B2 (en) 2018-01-19 2020-03-24 Marvell World Trade Ltd. Managing obscured branch prediction information
CN111221579B (en) * 2018-11-27 2022-04-26 Spreadtrum Communications (Shanghai) Co., Ltd. Method and system for predicting load instruction execution delay
US20220342672A1 (en) * 2021-04-27 2022-10-27 Red Hat, Inc. Rescheduling a load instruction based on past replays

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4200927A (en) * 1978-01-03 1980-04-29 International Business Machines Corporation Multi-instruction stream branch processing mechanism
US4295193A (en) * 1979-06-29 1981-10-13 International Business Machines Corporation Machine for multiple instruction execution
US4430706A (en) * 1980-10-27 1984-02-07 Burroughs Corporation Branch prediction apparatus and method for a data processing system
US4477872A (en) * 1982-01-15 1984-10-16 International Business Machines Corporation Decode history table for conditional branch instructions
US4604691A (en) * 1982-09-07 1986-08-05 Nippon Electric Co., Ltd. Data processing system having branch instruction prefetching performance
US4755966A (en) * 1985-06-28 1988-07-05 Hewlett-Packard Company Bidirectional branch prediction and optimization
US4764861A (en) * 1984-02-08 1988-08-16 Nec Corporation Instruction prefetching device with prediction of a branch destination for each branch count instruction
US4807115A (en) * 1983-10-07 1989-02-21 Cornell Research Foundation, Inc. Instruction issuing mechanism for processors with multiple functional units
US4858104A (en) * 1987-01-16 1989-08-15 Mitsubishi Denki Kabushiki Kaisha Preceding instruction address based branch prediction in a pipelined processor
US4860197A (en) * 1987-07-31 1989-08-22 Prime Computer, Inc. Branch cache system with instruction boundary determination independent of parcel boundary
US4894772A (en) * 1987-07-31 1990-01-16 Prime Computer, Inc. Method and apparatus for qualifying branch cache entries
US4984154A (en) * 1982-11-17 1991-01-08 Nec Corporation Instruction prefetching device with prediction of a branch destination address

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4943908A (en) * 1987-12-02 1990-07-24 International Business Machines Corporation Multiple branch analyzer for prefetching cache lines

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6205544B1 (en) * 1998-12-21 2001-03-20 Intel Corporation Decomposition of instructions into branch and sequential code sections
US20060036836A1 (en) * 1998-12-31 2006-02-16 Metaflow Technologies, Inc. Block-based branch target buffer
US7552314B2 (en) * 1998-12-31 2009-06-23 Stmicroelectronics, Inc. Fetching all or portion of instructions in memory line up to branch instruction based on branch prediction and size indicator stored in branch target buffer indexed by fetch address
US20100017586A1 (en) * 1998-12-31 2010-01-21 Anatoly Gelman Fetching all or portion of instructions in memory line up to branch instruction based on branch prediction and size indicator stored in branch target buffer indexed by fetch address
US8171260B2 (en) 1998-12-31 2012-05-01 Stmicroelectronics, Inc. Fetching all or portion of instructions in memory line up to branch instruction based on branch prediction and size indicator stored in branch target buffer indexed by fetch address
US6516462B1 (en) * 1999-02-17 2003-02-04 Elbrus International Cache miss saving for speculation load operation
US20080040592A1 (en) * 2006-08-10 2008-02-14 Arm Limited Control of a branch target cache within a data processing system
US7447883B2 (en) * 2006-08-10 2008-11-04 Arm Limited Allocation of branch target cache resources in dependence upon program instructions within an instruction queue

Also Published As

Publication number Publication date
ES2111528T3 (en) 1998-03-16
JP2889955B2 (en) 1999-05-10
EP0401992A2 (en) 1990-12-12
US5136697A (en) 1992-08-04
ATE162897T1 (en) 1998-02-15
EP0401992B1 (en) 1998-01-28
DE69031991D1 (en) 1998-03-05
DE69031991T2 (en) 1998-08-20
JPH0334024A (en) 1991-02-14
EP0401992A3 (en) 1992-12-30

Similar Documents

Publication Publication Date Title
USRE35794E (en) System for reducing delay for execution subsequent to correctly predicted branch instruction using fetch information stored with each block of instructions in cache
US5848269A (en) Branch predicting mechanism for enhancing accuracy in branch prediction by reference to data
US6609194B1 (en) Apparatus for performing branch target address calculation based on branch type
US5805877A (en) Data processor with branch target address cache and method of operation
JP3182740B2 (en) A method and system for fetching non-consecutive instructions in a single clock cycle.
US5687349A (en) Data processor with branch target address cache and subroutine return address cache and method of operation
US7437543B2 (en) Reducing the fetch time of target instructions of a predicted taken branch instruction
US5903750A (en) Dynamic branch prediction for branch instructions with multiple targets
US6185676B1 (en) Method and apparatus for performing early branch prediction in a microprocessor
US6263427B1 (en) Branch prediction mechanism
US6247122B1 (en) Method and apparatus for performing branch prediction combining static and dynamic branch predictors
US5761723A (en) Data processor with branch prediction and method of operation
US6684323B2 (en) Virtual condition codes
US5774710A (en) Cache line branch prediction scheme that shares among sets of a set associative cache
US5935238A (en) Selection from multiple fetch addresses generated concurrently including predicted and actual target by control-flow instructions in current and previous instruction bundles
IE940855A1 (en) Data processor with speculative instruction fetching and method of operation
US5964869A (en) Instruction fetch mechanism with simultaneous prediction of control-flow instructions
US5761490A (en) Changing the meaning of a pre-decode bit in a cache memory depending on branch prediction mode
US20200081717A1 (en) Branch prediction circuitry
KR20090042303A (en) Associate cached branch information with the last granularity of branch instruction in variable length instruction set
US8250344B2 (en) Methods and apparatus for dynamic prediction by software
US20070174592A1 (en) Early conditional selection of an operand
US5748976A (en) Mechanism for maintaining data coherency in a branch history instruction cache
US6910124B1 (en) Apparatus and method for recovering a link stack from mis-speculation
US20050216713A1 (en) Instruction text controlled selectively stated branches for prediction via a branch target buffer

Legal Events

Date Code Title Description
FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: GLOBALFOUNDRIES INC., CAYMAN ISLANDS

Free format text: AFFIRMATION OF PATENT ASSIGNMENT;ASSIGNOR:ADVANCED MICRO DEVICES, INC.;REEL/FRAME:023120/0426

Effective date: 20090630

AS Assignment

Owner name: GLOBALFOUNDRIES U.S. INC., NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:056987/0001

Effective date: 20201117