US7062767B1 - Method for coordinating information flow between components - Google Patents

Method for coordinating information flow between components

Info

Publication number
US7062767B1
Authority
US
United States
Prior art keywords
command
component
medium
sending
scheduler
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/654,718
Inventor
Dominic Paul McCarthy
Jack Choquette
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Raza Microelectronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US09/654,718
Assigned to SANDCRAFT INCORPORATED (assignment of assignors' interest). Assignors: CHOQUETTE, JACK; MCCARTHY, DOMINIC
Application filed by Raza Microelectronics Inc
Assigned to RAZA MICROELECTRONICS, INC. (assignment of assignors' interest). Assignor: SANDCRAFT, INC.
Application granted
Publication of US7062767B1
Assigned to VENTURE LENDING & LEASING IV, INC. (security interest). Assignor: RAZA MICROELECTRONICS, INC.
Assigned to RMI CORPORATION (change of name). Assignor: RAZA MICROELECTRONICS, INC.
Assigned to NETLOGIC MICROSYSTEMS, INC. (assignment of assignors' interest). Assignor: RMI CORPORATION
Assigned to NETLOGIC MICROSYSTEMS, INC. (release by secured party). Assignor: VENTURE LENDING & LEASING, INC
Assigned to BROADCOM CORPORATION (assignment of assignors' interest). Assignor: NETLOGIC I LLC
Assigned to NETLOGIC I LLC (change of name). Assignor: NETLOGIC MICROSYSTEMS, INC.
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT (patent security agreement). Assignor: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. (assignment of assignors' interest). Assignor: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION (termination and release of security interest in patents). Assignor: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED (merger). Assignor: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED (corrective assignment to correct the execution date previously recorded at reel 047196, frame 0097; the merger is confirmed). Assignor: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Status: Expired - Lifetime (adjusted expiration)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14: Handling requests for interconnection or transfer
    • G06F13/36: Handling requests for interconnection or transfer for access to common bus or bus system


Abstract

A method of efficiently coordinating the communication of data and commands between multiple entities in a system is disclosed, together with a transaction protocol that enables centralized scheduling of chained data transfers in the system.

Description

FIELD OF INVENTION
The invention pertains to the field of coordination of the flow of data between components of an integrated system, particularly multi-step protocols used by systems with multiple functional units.
BACKGROUND
The ability to reduce the physical size of integrated circuits (chips) has led to more combinations of functions on a single chip. Design methodologies have arisen that teach combining pre-existing functional components using standardized bus-based interconnection techniques. These bus-based interconnection techniques are inherently inefficient and unable to scale as system complexity increases.
One limiting factor of bus-based interconnection techniques is bus contention. Bus contention occurs when multiple components attempt to use a shared bus simultaneously. Arbitration protocols determine the allocation of the shared bus. These allocation protocols are performed in real time, on demand. To avoid increasing the latency of access to the bus, the allocation protocols must be kept simple so that they can be computed rapidly. Many allocation techniques are well known in the art, including first-come first-served, round-robin, rate monotonic, various weighted prioritization schemes and others.
Another limiting factor of bus-based interconnection techniques is lack of scalability. There are two well-known techniques for scaling bus architectures.
One scaling technique is to increase the performance of a single bus through higher clock rates and increased width. This technique is expensive. The physical realization of a bus in a particular manufacturing process serves to place an upper limit on its clock rate. Additional performance increases require a wider bus, consuming greater amounts of expensive chip area. Furthermore, wide buses are ineffective on small transfers, serving to limit performance increases. An additional burden of this scaling technique is that every component connected to the bus requires redesign.
Another scaling technique is multiple buses. This technique is difficult in practice. A principal difficulty is scheduling transfers across the multiple buses. Similar to the case of a single bus, the scheduling algorithm must be simple in order to facilitate its computation to avoid introducing delay. The required simplicity of the algorithm reduces its effectiveness.
Another limiting factor in bus-based methodologies is the lack of a unified scheduling capability. The existing methodologies lack a coherent mechanism for an individual component to adapt its communication requirements to the capabilities of the system in which it is placed. System designers are forced to create ad-hoc mechanisms to regulate the communication demands of individual components and to integrate them into the overall system.
A communications technique is required that is efficient and scales well as system complexity increases.
SUMMARY OF THE INVENTION
An efficient technique is provided which moves the decisions about the scheduling of transfers from individual components with an arbitration mechanism to one or more centralized scheduling processors. Scheduling decisions are made in advance by the processors and then communicated to the participating components using a transaction protocol. The transaction protocol allows the scheduling processor to create chained sequences of transfers. The elements of each chained sequence can then be performed by the individual components without additional communication with the scheduling processor.
DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates the architecture of the communication system.
FIG. 2 illustrates a write command.
FIG. 3 illustrates a write command with notification command.
FIG. 4 illustrates a request for write with notification command.
FIG. 5 illustrates a wait for condition command.
FIG. 6 illustrates a chained command sequence.
FIG. 7 illustrates a forwarded command sequence.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
A method for coordinating information flow between components is disclosed. In the following descriptions, numerous specific details are set forth, such as the specific rendering of the implementation, in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known circuits, control logic and coding techniques have not been shown in detail, in order to avoid unnecessarily obscuring the present invention.
FIG. 1 depicts the architecture of a system. A plurality of components 20, 30 and 40 are connected to routing fabric 10. Routing fabric 10 provides the communication pathways between components 20, 30 and 40. In the preferred embodiment, routing fabric 10 is point-to-point; however, it can be constructed using any interconnection scheme. Interconnection schemes found in the art include shared bus, multiple shared buses, hierarchical buses, point-to-point, banyan tree and others. It should be understood that the principles of the disclosed invention are equally applicable to systems with more than three components and zero or more scheduling processors.
Components perform computations on data. Many forms of components are well known in the art including vector processors, MPEG encoders and decoders, audio decoders, graphics rasterizers, network processing engines, digital signal processing engines and others. Data and commands are transferred between components via routing fabric 10. Component computations and inter-component transfers are the principal system resources that must be scheduled for the system to operate efficiently. The usage of resources is directed by one or more centralized schedules. A centralizing schedule allows computations and transfers between multiple components to be optimized over a time horizon.
One or more of the components is given the responsibility of scheduling. In the preferred embodiment, component 20 performs the scheduling function in addition to any other computations and will be known as schedule processor 20. Schedule processor 20 determines the allocation of resources over a time horizon, creating a schedule. Schedule processor 20 may use any of the widely known scheduling methods including static, dynamic, adaptive, goal-directed, pre-emptive, rate monotonic and others. In the preferred embodiment, schedule processor 20 is a microprocessor executing a program. One alternate embodiment of schedule processor 20 is a state machine following one or more fixed schedules provided by a designer.
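The alternate embodiment above, a state machine following one or more fixed schedules, can be sketched as follows. This is an illustrative assumption about what such a fixed schedule might contain; the entries and names are not taken from the patent.

```python
from itertools import cycle

# A schedule processor realized as a trivial state machine that steps through
# a fixed, designer-provided schedule. The schedule entries are hypothetical.
fixed_schedule = [
    "transfer: component_30 -> component_40",
    "compute:  component_40",
    "transfer: component_40 -> component_30",
]

scheduler = cycle(fixed_schedule)   # repeat the fixed schedule indefinitely

# Each call to next() advances the state machine one step and yields the
# next scheduled operation; after three steps it wraps back to the start.
issued = [next(scheduler) for _ in range(5)]
```

A dynamic or adaptive scheduler would replace the fixed list with a policy computed at run time; the command-issuing interface toward the components could remain the same.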
There are three types of commands: write, request for write, and wait for condition. Each command consists of a transfer of command information and, optionally, data between two components using routing fabric 10. Commands may instruct the receiving component to create and issue a subsequent command to a third component once the initiating command is completed. Commands may instruct the receiving component to perform computation. Commands may instruct the receiving component to perform computation and then issue a subsequent command. Components may receive multiple commands, storing them until they can be performed. All command transfers are unidirectional, allowing the sender to proceed without an acknowledgement from the receiver.
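The three command types described above can be modeled as simple message records. This is a minimal sketch; the field names (`dest`, `ack`, `notify`, and so on) are illustrative assumptions, not the patent's wire format.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Write:
    """Moves data and/or status to a destination component."""
    dest: str                     # destination component
    payload: Any = None           # data, status, or instruction to compute
    notify: Optional[str] = None  # optional second write on completion

@dataclass
class RequestForWrite:
    """Asks a target component to initiate a write to a third component."""
    target: str                   # component asked to perform the write
    dest: str                     # destination address for the target's write
    ack: Optional[str] = None     # optional acknowledge address

@dataclass
class WaitForCondition:
    """Suspends a target component until a condition occurs."""
    target: str
    condition: str                # e.g. completion of a particular write
    notify: Optional[str] = None  # optional status notification write
```

Because all transfers are unidirectional, none of these records carries a reply channel: a sender emits a command and proceeds without waiting for an acknowledgement.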
The write command moves data and/or status between two components. FIG. 2 illustrates the write command. Initiator component 100 sends write command 120 through routing fabric 10 to destination component 110. Write command 120 may convey any combination of data, status or instruction to perform computation to destination component 110.
The write command may, upon completion, optionally generate a second write command. The second write command may be used to notify another component of the completion status of the first write command. FIG. 3 illustrates a write with notification sequence. Initiator component 300 sends the first write command 330 through routing fabric 10 to destination component 310. Upon completion of write command 330, destination component 310 sends the second write command 340 to acknowledge component 320 through routing fabric 10. It may be advantageous for acknowledge component 320 and initiating component 300 to be the same component.
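The write-with-notification sequence of FIG. 3 can be sketched with a dictionary of mailboxes standing in for routing fabric 10. The component names and message layout are illustrative assumptions.

```python
# Mailboxes model the unidirectional routing fabric: a sender deposits a
# command and proceeds without any handshake from the receiver.
mailboxes = {"initiator": [], "destination": [], "acknowledge": []}

def send(dest, msg):
    """Deliver a command over the fabric (no acknowledgement to the sender)."""
    mailboxes[dest].append(msg)

def handle_write(msg):
    """Process a write; if it names a notify address, emit a second write."""
    data = msg["data"]                       # consume the payload
    if msg.get("notify"):
        send(msg["notify"], {"op": "write", "data": "done", "notify": None})
    return data

# Initiator 300 sends the first write, requesting notification back to the
# acknowledge component (which could equally be the initiator itself).
send("destination", {"op": "write", "data": 42, "notify": "acknowledge"})
received = handle_write(mailboxes["destination"].pop())
# The acknowledge component's mailbox now holds the completion notification.
```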
The request for write command issued by an initial component instructs a second component to initiate a write operation to a third component. The completion of the write operation between the second and third components may request initiation of a notification write operation to a fourth component. FIG. 4 illustrates a request for write command sequence. Initiator component 400 sends request for write command 440 through routing fabric 10 to target component 410. Request for write command 440 contains at least operation 470, destination address 480, and optionally acknowledge address 490. Operation 470 directs target component 410 to send write command 450 to destination component 420 through routing fabric 10, using destination address 480. If notification was requested then upon completion of write command 450, destination component 420 sends notification write command 460 to acknowledge component 430 through routing fabric 10, using acknowledge address 490. This sequence does not require four different components: it is possible for one component to participate in the request for write sequence more than once. In some cases, destination component 420 is the same as initiating component 400. In other cases, acknowledge component 430 is the same as initiating component 400. Other combinations of a single component participating in a request for write command sequence more than once are possible.
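The four-role sequence of FIG. 4 can be traced as a log of transfers across the fabric. The component names and the tuple encoding are illustrative assumptions; the point is the relay structure: initiator to target, target to destination, destination to acknowledge.

```python
# Each fabric transfer is recorded as (sender, receiver, command).
fabric_log = []

def request_for_write(initiator, target, dest_addr, ack_addr=None):
    """Initiator asks target to write to dest_addr, optionally with
    a completion notification sent to ack_addr."""
    fabric_log.append((initiator, target, "request_for_write"))
    # The target performs the requested write to the destination.
    fabric_log.append((target, dest_addr, "write"))
    # If notification was requested, the destination writes to the
    # acknowledge address upon completion.
    if ack_addr is not None:
        fabric_log.append((dest_addr, ack_addr, "notification_write"))

request_for_write("initiator_400", "target_410", "dest_420", "ack_430")
```

Passing `dest_addr` or `ack_addr` equal to the initiator's own name reproduces the cases noted above where one component participates in the sequence more than once.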
The wait for condition command issued by a first component instructs a second component to suspend processing until a specific condition occurs. Specific conditions to be awaited by a component include completion of component computation, receipt of a notification write command from another component, receipt of status from other specified components and others. FIG. 5 illustrates a wait for condition command. Initiating component 500 sends a wait for condition command 520 to target component 510 through routing fabric 10. Target component 510 suspends processing of commands until the condition specified in wait for condition command 520 is satisfied. Similar to the write command, the wait for condition command optionally initiates a status notification write operation to a third component (not shown).
Chained sequences of computation by components and data transfer between components can be created by combining write, request for write and wait for condition commands. FIG. 6 illustrates a chained command sequence wherein two blocks of data residing in two components are transferred to a third component for computation. The computation will not begin until both blocks of data have been received. Schedule processor 600 issues four commands. First, request for write command 640 is sent to target component 610. Request for write command 640 directs target component 610 to send write command 670 to destination component 630, providing one block of input data. Second, request for write command 650 is sent to target component 620. Request for write command 650 directs target component 620 to send write command 660 to destination component 630, providing the other block of input data. Third, wait for condition command 680 is sent to destination component 630. Wait for condition command 680 indicates that destination component 630 is to wait until the completion of write command 660. Fourth, wait for condition command 690 is sent to destination component 630. Wait for condition command 690 indicates that destination component 630 is to wait until the completion of write command 670, then begin computation on the input data and send notification write operation 695 to schedule processor 600.
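The FIG. 6 chain can be simulated end to end. The sketch below issues all four commands up front, delivers the two data blocks in a random order to show the chain's insensitivity to ordering, and fires the notification only once both blocks have arrived. Names such as `block_A` and `target_610` are illustrative assumptions.

```python
import random

received_blocks = []
notifications = []

# Schedule processor 600 issues all four commands without waiting for any
# of the specified operations to start or complete:
commands = [
    ("request_for_write", "target_610", "block_A"),  # triggers write 670
    ("request_for_write", "target_620", "block_B"),  # triggers write 660
    ("wait", "block_B"),                             # wait for write 660
    ("wait_then_compute", "block_A"),                # wait for 670, compute, notify
]

# The two writes may complete in either order; shuffle to model fabric
# delay and jitter. The chain is order-insensitive.
writes = [c for c in commands if c[0] == "request_for_write"]
random.shuffle(writes)
for _, _, block in writes:
    received_blocks.append(block)    # write delivers a block to dest 630

# Destination 630 proceeds only once both wait conditions are satisfied,
# then sends notification write 695 back to the schedule processor.
if {"block_A", "block_B"} <= set(received_blocks):
    result = "computed"
    notifications.append(("schedule_processor_600", result))
```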
Due to the ability of components to store commands, schedule processor 20 is able to issue all four commands without waiting for any of the specified operations to be started or completed. Immediately after issuing the four commands, schedule processor 20 can proceed with determining and specifying the next chain of commands to be scheduled. No further communication between schedule processor 20 and components 610, 620 and 630 is required to complete the chained sequence.
The chained sequence operates correctly regardless of the order of execution of the two write commands 660 and 670. This means that the chained sequence is insensitive to issues such as delay and jitter in routing fabric 10. Furthermore, the sequence operates correctly regardless of the sizes of the two blocks of data.
Another capability created by combining write and request for write commands is command forwarding. In command forwarding, a first component may receive a request for write command that it is unable to perform but which could be performed by a second component. The first component issues a second request for write command to the second component, directing the second component to supply the requested data in accordance with the first request for write command. FIG. 7 illustrates an example of command forwarding. Requesting component 700 issues a request for write command 730 to expected source component 710, specifying requesting component 700 as the destination of the write operation. Expected source component 710 determines that actual source component 720 is able to satisfy request for write command 730. Expected source component 710 issues request for write command 740 to actual source component 720, specifying requesting component 700 as the destination of the write operation. Actual source component 720 receives request for write command 740, subsequently issuing write command 750 to provide the requested data to requesting component 700.
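The FIG. 7 forwarding sequence can be sketched as a request for write that is re-issued when the receiver cannot satisfy it, while preserving the original requester as the write destination. Which component holds the data, and all component names, are illustrative assumptions.

```python
fabric = []   # (sender, receiver, command, destination) transfer records
data_at = {"actual_720": "requested_data"}   # only the actual source holds it

def request_for_write(sender, receiver, destination):
    """Deliver a request for write; forward it if the receiver lacks the data."""
    fabric.append((sender, receiver, "request_for_write", destination))
    if receiver not in data_at:
        # Forward: re-issue the request to a component that holds the data,
        # keeping the original requester as the write destination.
        request_for_write(receiver, "actual_720", destination)
    else:
        # The actual source satisfies the request with a write to the
        # original requester.
        fabric.append((receiver, destination, "write", data_at[receiver]))

# Requester 700 asks expected source 710 for data, naming itself as the
# destination; 710 forwards to actual source 720, which supplies the data.
request_for_write("requester_700", "expected_710", "requester_700")
```

Because the destination travels with the request, the forwarded write bypasses the expected source entirely; the requester need not know where the data actually resides.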
Combinations of write, request for write and wait for condition commands, creating chained sequences of commands, provide schedule processor 20 with the capability of coordinating computations and inter-component data transfers in a system. Multiple chained command sequences can be issued and executed simultaneously in the system. Combining chained sequences of differing lengths and differing utilization of system resources to achieve system goals is a task for schedule processor 20. Command chaining reduces the amount of communication between schedule processor 20 and the components of the system. This reduction in communication allows a schedule processor more time to evaluate each scheduling decision or to scale to a larger number of components. Schedule processing need not be concentrated in a single component: it can be divided and distributed among other components in the system allowing further scaling.
In the foregoing specification, the invention has been described with reference to a specific exemplary embodiment and alternative embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative, rather than a restrictive, sense.

Claims (25)

1. A method for scheduling communication between a plurality of components in an integrated circuit (IC) coupled to at least one communication medium and at least one scheduling processor comprising the steps of:
initiating a transfer by said at least one scheduling processor sending a transfer command to a first IC component;
transferring data from said first IC component to a second IC component over said at least one communication medium;
said second IC component notifying a third IC component upon completion of said transferring data step;
wherein said transfer command to said first IC component identifies said second and said third IC components.
2. The method of claim 1 further comprising the steps of:
initiating another transfer by said at least one scheduling processor sending a transfer command to a fourth IC component;
transferring data from said fourth IC component to a fifth IC component;
said fifth IC component notifying a sixth IC component upon completion of said transferring data step;
wherein said transfer command to said fourth IC component identifies said fifth and said sixth IC components.
3. The method of claim 2 wherein said components include a microprocessor and said method further comprises the step of:
said microprocessor executing program code.
4. The method of claim 3 further comprising the steps of:
said at least one scheduling processor deciding an order to perform said transfers; and
creating a chained sequence of said transfers.
5. The method of claim 2 further comprising the steps of:
said at least one scheduling processor deciding an order to perform said transfers; and
creating a chained sequence of said transfers.
6. The method of claim 1 wherein:
said transfer command is communicated over a first medium; and
said transferring step is performed over a second medium.
7. A method of controlling system operation between a plurality of components in an integrated circuit (IC) coupled to at least one communication medium and at least one scheduler comprising the steps of:
said scheduler sending a first command to a first IC component to transfer data over said at least one communication medium;
said at least one scheduler sending a second command to a second IC component to transfer data over said at least one communication medium;
notifying said second IC component upon completion of said first command;
initiating execution of said second command upon completion of said notifying step.
8. The method of claim 7 wherein said sending a first command step and said sending a second command step can occur in any order.
9. The method of claim 8 wherein said method further comprises the step of:
said at least one scheduler deciding an order to send said first command and said second command and creating a chained sequence of transfers.
10. The method of claim 9 wherein said at least one scheduler includes a microprocessor and said method further comprises the step of:
said microprocessor executing a program.
11. The method of claim 9 wherein:
said step of sending a first command is communicated over a first medium; and
said step of sending a second command is communicated over a second medium.
12. The method of claim 9 further comprising the step of:
transferring data from said first IC component over a first medium; and
wherein said step of sending a first command is communicated over a second medium.
13. The method of claim 7 wherein:
said step of sending a first command is communicated over a first medium; and
said step of sending a second command is communicated over a second medium.
14. The method of claim 7 further comprising the step of:
transferring data from said first IC component over a first medium; and
wherein said step of sending a first command is communicated over a second medium.
15. A method of controlling system operation between a plurality of components in an integrated circuit (IC) coupled to at least one communication medium and at least one scheduler comprising the steps of:
receiving a first command from said scheduler by a first IC component to transfer data over said at least one communication medium;
receiving a second command from said scheduler by a second IC component to transfer data over said at least one communication medium;
performing said first command;
notifying said second IC component upon completion of said performing step; and
initiating said second command upon completion of said notifying step.
16. The method of claim 15 wherein said receiving a first command, said receiving a second command, and said performing steps can occur in any order.
17. The method of claim 16 further comprising the steps of:
sending said first command by said at least one scheduler; and
sending said second command by said at least one scheduler.
18. The method of claim 17 wherein said at least one scheduler includes a microprocessor and said method further comprises the step of:
said microprocessor executing a program.
19. The method of claim 17 wherein:
said first command is communicated over a first medium; and
said step of performing said first command is performed over a second medium.
20. The method of claim 15 wherein:
said first command is communicated over a first medium; and
said step of performing said first command is performed over a second medium.
21. A method of controlling a system including a plurality of components in an integrated circuit (IC) coupled to at least one communication medium and at least one scheduler comprising the steps of:
said at least one scheduler receiving transfer requests from requesting IC components;
said at least one scheduler constructing a transfer command for each of said transfer requests;
said at least one scheduler sending said transfer commands to said requesting IC components;
wherein said transfer command further comprises:
(a) a destination address identifying a destination component; and
(b) a notification address identifying an acknowledge component.
22. The method of claim 21 wherein said at least one scheduler includes a microprocessor and said method further comprises the step of:
said microprocessor executing program code.
23. The method of claim 21 further comprising the steps of:
said at least one scheduling processor deciding an order to perform said transfers; and
creating a chained sequence of said transfers.
24. The method of claim 21 further comprising the step of:
transferring data from said requesting IC components over a first medium; and
wherein said step of sending said transfer commands is performed over a plurality of second medium.
25. The method of claim 21 further comprising the step of:
transferring data from said requesting IC components over a first medium; and
wherein said step of sending said transfer commands is performed over a plurality of second mediums.
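The control flow recited in claim 7 can be sketched as a small simulation: the scheduler sends both commands up front, and the second command initiates only after the completion notification from the first. The component and command names below are hypothetical, chosen only to mirror the claim language.

```python
# Minimal sketch of claim 7's control flow: two commands are sent to two
# IC components; the second command executes only after notification that
# the first command has completed.

events = []

class IcComponent:
    def __init__(self, name):
        self.name = name
        self.command = None
        self.notified = False

    def receive(self, command):
        """Accept a command from the scheduler (sending step)."""
        self.command = command
        events.append(("received", self.name, command))

    def perform(self, notify=None):
        """Perform the held command, then notify a waiting component."""
        events.append(("performed", self.name, self.command))
        if notify is not None:
            notify.notified = True                 # completion notification
            events.append(("notified", notify.name, None))

first, second = IcComponent("first"), IcComponent("second")

# The scheduler sends both commands; per claim 8 the order of the two
# sending steps is immaterial.
first.receive("transfer-1")
second.receive("transfer-2")

first.perform(notify=second)   # performing step, then notifying step
assert second.notified         # second command may now initiate
second.perform()

assert events[-1] == ("performed", "second", "transfer-2")
```

The notification acts as the synchronization point: no matter when the second component received its command, it holds that command until the first transfer's completion is signaled.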
US09/654,718 2000-09-05 2000-09-05 Method for coordinating information flow between components Expired - Lifetime US7062767B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/654,718 US7062767B1 (en) 2000-09-05 2000-09-05 Method for coordinating information flow between components

Publications (1)

Publication Number Publication Date
US7062767B1 true US7062767B1 (en) 2006-06-13

Family

ID=36576679

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/654,718 Expired - Lifetime US7062767B1 (en) 2000-09-05 2000-09-05 Method for coordinating information flow between components

Country Status (1)

Country Link
US (1) US7062767B1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4635189A (en) * 1984-03-01 1987-01-06 Measurex Corporation Real-time distributed data-base management system
US4980824A (en) * 1986-10-29 1990-12-25 United Technologies Corporation Event driven executive
US5317734A (en) * 1989-08-29 1994-05-31 North American Philips Corporation Method of synchronizing parallel processors employing channels and compiling method minimizing cross-processor data dependencies
US5408658A (en) * 1991-07-15 1995-04-18 International Business Machines Corporation Self-scheduling parallel computer system and method
US5644749A (en) * 1993-05-10 1997-07-01 Matsushita Electric Industrial Co. Ltd. Parallel computer and processor element utilizing less memory
US5754781A (en) * 1995-03-22 1998-05-19 Nec Corporation Data transfer controller device for controlling data transferred by and among separate clusters
US5758051A (en) * 1996-07-30 1998-05-26 International Business Machines Corporation Method and apparatus for reordering memory operations in a processor
US5884060A (en) * 1991-05-15 1999-03-16 Ross Technology, Inc. Processor which performs dynamic instruction scheduling at time of execution within a single clock cycle
US5926474A (en) * 1994-07-25 1999-07-20 Microsoft Corporation Protection against multiple senders in a multipoint to point data funnel
US6195744B1 (en) * 1995-10-06 2001-02-27 Advanced Micro Devices, Inc. Unified multi-function operation scheduler for out-of-order execution in a superscaler processor
US6212623B1 (en) * 1998-08-24 2001-04-03 Advanced Micro Devices, Inc. Universal dependency vector/queue entry

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105204821A (en) * 2014-05-21 2015-12-30 卡雷公司 Inter-processor synchronization system
CN105204821B (en) * 2014-05-21 2019-12-10 卡雷公司 Inter-processor synchronization system
US10915488B2 (en) * 2014-05-21 2021-02-09 Kalray Inter-processor synchronization system
US20210382748A1 (en) * 2020-06-09 2021-12-09 Nxp Usa, Inc. Hardware-accelerated computing system

Similar Documents

Publication Publication Date Title
US10558595B2 (en) Sending data off-chip
JP6797880B2 (en) Synchronization in multi-tile, multi-chip processing configurations
US10949266B2 (en) Synchronization and exchange of data between processors
US11106510B2 (en) Synchronization with a host processor
KR20190044567A (en) Synchronization amongst processor tiles
US7822885B2 (en) Channel-less multithreaded DMA controller
EP2441013B1 (en) Shared resource multi-thread processor array
US10705999B1 (en) Exchange of data between processor modules
CN1342940A (en) Coprocessor with multiple logic interface
CN108279927B (en) Multi-channel instruction control method and system capable of adjusting instruction priority and controller
US7383336B2 (en) Distributed shared resource management
JP7389231B2 (en) synchronous network
CN112639738A (en) Data passing through gateway
WO2017084331A1 (en) Data processing apparatus and method for interconnection circuit
CN100354853C (en) Inter-chip processor control plane communication
Chen et al. ArSMART: An improved SMART NoC design supporting arbitrary-turn transmission
US7062767B1 (en) Method for coordinating information flow between components
CN112673351A (en) Streaming engine
US6708282B1 (en) Method and system for initiating computation upon unordered receipt of data
US11915041B1 (en) Method and system for sequencing artificial intelligence (AI) jobs for execution at AI accelerators
JP2001069161A (en) Method for scheduling decentralized service request using token for collision avoidance and data processor for actualizing same

Legal Events

Date Code Title Description
AS Assignment

Owner name: SANDCRAFT INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MCCARTHY, DOMINIC;CHOQUETTE, JACK;REEL/FRAME:011074/0147

Effective date: 20000901

AS Assignment

Owner name: RAZA MICROELECTRONICS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SANDCRAFT, INC.;REEL/FRAME:014624/0967

Effective date: 20030729

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: VENTURE LENDING & LEASING IV, INC., CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:RAZA MICROELECTRONICS, INC.;REEL/FRAME:019224/0254

Effective date: 20061226

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: RMI CORPORATION, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:RAZA MICROELECTRONICS, INC.;REEL/FRAME:020951/0633

Effective date: 20071217

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
AS Assignment

Owner name: NETLOGIC MICROSYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RMI CORPORATION;REEL/FRAME:023926/0338

Effective date: 20091229

AS Assignment

Owner name: NETLOGIC MICROSYSTEMS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:VENTURE LENDING & LEASING, INC;REEL/FRAME:026855/0108

Effective date: 20110902

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NETLOGIC I LLC;REEL/FRAME:035443/0763

Effective date: 20150327

Owner name: NETLOGIC I LLC, DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:NETLOGIC MICROSYSTEMS, INC.;REEL/FRAME:035443/0824

Effective date: 20130123

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047196/0097

Effective date: 20180509

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION DATE PREVIOUSLY RECORDED AT REEL: 047196 FRAME: 0097. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:048555/0510

Effective date: 20180905