DATA STRUCTURES FOR EFFICIENT PROCESSING OF IP FRAGMENTATION AND REASSEMBLY
CROSS-REFERENCE TO RELATED APPLICATION
The invention disclosed in this application is related in subject matter to co-pending U.S. patent application Ser. No. 09/839,079 (RAL9-2000-0120US) filed concurrently herewith by J. F. Logan et al. for "Data Structures for Efficient Processing of Multicast Transmissions" and assigned to a common assignee with this application. The disclosure of application Ser. No. 09/839,079 is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to communications on a network by a network processor and, more particularly, to a method of performing Internet Protocol (IP) fragmentation and reassembly in a network processor in a more efficient manner than current designs accomplish this process.
2. Background Description
In telecommunications scenarios it is sometimes necessary to break a data frame into smaller pieces prior to transmission. This is typically done in cases where a frame may be too large for a physical link (e.g., the Ethernet maximum transfer unit is 1.5 Kbytes, while token ring allows 17 Kbytes). For such a scenario, the frame must be divided into smaller frame segments in order to satisfy link requirements. In particular, Internet Protocol (IP) fragmentation involves splitting an IP frame into smaller pieces. A typical solution in a network processor involves copying the data to create the body of each fragment, creating a new header for the fragment, and updating the buffer linked list. This is done for each IP fragment to be generated. Copying the data comprising the body of each fragment can impose a significant burden on memory allocation requirements. High performance network processors generally cannot afford to allocate the additional memory bandwidth required in this approach.
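For illustration only, the following C sketch (not part of the original disclosure; the header length, helper name, and parameters are assumptions) shows the conventional copy-based approach described above, in which each fragment requires a fresh header plus a copy of its share of the original payload:

    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    #define IP_HDR_LEN 20u  /* basic IPv4 header, assumed for this sketch */

    /* Build one fragment by allocating fresh memory and copying payload. */
    static uint8_t *build_fragment_by_copy(const uint8_t *orig_hdr,
                                           const uint8_t *payload,
                                           size_t frag_offset,
                                           size_t frag_len)
    {
        uint8_t *frag = malloc(IP_HDR_LEN + frag_len);
        if (frag == NULL)
            return NULL;
        /* New header for this fragment (offset, length, and checksum
         * fields would be patched before transmission). */
        memcpy(frag, orig_hdr, IP_HDR_LEN);
        /* Per-fragment copy of the payload -- the memory-bandwidth cost
         * noted above, repeated for every fragment generated. */
        memcpy(frag + IP_HDR_LEN, payload + frag_offset, frag_len);
        return frag;
    }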
In a high performance network processor, one must develop a novel solution in order to minimize memory requirements for IP fragmentation (and IP reassembly).
SUMMARY OF THE INVENTION
It is therefore an object of the present invention to provide data structures, a method, and an associated system for IP fragmentation and reassembly on network processors in order to minimize memory allocation requirements.
According to the invention, the new approach eliminates the need to copy the entire frame for each multicast instance (i.e., each multicast target), thereby both reducing memory requirements and solving problems due to port performance discrepancies. In addition, the invention provides a means of returning leased buffers to the free queue as they are used (independent of when other instances complete transmission) and uses a counter to determine when all instances are transmitted so that a reference frame can likewise be returned to the free queue.
According to the invention, the new approach eliminates the need to copy the entire frame, adjust byte counts, update the memory linked list, and update headers for each fragment by utilizing the frame/buffer linking structures within the network processor architecture. The invention requires only the leasing and chaining of buffers for the fragment header information.
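The following simplified C sketch illustrates the copy-free idea summarized above: a small buffer is leased for the fragment header and chained, via the buffer linked list, to the payload buffers already holding the frame data. The structure and function names are assumptions for illustration and are not the literal control blocks defined later in this description:

    /* Buffer as chained in the data store (64-byte granularity, per the
     * description below); fields are simplified for this sketch. */
    struct buf {
        struct buf    *next;      /* next buffer in the frame's chain     */
        unsigned char  data[64];  /* 64-byte buffer contents              */
    };

    /* Create a fragment without copying payload: the leased header
     * buffer's link simply references the first payload buffer of the
     * fragment within the original frame's chain. */
    static struct buf *make_fragment(struct buf *leased_hdr_buf,
                                     struct buf *first_payload_buf)
    {
        leased_hdr_buf->next = first_payload_buf;  /* chain header to existing data  */
        return leased_hdr_buf;                     /* fragment = header + shared payload */
    }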
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of a preferred embodiment of the invention with reference to the drawings, in which:
FIG. 1 is a block diagram illustrating the data structures;
FIG. 2 is a block diagram showing the chip set system environment of the invention;
FIG. 3 is a block diagram showing in more detail the embedded processor complex and the dataflow chips used in the chip set of FIG. 2;
FIG. 4 is a diagram showing the general message format;
FIG. 5 is a block diagram illustrating the data structures according to the invention;
FIG. 6 is a flow diagram showing the IP fragmentation process; and
FIG. 7 is a flow diagram showing the IP reassembly process.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
Referring now to the drawings, and more particularly to FIG. 1, there are shown the data structures. A frame is stored in a series of buffers 101₁ to 101₅. Each buffer 101 has a corresponding Buffer Control Block (BCB) 102₁ to 102₅, which is used to link the series of buffers into a frame. Each frame has a corresponding Frame Control Block (FCB) 103₁ to 103ₙ, which is used to link a series of frames into a queue. Each queue has a Queue Control Block (QCB) 104, which maintains the address of the first and last FCB 103 in the queue, and a count of the number of frames in the queue.
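The control-block hierarchy of FIG. 1 can be rendered in C roughly as follows; this is an illustrative sketch only, with field names and widths assumed rather than taken from the specification (the actual blocks are implemented in hardware and described below):

    #include <stdint.h>

    struct bcb {                 /* Buffer Control Block: one per 64-byte buffer */
        uint32_t next_buf_addr;  /* NBA: address of the frame's next buffer/BCB  */
        uint8_t  start_byte;     /* SBP: offset of first valid byte in next buf  */
    };

    struct fcb {                 /* Frame Control Block: one per frame           */
        uint32_t next_fcb_addr;  /* links frames together into a queue           */
        uint32_t first_buf_addr; /* address of the frame's first buffer          */
    };

    struct qcb {                 /* Queue Control Block: one per queue           */
        uint32_t head_fcb_addr;  /* first FCB in the queue                       */
        uint32_t tail_fcb_addr;  /* last FCB in the queue                        */
        uint32_t frame_count;    /* number of frames currently in the queue      */
    };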
Data Structure Definitions
Buffers 101 are used for storage of data. Each buffer 101 is 64 bytes in size and may store from 1 to 64 bytes of valid data. All valid data within a buffer 101 must be stored as a single contiguous range of bytes. Multiple buffers are chained together via a linked list to store frames larger than 64 bytes.
Initially, all buffers 101 are placed in the free buffer queue. When a frame arrives, buffers are popped from the head of the free buffer queue and used to store the frame data. When the final transmission of a frame is performed, the buffers used to store the frame data are pushed onto the tail of the free buffer queue.
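A minimal software sketch of this free-buffer-queue behaviour is shown below (assumed model for illustration; the network processor performs these operations in hardware). Buffers are leased from the head of the free queue on frame arrival and returned to the tail after the frame's final transmission:

    #include <stddef.h>

    struct buf {
        struct buf    *next;
        unsigned char  data[64];
    };

    struct free_queue {
        struct buf *head;
        struct buf *tail;
    };

    /* Pop a buffer from the head of the free buffer queue. */
    static struct buf *lease_buffer(struct free_queue *q)
    {
        struct buf *b = q->head;
        if (b != NULL) {
            q->head = b->next;
            if (q->head == NULL)
                q->tail = NULL;
            b->next = NULL;
        }
        return b;
    }

    /* Push a buffer onto the tail of the free buffer queue. */
    static void return_buffer(struct free_queue *q, struct buf *b)
    {
        b->next = NULL;
        if (q->tail != NULL)
            q->tail->next = b;
        else
            q->head = b;
        q->tail = b;
    }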
A Buffer Control Block (BCB) 102 forms the linked list for chaining multiple buffers into a frame. It also records which bytes of the corresponding buffer 101 contain valid data. For every buffer 101 there is a corresponding BCB 102. The address of a buffer 101 in Datastore Memory (205 and 206 in FIG. 2) also serves as the address of the corresponding BCB 102 in the BCB Array. A BCB 102 contains the following fields:
The Next Buffer Address (NBA) field is used to store the pointer to the next buffer 101 in a frame. The NBA field in the BCB 102 for the current buffer 101 contains the address of the frame's next buffer 101 (and corresponding BCB 102).
The Starting Byte Position (SBP) field is used to store the offset of the first valid byte of data in the next buffer 101 of a frame. Valid values are from 0 to 63.
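Because a buffer's address in Datastore Memory also serves as the address of its BCB, a frame's buffer chain can be walked by indexing the BCB Array with successive NBA values. The following C sketch illustrates this; the end-of-chain convention (address 0) and the helper function are assumptions for illustration only:

    #include <stdint.h>
    #include <stddef.h>

    struct bcb {
        uint32_t nba;  /* Next Buffer Address (0 = end of chain in this sketch)   */
        uint8_t  sbp;  /* Starting Byte Position of valid data in the next buffer */
    };

    /* Count the buffers in a frame, starting from the first buffer's
     * address and following NBA links through the BCB Array. */
    static size_t count_frame_buffers(const struct bcb bcb_array[], uint32_t first)
    {
        size_t n = 0;
        for (uint32_t a = first; a != 0; a = bcb_array[a].nba)
            n++;
        return n;
    }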