CN103200131A - Data receiving and transmitting device - Google Patents


Info

Publication number
CN103200131A
CN103200131A (application number CN2013101147691A)
Authority
CN
China
Prior art keywords
frame
fifo buffer
fifo
processing unit
sends
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101147691A
Other languages
Chinese (zh)
Other versions
CN103200131B (en)
Inventor
乌力吉
牛赟
张向民
麦宋平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201310114769.1A priority Critical patent/CN103200131B/en
Publication of CN103200131A publication Critical patent/CN103200131A/en
Priority to HK13110485.6A priority patent/HK1183185A1/en
Application granted granted Critical
Publication of CN103200131B publication Critical patent/CN103200131B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Communication Control (AREA)
  • Small-Scale Networks (AREA)

Abstract

The invention discloses a data receiving and transmitting device comprising a shared cache module, a distributed cache module, a data frame scheduler and a processing unit. The shared cache module comprises an asynchronous first-in-first-out (FIFO) cache; the distributed cache module comprises at least two FIFO caches. When a FIFO cache becomes free, it sends an interrupt signal to the shared cache module, which then sends a data frame to the corresponding FIFO cache. When the processing unit becomes free, the data frame scheduler sends it a data frame from a FIFO cache that holds one.

Description

Data transceiver device
Technical field
The present invention relates to data transceiver devices, and in particular to a device for receiving and transmitting data frames in 10 Gbps high-speed Ethernet.
Background technology
In current high-performance network equipment architectures, Ethernet data frames are commonly received through FIFO (First In First Out) queues. With this scheme, only the data frame at the head of each queue can be processed. At modest network speeds the gap between the frame arrival rate and the internal processing rate is small, so the FIFO scheme meets system performance requirements. But once the network rate reaches 10 Gbps or higher, if the frame at the head of a queue is not processed promptly, none of the frames behind it in the queue can be forwarded or processed, causing head-of-line (HOL) blocking. HOL blocking limits achievable system throughput to about 58.6% and can even cause packet loss, degrading network quality of service (QoS).
High-speed core router designs usually adopt the virtual output queue (VOQ) technique: data frames destined for different output ports are buffered in different queues, so frames bound for different ports cannot block one another, and with a suitable scheduling algorithm the VOQ scheme can deliver 100% throughput. However, this scheme suits receivers that only need to route the frames in the input queues, without any further processing of the data. In network security processing, frames must not only be routed but often also encrypted, decrypted and authenticated, which consumes a large amount of processing time, so the VOQ scheme is likewise unsuitable for high-speed, and in particular 10 Gbps, network security applications.
In addition, the unpredictable length of the data frames received from Ethernet severely reduces the efficiency of high-speed reception. When network traffic reaches 10 Gbps or more, the mismatch between the frame reception rate and the device's internal data processing rate becomes a key factor limiting system performance.
Chinese patent No. 03127969.4, "Wire-speed packet processing method based on FIFO queues and device therefor", uses a two-stage FIFO to process packets of arbitrary length at wire speed. In a high-speed Ethernet, however, if the downstream stage cannot process frames as fast as they arrive, head-of-line blocking still occurs, so that device cannot satisfy the requirements of high-speed data reception.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention provides a data transceiver device that reduces data frame loss.
A data transceiver device comprises a shared cache module, a distributed cache module, a data frame scheduler and a processing unit; the shared cache module comprises an asynchronous FIFO buffer, and the distributed cache module comprises at least two FIFO buffers.
When a FIFO buffer becomes idle, the idle FIFO buffer sends an interrupt signal to the shared cache module, and the shared cache module sends a data frame to that FIFO buffer.
When the processing unit becomes idle, the data frame scheduler sends it a data frame from a FIFO buffer that holds one.
When the shared cache module simultaneously receives interrupt signals from several idle FIFO buffers, and the shared cache module last sent a data frame to the i-th FIFO buffer, it sends data frames to the respective idle FIFO buffers in order from the (i+1)-th FIFO buffer up to the highest-numbered FIFO buffer, and then from the first FIFO buffer around to the i-th.
The processing unit may comprise at least two processing units of different priorities.
When several processing units become idle, each idle processing unit sends, through the data frame scheduler, a request signal to every FIFO buffer in the distributed cache module that holds a data frame;
Each such FIFO buffer selects, through the data frame scheduler, one of the requesting processing units and sends it a grant signal;
On receiving the grant signal, the processing unit selects the highest-priority FIFO buffer that holds a data frame, sends it an accept signal through the data frame scheduler, and then begins receiving the data frame from that FIFO buffer.
The capacity of the shared cache module equals the capacity of the distributed cache module.
The FIFO buffers of the distributed cache module all have equal capacity.
The internal processing module and the distributed cache module of the present invention use a handshake mechanism: only after the processing unit has finished reading in a data frame can the distributed cache module receive the next one, and each FIFO in the distributed cache has the capacity of a maximum-length Ethernet frame. The asynchronous FIFO in the shared cache module is sized for the worst case, in which all processing units are busy: its capacity is n times that of a single distributed-cache FIFO (with all FIFOs of equal capacity, n being the number of FIFO buffers). The value of n can be configured to the needs of different network equipment, so that the device suits network environments of different speeds.
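As a rough illustration of this sizing rule, the following Python sketch computes the worst-case capacity of the shared asynchronous FIFO from the number of distributed FIFOs. The 1518-byte maximum frame length is an assumption (the standard non-jumbo Ethernet maximum), not a figure taken from the text, and the function name is illustrative.

```python
# Assumed maximum Ethernet frame length in bytes (standard, non-jumbo).
MAX_ETH_FRAME = 1518

def shared_fifo_capacity(n):
    """Worst-case sizing: all n processing paths are busy, so the shared
    (first-level) asynchronous FIFO must hold n maximum-length frames,
    i.e. n times the capacity of one distributed-cache FIFO."""
    per_distributed_fifo = MAX_ETH_FRAME  # one max-length frame per FIFO
    return n * per_distributed_fifo

# e.g. a device configured with n = 8 distributed FIFO buffers
print(shared_fifo_capacity(8))  # 12144 bytes
```

Configuring n larger trades buffer area for tolerance of longer processing stalls, which is the trade-off the text describes for different network speeds.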
The module interface signals of the present invention are frame-oriented: control signals mark the frame header, the frame tail and valid data, and interrupt signals control the data frames entering and leaving the FIFO buffers, achieving high-speed reception of variable-length Ethernet frames.
The data frame scheduler of the present invention can be implemented with different scheduling algorithms according to the performance requirements of the network equipment, which increases the flexibility of the design.
Description of drawings
Fig. 1 is a schematic diagram of an Ethernet security device according to one embodiment;
Fig. 2 is a schematic diagram of a data transceiver device according to one embodiment;
Fig. 3 is a workflow diagram of the data transceiver device of one embodiment;
Fig. 4 is a handshake schematic diagram of the data transceiver device of one embodiment;
Fig. 5 is a schematic diagram of the scheduling process of the data frame scheduler of one embodiment.
Embodiment
Specific embodiments of the invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the Ethernet security device of one embodiment comprises an algorithm controller, an SAD SRAM, an SPD lookup module, a 32-bit CPU, an off-chip SRAM controller, a high-performance scalable security protocol processing module, and the data transceiver device of an embodiment of the invention; these modules all hang on a system bus, over which they communicate.
As shown in Fig. 2, the data transceiver device of one embodiment comprises a high-speed Ethernet interface module, a shared cache module, a distributed cache module, a data frame scheduler and a processing unit. The high-speed Ethernet interface module receives data from the Ethernet and packs it into data frames; the shared cache module uses one asynchronous FIFO buffer to provide first-level buffering of the frames; the distributed cache module comprises at least two FIFO buffers and provides second-level buffering. Between the distributed cache module and the internal processing module, the data frame scheduler applies a suitable scheduling algorithm to schedule the data frames, maximizing the utilization of processing-unit resources and reducing loss of received frames. A FIFO buffer may be asynchronous or synchronous, and there may be one processing unit or several.
When the data transceiver device of this embodiment starts working, its workflow, shown in Figs. 3-5, is as follows:
1) The whole device is initialized; the shared cache module, the distributed cache module, the data frame scheduler, the processing unit and so on are all idle.
2) The high-speed Ethernet interface module receives data, packs it into a data frame, and notifies the shared cache module.
3) The shared cache module receives the data frame sent by the high-speed Ethernet interface module.
4) During initialization the shared cache module distributes data frames to the FIFO buffers in turn; when a FIFO buffer stores a frame, its status signal is driven high to indicate that the FIFO holds a frame.
5) Afterwards, the shared cache module uses the interrupt signals of the distributed cache module to determine whether any FIFO buffer is idle; an idle FIFO buffer sends an interrupt signal to the shared cache module. Once the processing unit has read the data frame out of a FIFO buffer, that buffer becomes idle and sends an interrupt signal to the shared cache module.
6) The shared cache module distributes data frames to idle FIFO buffers according to the interrupt signals, for instance with the following balanced allocation algorithm:
If there is currently only one interrupt request signal from the distributed cache module, i.e. only one FIFO buffer is idle, the shared cache module serves that request and sends a data frame to the corresponding FIFO buffer.
If there is currently no interrupt request signal, i.e. no FIFO buffer is idle, the shared cache module waits until an interrupt request appears.
If several interrupt request signals arrive simultaneously, i.e. several FIFO buffers become idle at the same time, and the shared cache module last assigned a data frame to the i-th FIFO buffer, it sends data frames to the idle FIFO buffers in order from the (i+1)-th FIFO buffer up to the highest-numbered one, and then from the first FIFO buffer around to the i-th. For example, if the 5th, 7th, 9th and 3rd FIFO buffers send interrupt signals to the shared cache module at the same time, and the previous data frame was assigned to the 4th FIFO buffer, the shared cache module serves the distributed cache module in the order: 5th, 7th, 9th, 3rd.
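This round-robin allocation order can be modelled with a few lines of Python (a sketch for illustration only; the function name and the choice of n = 9 buffers are assumptions made to reproduce the worked example):

```python
def allocation_order(i, idle, n):
    """Order in which the shared cache module serves simultaneous
    interrupt requests: scan FIFOs (i+1)..n, then 1..i, keeping only
    the idle ones.
    i:    1-based index of the FIFO that received the previous frame.
    idle: set of 1-based indices of FIFOs currently raising interrupts.
    n:    total number of FIFO buffers in the distributed cache module."""
    ring = list(range(i + 1, n + 1)) + list(range(1, i + 1))
    return [f for f in ring if f in idle]

# Worked example from the text: the previous frame went to FIFO 4, and
# FIFOs 5, 7, 9 and 3 raise interrupts at the same time (assuming n = 9).
assert allocation_order(4, {5, 7, 9, 3}, 9) == [5, 7, 9, 3]
```

Because the scan always starts just past the last-served buffer, no FIFO can be starved even under continuous simultaneous requests, which is the "balanced" property the algorithm's name refers to.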
When a FIFO buffer in the distributed cache module holds a data frame, its status signal is driven high to indicate that the buffer holds a frame.
If there is only one processing unit: when it becomes idle, it sends a request signal to the FIFO buffers that hold data frames, the data frame scheduler selects one of those FIFO buffers, and the processing unit reads the data frame from it.
If the processing unit comprises at least two processing units of different priorities, the data frame scheduler can use the iterative SLIP scheduling algorithm (iSLIP) to schedule data frames efficiently and fairly, as follows:
When several processing units become idle, each idle processing unit sends, through the data frame scheduler, a request signal to every FIFO buffer that holds a data frame;
Each FIFO buffer that holds a data frame and receives requests selects, through the data frame scheduler, the requesting processing unit of highest current priority and sends it a grant signal; the other requesting processing units receive no response;
On receiving a grant signal, a processing unit selects the granting FIFO buffer of highest current priority, sends it an accept signal, and then begins reading the data frame from that FIFO buffer;
A FIFO buffer that receives no accept signal continues, through the data frame scheduler, to send grant signals to the processing unit of highest current priority until one of the processing units accepts it.
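The request-grant-accept round above follows the iSLIP pattern. The following Python sketch models a single iteration; the data structures, names and pointer-update details are illustrative assumptions, with round-robin pointers standing in for the "current priority" of FIFOs and units:

```python
def islip_iteration(requests, grant_ptr, accept_ptr, n_units, n_fifos):
    """One request-grant-accept round matching non-empty FIFO buffers
    to idle processing units.
    requests: dict fifo -> set of idle units requesting it.
    grant_ptr[fifo], accept_ptr[unit]: round-robin priority pointers."""
    # Grant phase: each requested FIFO grants the requesting unit that
    # is closest to (and including) its grant pointer.
    grants = {}  # unit -> set of FIFOs granting it
    for fifo, units in requests.items():
        for k in range(n_units):
            u = (grant_ptr[fifo] + k) % n_units
            if u in units:
                grants.setdefault(u, set()).add(fifo)
                break
    # Accept phase: each granted unit accepts the granting FIFO closest
    # to its accept pointer; pointers advance only past an accepted match.
    matches = {}  # unit -> fifo
    for unit, fifos in grants.items():
        for k in range(n_fifos):
            f = (accept_ptr[unit] + k) % n_fifos
            if f in fifos:
                matches[unit] = f
                grant_ptr[f] = (unit + 1) % n_units
                accept_ptr[unit] = (f + 1) % n_fifos
                break
    return matches

# Two idle units both request two non-empty FIFOs; the first round
# matches unit 0 with FIFO 0, and a second iteration over the still
# unmatched pairs would match unit 1 with FIFO 1.
gp, ap = [0, 0], [0, 0]
print(islip_iteration({0: {0, 1}, 1: {0, 1}}, gp, ap, 2, 2))  # {0: 0}
```

Running further iterations over the unmatched FIFOs and units, as the text describes for buffers that receive no accept signal, converges to a maximal match.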
Fig. 4 shows the inter-module handshake of the data transceiver device of one embodiment. When the pkt_rx_avail signal is high, the high-speed Ethernet interface module holds a data frame. The shared cache module first checks the Interrupt signals of the distributed cache to determine whether the distributed cache module contains an idle FIFO buffer; if so, it drives the pkt_rx_ren signal high to indicate that it can receive the frame. Based on the state of the Interrupt signals, the shared cache module selects a FIFO buffer in the distributed cache module using the round-robin allocation algorithm and drives the Wren signal of that FIFO high. The selected FIFO reads in the data frame; when the Valid_out signal goes low, Wren is driven low, indicating that a complete frame has been transferred.
When a FIFO buffer in the distributed cache module is idle, its Interrupt signal is driven high to indicate that it can receive a data frame; when Wren is asserted (high), the buffer starts receiving the frame and Interrupt is driven low. When a FIFO buffer has read in a complete frame, its Rden signal is driven high; the internal processing module sends a request to the distributed cache module, selects a FIFO buffer through the data frame scheduler, and reads the data frame from the selected buffer according to the Rden signal. When the internal processing module finishes reading the frame (wb_eop_o = 1), the corresponding FIFO buffer in the distributed cache module drives Interrupt high again, indicating that it is idle.
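The Interrupt/write/read handshake of Fig. 4 can be caricatured as a tiny Python state model. This is purely illustrative: the signal names follow the figure, but the class and its methods are assumptions, and real RTL would of course be cycle-accurate rather than transactional.

```python
class DistributedFifoModel:
    """Toy model of one distributed-cache FIFO's handshake state.
    interrupt high = idle, ready to receive a frame (per Fig. 4)."""

    def __init__(self):
        self.interrupt = True   # idle after reset
        self.has_frame = False

    def write_frame(self):
        """Shared cache asserts Wren and streams in one frame."""
        assert self.interrupt, "shared cache may only write an idle FIFO"
        self.interrupt = False  # Interrupt drops while the frame is held
        self.has_frame = True   # Rden may now be asserted

    def read_frame(self):
        """Processing module drains the frame (wb_eop_o = 1 at the end)."""
        assert self.has_frame, "nothing to read"
        self.has_frame = False
        self.interrupt = True   # back to idle: the next frame may arrive

f = DistributedFifoModel()
f.write_frame()     # frame enters; Interrupt low, frame held
f.read_frame()      # frame fully read; Interrupt high again
print(f.interrupt)  # True
```

The two assertions encode the handshake invariant stated above: a frame may only be written into an idle FIFO, and a new frame may only arrive after the processing unit has finished reading the previous one.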

Claims (5)

1. A data transceiver device, characterized by comprising a shared cache module, a distributed cache module, a data frame scheduler and a processing unit, wherein the shared cache module comprises an asynchronous FIFO buffer and the distributed cache module comprises at least two FIFO buffers;
when a FIFO buffer becomes idle, the idle FIFO buffer sends an interrupt signal to the shared cache module, and the shared cache module sends a data frame to the corresponding FIFO buffer;
when the processing unit becomes idle, the data frame scheduler sends it a data frame from a FIFO buffer that holds one.
2. The data transceiver device of claim 1, characterized in that: when the shared cache module simultaneously receives interrupt signals from several idle FIFO buffers, and the shared cache module last sent a data frame to the i-th FIFO buffer, it sends data frames to the respective idle FIFO buffers in order from the (i+1)-th FIFO buffer up to the highest-numbered FIFO buffer, and then from the first FIFO buffer around to the i-th.
3. The data transceiver device of claim 1, characterized in that: the processing unit comprises at least two processing units of different priorities;
when several processing units become idle, each idle processing unit sends, through the data frame scheduler, a request signal to every FIFO buffer in the distributed cache module that holds a data frame;
each such FIFO buffer selects, through the data frame scheduler, one of the requesting processing units and sends it a grant signal;
on receiving the grant signal, the processing unit selects the highest-priority FIFO buffer that holds a data frame, sends it an accept signal through the data frame scheduler, and then begins receiving the data frame from that FIFO buffer.
4. The data transceiver device of claim 1, characterized in that: the capacity of the shared cache module equals the capacity of the distributed cache module.
5. The data transceiver device of claim 4, characterized in that: the FIFO buffers of the distributed cache module all have equal capacity.
CN201310114769.1A 2013-04-03 2013-04-03 Data transceiver device Expired - Fee Related CN103200131B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310114769.1A CN103200131B (en) 2013-04-03 2013-04-03 Data transceiver device
HK13110485.6A HK1183185A1 (en) 2013-04-03 2013-09-11 A data transceiver device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310114769.1A CN103200131B (en) 2013-04-03 2013-04-03 Data transceiver device

Publications (2)

Publication Number Publication Date
CN103200131A true CN103200131A (en) 2013-07-10
CN103200131B CN103200131B (en) 2015-08-19

Family

ID=48722494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310114769.1A Expired - Fee Related CN103200131B (en) 2013-04-03 2013-04-03 Data transceiver device

Country Status (2)

Country Link
CN (1) CN103200131B (en)
HK (1) HK1183185A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1117318A * 1993-11-01 1996-02-21 Ericsson GE Mobile Communications Inc. Multiprocessor data memory sharing
CN1658611A (en) * 2005-03-22 2005-08-24 中国科学院计算技术研究所 Method for guarantee service quality of radio local network
US20070104210A1 (en) * 2005-11-10 2007-05-10 Broadcom Corporation Scheduling of data transmission with minimum and maximum shaping of flows in a network device
CN101001210A (en) * 2006-12-21 2007-07-18 华为技术有限公司 Implementing device, method and network equipment and chip of output queue
CN103023806A (en) * 2012-12-18 2013-04-03 武汉烽火网络有限责任公司 Control method and control device of cache resource of shared cache type Ethernet switch

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Li et al., "Design of an IP-Core for 10 Gigabit Ethernet security processor", Proceedings of the 10th IEEE International Conference on Solid-State and Integrated Circuit Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107171800A (en) * 2017-03-30 2017-09-15 山东超越数控电子有限公司 A kind of scheduling system of multichannel cryptographic algorithm
CN107171800B (en) * 2017-03-30 2020-07-17 山东超越数控电子股份有限公司 Scheduling system of multi-channel cryptographic algorithm
CN109861921A (en) * 2019-01-21 2019-06-07 西安微电子技术研究所 A kind of adaptive dynamic flow control system and method towards Ethernet
CN109861921B (en) * 2019-01-21 2022-08-02 西安微电子技术研究所 Self-adaptive dynamic flow control method facing Ethernet
CN110888831A (en) * 2019-12-10 2020-03-17 北京智联安科技有限公司 Multi-power-domain asynchronous communication device
CN110888831B (en) * 2019-12-10 2023-07-21 北京智联安科技有限公司 Multi-power domain asynchronous communication device
CN111010352A (en) * 2019-12-31 2020-04-14 厦门金龙联合汽车工业有限公司 Automobile CAN message sending method
CN111010352B (en) * 2019-12-31 2023-04-07 厦门金龙联合汽车工业有限公司 Automobile CAN message sending method

Also Published As

Publication number Publication date
CN103200131B (en) 2015-08-19
HK1183185A1 (en) 2013-12-13

Similar Documents

Publication Publication Date Title
US6754222B1 (en) Packet switching apparatus and method in data network
CN101536413B (en) Queue aware flow control
CN109120544B (en) Transmission control method based on host end flow scheduling in data center network
EP3573297B1 (en) Packet processing method and apparatus
US7251219B2 (en) Method and apparatus to communicate flow control information in a duplex network processor system
EP3229425B1 (en) Packet forwarding method and device
US8576850B2 (en) Band control apparatus, band control method, and storage medium
US20140163810A1 (en) Method for data transmission among ECUs and/or measuring devices
JP2009022038A5 (en)
CN103748845B (en) Packet sending and receiving method, device and system
CN103200131B (en) 2015-08-19 Data transceiver device
CN110290074B (en) Design method of Crossbar exchange unit for FPGA (field programmable Gate array) inter-chip interconnection
CN116018790A (en) Receiver-based precise congestion control
CN101867511A (en) Pause frame sending method, associated equipment and system
CN103338157A (en) Internuclear data message caching method and equipment of multinuclear system
WO2011120467A2 (en) Message order-preserving processing method, order-preserving coprocessor and network equipment
US7885280B2 (en) Packet relaying apparatus and packet relaying method
WO2012116655A1 (en) Exchange unit chip, router and method for sending cell information
CN101494579A (en) Bus scheduling device and method
WO2016008399A1 (en) Flow control
US6356548B1 (en) Pooled receive and transmit queues to access a shared bus in a multi-port switch asic
CN103617132B (en) A kind of ethernet terminal based on shared storage sends implementation method and terminal installation
WO2012119414A1 (en) Method and device for controlling traffic of switching network
US9019832B2 (en) Network switching system and method for processing packet switching in network switching system
CN108614792B (en) 1394 transaction layer data packet storage management method and circuit

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1183185

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1183185

Country of ref document: HK

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150819

Termination date: 20190403

CF01 Termination of patent right due to non-payment of annual fee