CA1221463A - Real-time data processing system - Google Patents

Real-time data processing system

Info

Publication number
CA1221463A
CA1221463A
Authority
CA
Canada
Prior art keywords
data
node
nodes
address
data link
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000476086A
Other languages
French (fr)
Inventor
James C. Dann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Microsystems Inc
Original Assignee
Thales Training and Simulation Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB848406322A external-priority patent/GB8406322D0/en
Priority claimed from GB848420617A external-priority patent/GB8420617D0/en
Application filed by Thales Training and Simulation Ltd filed Critical Thales Training and Simulation Ltd
Application granted granted Critical
Publication of CA1221463A publication Critical patent/CA1221463A/en
Expired legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/161Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures

Abstract


A real time data processing system in which each of a series of processing nodes is provided with its own data store partitioned into a first section reserved for the storage of data local to the respective node and a second section reserved for the storage of data to be shared between nodes. The nodes are interconnected by a data link and whenever a node writes to an address in the second section of a data store, the written data is communicated to all of the nodes via the data link. The data in each address of the second sections of the data stores can be changed only by one respective processing node which acts as a master for that address. As each address containing shared data can only be written to by one node, collisions between different nodes attempting to change a common item of data cannot occur.

Description

REAL TIME DATA PROCESSING SYSTEM

The present invention relates to data processing systems including two or more data processing units each having access to the same data. Each data processing unit may be a substantially independent computer, or may interact with one or more of the other processing units. Data processing units of either type are referred to below as "nodes", and data to which two or more nodes have access is referred to below as "shared data".

In one known system used in, for example, flight simulators, shared data is held in a common data store accessible to two or more nodes. Each node may also have its own local store for holding data to which only that node has access. A problem with such a system is that the nodes must compete for access to the shared store and hence there may be conflict between two or more nodes each attempting simultaneously to access the same item of shared data in the common store. Moreover, there are significant transmission delays between the shared store and relatively distant nodes. As a result, access to the shared data may be very slow.

U.S. Patent No. 3,889,237 describes a two node system in which each node receives in its own local store a duplicate copy of the shared data. To ensure that both copies are kept consistent, each node has direct access to the local store of the other node so that it can write a new value of the shared data into both stores simultaneously. A problem with this prior proposal is that conflict arises between the nodes if both attempt to access the same item of shared data at the same time, and each node must wait for all writes to the shared data portions of the local stores to be completed before it can continue


processing. This seriously reduces the efficiency of the system. This makes it very difficult to extend this proposal to more than two nodes.

European Patent Specification No. 0 092 895 describes another system in which each node has its own local store in which shared data is stored. The nodes are interconnected by a data transmission link and whenever one node writes to an address containing shared data in its local store it also generates a message containing the write data and the address. The message is applied to the link and the other nodes use the write data to update the appropriate shared data address in their local stores. Each node continues processing after writing to a shared data address and does not wait for the write data message to reach the other nodes. The link is organised as a token ring, there being only one token so that only one message can be on the ring at any one time. Thus each node receives messages in the same sequence, thereby establishing a chronological order for the messages even though the individual nodes are operating asynchronously. However, if a first node receives a write data message from a second node while the first node still has an outstanding write data message to transmit, the received message may overwrite a data address which has already been written to by the first node. The data address would then be overwritten by a chronologically earlier value and the data stored in the shared data stores of the various nodes would not be consistent. To prevent this happening, the processor of the first node is suspended pending clearance of the outstanding message or messages. Suspension of the processors obviously slows down the system's operation and where there is a heavy traffic of messages this is a serious problem.

In real-time computing systems, such as those used for flight training simulators, the speed at which the system operates is of fundamental importance. It is known to provide real-time systems in which a series of nodes each performs a particular function but within a time-framework imposed by a system control computer. Examples of such systems are described in U.S. Patent Nos. 4414624 and 4351025. In U.S. 4414624, the operations of the nodes are scheduled by the control computer according to the processing required. At the beginning of each frame a time control word is transmitted to each node to establish the time available for processing. Each node has a local store for shared data and each node can globally write to any or all the local stores of the other nodes simultaneously. All data is first written to a common store and then the required data is read out to the local stores from the common store. Thus each update of an item of data in a local store requires both a write to the common store and a read to the local store step. This slows down the operating speed of the system. In U.S. 4351025, real-time operation of the nodes and the system control computer are interleaved without overlap, write data from the nodes being distributed during the operating time segment of the system control computer. This arrangement is relatively easy to implement but relatively slow in operation as the two parts of the system operate alternately, not continuously.

Thus, in the prior art systems, including real-time systems, a rigid operating protocol is established to maintain the coherence of the shared data in the separate local stores. This rigid protocol inevitably restricts the speed and flexibility of the systems.

It is an object of the present invention to obviate or mitigate the above problems.

According to the present invention, there is provided a real-time data processing system comprising at least two processing nodes, a data store in respect of each node, each data store being partitioned into sections a first one of which is reserved for the storage of data local to the respective node and a second one of which is reserved for the storage of data to be shared between nodes, a data link interconnecting the nodes, means at each node for generating a write message comprising an address and data to be written to that address whenever that node writes to an address in the second section of a data store, means for transmitting each generated message via the data link to each of the nodes, means for allocating to each address in the second sections of the data stores a respective node which is to be the master node for that address, and means for preventing data being written to any address in the second section of a data store other than by the allocated master node.
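
By way of illustration only, this arrangement can be modelled in software. The C sketch below is an assumption-laden model, not the patented hardware: the store size, the partition point and the names are invented, but it shows a store split into local and shared sections with writes to the shared section permitted only to the allocated master node.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of one node's data store: a local section below a
   predetermined partition point and a shared section above it, with a
   single master address range per node. All names and sizes invented. */

#define STORE_WORDS 0x40000u   /* whole data store of one node         */
#define SHARED_BASE 0x20000u   /* partition point: shared section base */

typedef struct {
    int      node_id;
    uint32_t master_lo, master_hi;  /* shared addresses this node owns */
    uint32_t store[STORE_WORDS];    /* local section + shared section  */
} node_t;

static bool is_shared(uint32_t addr) { return addr >= SHARED_BASE; }

static bool is_master(const node_t *n, uint32_t addr)
{
    return addr >= n->master_lo && addr < n->master_hi;
}

/* A write to the shared section is permitted only on the master node;
   if it is permitted, the caller would also generate a write message
   (address plus data) for transmission to every node on the data link. */
static bool node_write(node_t *n, uint32_t addr, uint32_t value)
{
    if (is_shared(addr) && !is_master(n, addr))
        return false;           /* prevented: not the allocated master */
    n->store[addr] = value;
    return true;
}
```
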
As each address of the data stores which contains data shared between the nodes can be written to by only one node processor it is not necessary to impose rigid controls on the priority allocated to write messages to ensure that the shared data does not become corrupt. This enables the speed of operation of the system to be enhanced, but in addition it enables a relatively large number of nodes to be run in parallel using standard processing units without complex operating procedures. Thus a wide range of different real-time system requirements can be met relatively easily. For example, the present invention has applications in flight and other simulators, process control systems, and fire control systems.
Preferably an address range comparator is provided in respect of each node, for comparing the address of a data write message generated by that node with a preset range of addresses and for transferring the data write message to the data link only if the compared address is within the preset range. Thus the address comparator effectively determines which addresses in the shared data can be written to by the respective nodes. A further address comparator can be provided in respect of each node for comparing the address of a data write message received from the data link with a preset range of addresses and for transferring the received data write message to the local data store only if the compared address is within the preset range of addresses. Thus the further address comparator determines the addresses within a local store to which data can be written from the data link.
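
These two comparators can be pictured, again purely as an illustration, as a pair of range filters, one on the outbound path to the data link and one on the inbound path from it. In the sketch below the ranges are placeholders; in the described system they are preset per node.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct { uint32_t lo, hi; } range_t;   /* preset address range */

static bool in_range(range_t r, uint32_t addr)
{
    return addr >= r.lo && addr < r.hi;
}

/* Outbound comparator: a locally generated write message reaches the
   data link only if its address lies in the node's preset (master)
   range, so this filter decides which shared addresses the node may
   actually change. */
static bool pass_to_link(range_t master_range, uint32_t addr)
{
    return in_range(master_range, addr);
}

/* Inbound (further) comparator: a message received from the data link
   reaches the local data store only if its address lies in the preset
   range of addresses writable from the link. */
static bool pass_to_store(range_t inbound_range, uint32_t addr)
{
    return in_range(inbound_range, addr);
}
```
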
The nodes can be connected in parallel by a single data link or in parallel by a plurality of data links. Furthermore more complex system structures can be provided. For example the nodes can be arranged in a plurality of groups with the nodes in each group being connected in parallel by a respective data link and at least one of the nodes belonging to two of the groups. In this arrangement the section of the data store receiving shared data at the node belonging to two groups is divided into a plurality of subsections each of which receives data to be shared with the nodes of a respective group. Software is provided to control the transfer of data from one subsection to the other when data is to be shared between two groups of nodes.
Preferably a further memory is provided connected to the data link to which input/output data can be written by the nodes. Addresses dedicated to this purpose would be provided in the local data stores of the nodes from which data can be read and to which data can be written via the data link.

Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:

Fig. 1 is a block schematic diagram of a known data processing system;
Fig. 2 is a block schematic diagram of an embodiment of the present invention;
Fig. 3 schematically illustrates the lines provided on a data bus shown in Fig. 2;
Fig. 4 is a block schematic diagram of read/write sense hardware shown in Fig. 2; and
Figs. 5, 6, 7, 8 and 9 schematically illustrate alternative structures for systems embodying the present invention.
Referring to Fig. 1, the illustrated known system is a commercially available system based on the GOULD Computer Systems Division 32/27 computer. A central processor unit (CPU) 1 resides on a 26.6 MB/Sec computer bus 2 known as a "SELBUS" which is the main fast communication bus. A data store in the form of an Integrated Memory Module (IMM) 3 provides 1 MB of memory and associated memory control logic. An IOP unit 4 is a controller which supports a system console 5 and is the master controller for a 1 MB/Sec Multi Purpose Bus (MPBUS) 6.

High speed devices such as Disc or Tape controllers 7 and high speed device interfaces (HSDI) 8 connect to the SELBUS 2. Low speed peripherals such as CRT terminal controllers 9 (8 line asynchronous operation), line printer/floppy disc controllers 10, etc. connect to the MPBUS 6. The terms "SELBUS", "IMM", "IOP", "MPBUS", "IPU" and "HSDI" are those used by the manufacturer to describe elements of the known GOULD 32/27 computer system and different nomenclature may be used by other manufacturers for equivalent components. The GOULD nomenclature is used herein simply for the sake of convenience.
A system according to the invention is illustrated in Fig. 2. The illustrated system comprises a series of processing units 11 each based on a GOULD 32/27 computer. Each processing unit 11 has its own SELBUS 12 and operates asynchronously of the others. The processing units 11 do not drive peripherals, but are connected to a further processing unit 13 which is provided with a full complement of support peripherals. Each of the processing units 11 handles processing relating to a particular aspect of the system; for example, in a flight simulator system one unit 11 would calculate flight parameters, e.g. altitude, one unit 11 would calculate engine parameters, e.g. thrust, another unit 11 would calculate autopilot parameters, and so on. Each processing unit 11, 13 and its associated equipment such as data stores constitutes a node of the system.

As the computing nodes incorporating processing units 11 do not drive peripherals the required input/output capacity of these nodes is limited, all slow input/output operations being executed by the front end processing unit 13. This maximises the available real time computing power of each computing node. A secondary RS 232 channel (multiple RS 232 lines) provides for initialisation and control functions, and also aids diagnostics if the system fails.
Each CPU 11 is augmented by a number of physically similar Parallel Processor Units (PPU) 14. Each PPU 14 is similar to the GOULD Internal Processor Unit (IPU) featured on the 32/67, 32/77 and 32/87 GOULD computers but extended in accordance with conventional techniques to allow for more than two 32/27 processors per SELBUS 12. An Unattended Operators Console (UOC) 15 is associated with each unit 11. The UOC 15 is essentially an IOP (Fig. 1) with extra logic to obviate the need for an MPBUS as normally provided when peripherals are to be driven.

The SELBUS 12 of each CPU 11, 13 is connected by a DPIMM 16 (a dual port IMM) and read/write sense logic 17 to a 26.6 MB/Sec data link 18 or reflected memory bus. The DPIMM 16 is available from GOULD and is normally arranged with the second port connected to peripheral equipment, i.e. for input/output purposes. In the illustrated arrangement however, the DPIMM 16 is used to enable the provision of a "reflective memory" system in accordance with the invention.
The principle of the illustrated reflective memory system is that each DPIMM data store 16, which contains 2 MB of memory, is logically partitioned at a predetermined point. All data and program on one side of the predetermined point is local to the SELBUS 12 of the unit 11 or 13 on which the DPIMM 16 resides, and all data and program on the other side of that point is shared via the bus 18 with the other units 11, 13. The read/write sense hardware 17 converts the usage of the DPIMM 16 to a local/shared system. The read/write sense logic unit 17 is connected to the second port on each DPIMM 16. If a CPU 13, 11 (or PPU 14) writes to an address in the shared portion of its associated DPIMM 16 this is detected by the read/write sense hardware 17 and the address and data is put on to the reflected memory bus 18. All DPIMMs 16 then automatically accept this data and enter it into their own memory. Thus all DPIMMs 16 have a copy of all of the shared data within their own memories. Each processing unit can thus access data it requires directly from its respective data store (DPIMM 16). Access is never delayed as the result of another processor accessing the same data store.
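
The effect of this hardware can be mimicked by a simple software model. The sketch below is only an illustration of the principle, with the word sizes assumed: a write landing in the shared partition is sensed and entered into every node's copy of the store, so each processor subsequently reads the value from its own store.

```c
#include <stdint.h>

#define NODES       9           /* nine nodes as in the embodiment      */
#define STORE_WORDS 0x80000u    /* 2 MB per DPIMM, as 32-bit words      */
#define SHARED_BASE 0x40000u    /* predetermined partition point        */

static uint32_t store[NODES][STORE_WORDS];   /* one DPIMM per node      */

/* Model of the read/write sense hardware: a write to the shared side
   of the partition is detected and the address and data are put onto
   the reflected memory bus; all DPIMMs then accept the data into their
   own memory. A write below the partition stays local. */
static void cpu_write(int node, uint32_t addr, uint32_t data)
{
    store[node][addr] = data;              /* write to own DPIMM        */
    if (addr >= SHARED_BASE)               /* sensed as shared          */
        for (int n = 0; n < NODES; n++)    /* via reflected memory bus  */
            store[n][addr] = data;         /* every copy updated        */
}
```
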
It is of fundamental importance to prevent "collisions" due to two or more processors trying to manipulate the same data item simultaneously. This is done by the read/write sense logic unit 17 that, as described above, is used to issue a single write command to each of the other nodes of the system. Each node has its own unique address partition so that only one node is capable of writing data to any one address in the shared data sections of the data stores, the address for an item of data in one data store 16 being the same as the address for that same item of data in all the other data stores 16. Thus although all nodes may attempt to write to the shared data sections of all the data stores 16, the only transactions which actually do so are those in which the address to which data is to be written lies within the memory segment for which that system is "master". For example, in a flight simulator comprising a flight processor, only that processor can actually change the stored value for altitude because altitude is within its address limits but outside the address limits of all the other processors. The other processors can read the stored altitude value but cannot change it. Thus numerical discrepancies are avoided without it being necessary to provide complex procedures to maintain the same chronological order for updates to the shared data in the different data stores.
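
To make the altitude example concrete, here is a small, hedged illustration. The address map is invented; the point is only that the same address is checked against each node's own master limits, so the flight processor's write succeeds and any other processor's write to the same address is rejected, while reads are unrestricted.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

enum { NODE_FLIGHT = 0, NODE_ENGINE = 1 };

#define ALTITUDE_ADDR 0x40010u  /* invented: lies in the flight segment */

/* Invented master segments within the shared data section. */
static bool owns(int node, uint32_t addr)
{
    switch (node) {
    case NODE_FLIGHT: return addr >= 0x40000u && addr < 0x48000u;
    case NODE_ENGINE: return addr >= 0x48000u && addr < 0x50000u;
    default:          return false;
    }
}

int main(void)
{
    printf("flight node writes altitude: %s\n",
           owns(NODE_FLIGHT, ALTITUDE_ADDR) ? "accepted" : "rejected");
    printf("engine node writes altitude: %s\n",
           owns(NODE_ENGINE, ALTITUDE_ADDR) ? "accepted" : "rejected");
    /* reads need no ownership check: every node holds its own copy */
    return 0;
}
```
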


As an additional feature the read/write sense hardware 17 can detect input/output read and write requests to addresses dedicated to input/output data in the DPIMM's memory 16. The address represents a location in a RAM memory 19 which is connected to user input/output high speed equipment, e.g. a Flight Simulator Input/Output linkage. This allows fast acquisition of data. (The DPIMM 16 has been used previously for input/output functions, but in block mode transfer, not for individual data element transfers.) This type of input/output can be referred to as Memory Mapped input/output.

The handling of communications between the SELBUS 12 of any one node and the reflected memory bus 18 will now be described in greater detail with reference to Figs. 3 and 4.
The buses 12 and 18 each carry parallel data, address, bus organisation and bus control signals at a rate of 26.6 MB/Sec. This data rate can be maintained for a bus length of forty feet but must be reduced if the bus is longer than this, e.g. to 13.3 MB/Sec for a bus eighty feet long. Fig. 3 schematically illustrates the bus 18, which has thirty two data lines, twenty four address lines, nine bus request lines, nine bus grant lines, four node identity lines, and control lines only two of which are shown as being of relevance to the communication of data via bus 18. There are nine nodes in all, each allocated respective bus request and grant lines, one node comprising CPU 13 and the others each comprising one CPU 11.
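
Purely as a descriptive aid, the signal groups listed above can be gathered into a structure describing one transfer on bus 18; the field widths follow the stated line counts, while the packing itself is an assumption of the sketch.

```c
#include <stdint.h>

/* One transfer on the reflected memory bus 18. The real bus drives
   these as parallel lines; this struct is only a descriptive model. */
typedef struct {
    uint32_t data;          /* thirty two data lines                    */
    unsigned address : 24;  /* twenty four address lines                */
    unsigned node_id : 4;   /* four node identity lines (monitoring)    */
    unsigned clock   : 1;   /* control line: starts a message cycle     */
    unsigned valid   : 1;   /* control line: "data valid" after checks  */
} rm_bus_transfer_t;
```
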
Fig. 4 shows in greater detail than Fig. 2 the arrangement of the DPIMM 16 and read/write sense logic 17 connected between bus 12 and bus 18. Assuming that the node processor associated with the arrangement of Fig. 4 writes to the data store 16, the data to be written and its address is loaded into latch 20 and the address to which it is to be written is loaded into an address comparator 21. Assuming that the data is successfully written to the store 16, a "successful write" signal is delivered to a detector 22. The successful write signal will be carried by one of the control lines of the bus 12 in an entirely conventional manner. If the address is not within a predetermined range set by the comparator 21, it relates to local data and is not to be shared with the other nodes. If on the other hand it is within the set range, the comparator provides an output to an AND gate 23. The detector 22 also provides an output to the gate 23 which controls the latch 20 so that the address and data in the latch 20 is loaded into a first in first out (FIFO) register 24 only if the address is within the set range and the successful write signal has been detected.
The FIFO 24 can assemble a queue of up to sixty four messages for transmission although normally there will be only one or two messages in the queue. If a queue of sixty or more messages is assembled a "busy" signal is delivered to the system so as to increase the priority of the respective node when making bus access requests. An appropriate circuit (not shown) is provided to suspend the associated node processor if the FIFO is filled up with messages awaiting transmission.
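
A software model of this transmit path is sketched below. The 64-message depth, the busy threshold at sixty and the suspension on a full FIFO come from the description; the queue representation is an assumption.

```c
#include <stdint.h>
#include <stdbool.h>

#define FIFO_DEPTH 64   /* up to sixty four queued messages           */
#define BUSY_LEVEL 60   /* "busy" asserted at sixty or more messages  */

typedef struct { uint32_t addr, data; } msg_t;

typedef struct {
    msg_t q[FIFO_DEPTH];
    int   head, count;
    bool  busy;          /* raises the node's bus access priority     */
    bool  suspend_cpu;   /* stop the node processor until drained     */
} tx_fifo_t;

/* Gate 23: the latched address/data pair enters FIFO 24 only if the
   address is within comparator 21's range AND the successful write
   signal has been detected by detector 22. */
static void tx_sense(tx_fifo_t *f, msg_t m, bool in_range, bool write_ok)
{
    if (!(in_range && write_ok))
        return;                              /* local data: not shared */
    if (f->count == FIFO_DEPTH) {
        f->suspend_cpu = true;               /* FIFO full              */
        return;
    }
    f->q[(f->head + f->count++) % FIFO_DEPTH] = m;
    f->busy = (f->count >= BUSY_LEVEL);      /* "busy" signal          */
}
```
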
When the FIFO 24 stores a message for transmission, this is detected by a bus request logic circuit 25 which outputs a bus request signal onto the respective line of bus 18. The bus request signal is transmitted to the CPU 13 (Fig. 2) which controls the operation of the bus 18. The CPU 13 grants access to the nodes which have messages to transmit one at a time in a preset order so that the first message in the queue at each node is transmitted during one cycle of operation of the bus 18, and so on. Thus, in due course the bus request logic will receive a "bus grant" signal from the bus 18 and will then cause the message in FIFO 24 to be put onto the bus 18 by a transmitter 26.
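
The arbitration performed by CPU 13 can be pictured as a scan of the nine request lines in a preset order, one granted message per bus cycle. The rotating scan below is an assumed concrete form of that preset order.

```c
#include <stdbool.h>

#define NODES 9   /* one bus request line and one bus grant line each */

/* Each cycle of bus 18, grant the next requesting node so that the
   first message in each node's queue is transmitted in turn. */
static int arbitrate(const bool request[NODES], int last_granted)
{
    for (int i = 1; i <= NODES; i++) {
        int n = (last_granted + i) % NODES;
        if (request[n])
            return n;          /* assert this node's bus grant line   */
    }
    return -1;                 /* no outstanding bus requests         */
}
```
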

Assuming now that the arrangement of Fig. 4 is that of a node receiving the transmitted message, the handling of that message will be described. When the CPU 13 grants a bus request, a clock signal is transmitted on one of the control lines of the bus 18. The clock signal is used to initiate a message transmission and reception cycle. When the message has been transmitted, it is checked for validity by the CPU 13 in accordance with conventional routines, e.g. a parity check, and if the data on the bus is found to be valid a "data valid" signal is transmitted on the other control line of the bus 18. Thus, the transmitted message is bracketed by the clock and data valid signals.

The transmitted data and address are loaded into a latch 27 by a receiver 28, and the address is loaded into an address comparator 29. The data valid signal is detected by data valid detector 30. An AND gate 31 has its inputs connected to the comparator 29 and detector 30. A predetermined range of addresses is set in the comparator corresponding to those parts of the data store 16 which can be written to by nodes other than that to which the store 16 is local. If the received address is within the range, and the data valid signal is detected, the gate 31 transfers the message data in latch 27 to a FIFO 32 which stores a queue of up to sixty four messages containing data to be written to the store 16.
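
The receive path mirrors the transmit path, and can be modelled in the same hedged way; since the validity check is performed by CPU 13, it appears here simply as a flag.

```c
#include <stdint.h>
#include <stdbool.h>

#define RX_DEPTH 64    /* FIFO 32: up to sixty four messages          */

typedef struct { uint32_t addr, data; } rx_msg_t;

typedef struct {
    rx_msg_t q[RX_DEPTH];
    int      head, count;
    uint32_t accept_lo, accept_hi;  /* comparator 29's preset range:
                                       parts of store 16 writable by
                                       other nodes                    */
} rx_path_t;

/* AND gate 31: latch 27's contents pass to FIFO 32 only when the
   received address is in range and "data valid" has been detected. */
static void rx_sense(rx_path_t *r, rx_msg_t m, bool data_valid)
{
    bool in_range = m.addr >= r->accept_lo && m.addr < r->accept_hi;
    if (data_valid && in_range && r->count < RX_DEPTH)
        r->q[(r->head + r->count++) % RX_DEPTH] = m;
    /* queued messages are later released to store 16 when the memory
       transfer request is granted (circuit 33, described below)      */
}
```
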

When the FIFO 32 has a message to be written to the store 16, a memory transfer request is made to a request logic circuit 33 which communicates with the store 16 and in due course receives a request grant signal from the store 16. The first message in the queue in FIFO 32 is then released to update the appropriate address of store 16.
It may be that a significant number of messages build up in the FIFOs 24 and 32 containing data items which are in due course written to the stores in an order different from the chronological order in which they were generated. However, as each address for shared data can only be written to by its own unique "master" node, and the messages generated by that node are assembled in and transmitted from the FIFO 24 in chronological order, each individual memory address is updated in the correct order. The data in different addresses may get out of chronological step somewhat but in real time interactive systems the rate of change of stored parameters is relatively slow when compared with the iteration rate of the system and therefore this does not present a problem. There is thus no need for the system designer to impose strict procedures to maintain chronology, it being merely necessary to set the address comparators 21 and 29 correctly. The system is therefore very flexible and relatively easy to implement even when considering very complex real time tasks such as flight simulation.
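
The ordering argument can be restated in a few lines of code: every update to a given address originates at that address's single master node and leaves that node's FIFO in generation order, so interleaving the FIFOs of different nodes can never reorder the updates to any one address. The assertion harness below is an invented illustration of that invariant.

```c
#include <assert.h>
#include <stdint.h>

#define ADDRS 16u   /* toy address space for the illustration */

typedef struct {
    uint32_t addr;
    uint32_t seq;   /* invented per-master generation counter,
                       not a field of the bus message           */
} upd_t;

static uint32_t last_seq[ADDRS];

static void apply(upd_t u)
{
    assert(u.addr < ADDRS);
    assert(u.seq > last_seq[u.addr]);  /* per-address order preserved */
    last_seq[u.addr] = u.seq;
    /* different addresses may still get out of step with each other,
       which the text argues is acceptable at real-time iteration
       rates                                                          */
}
```
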
The four node identity lines (Fig. 3) of bus 18 identify the node originating a message transmitted on the bus. This information is not required to enable data to be handled by the read/write sense logic 17 (Fig. 4) but is provided to enable traffic on the bus 18 to be monitored. Faults and "bottlenecks" on the bus 18 can be more easily detected and dealt with if this extra information is available.


Figs. 5 to 7 schematically illustrate three system configurations that are possible with the present invention. Fig. 5 shows the configuration of Fig. 2, that is a series of nodes N connected by a single reflected memory bus RM1. Fig. 6 shows a configuration similar to that of Fig. 5 but with parallel reflected memory buses RM1 and RM2. In such an arrangement the system would normally operate using bus RM1 with bus RM2 idle, but in the event of damage to bus RM1 the system could switch substantially immediately to bus RM2. By monitoring the condition of the buses and routing RM1 and RM2 separately a failsafe/self healing arrangement can be achieved. Further system security could be obtained by duplicating the processing nodes themselves with one normally operating and the other on hot standby, each of the pair of nodes being connected to both the buses RM1 and RM2.

In the arrangements of Figs. 5 and 6 each reflected memory bus is connected to each node so that the two nodes that are farthest apart must be no further apart than the carry range of the bus, typically forty feet at 26.6 MB/Sec. In some circumstances it is highly desirable to be able to locate nodes at a greater distance apart than this, e.g. in shipboard fire control systems where one seeks to retain system operability even if a node is totally destroyed and to widely distribute the nodes so that localised damage cannot disable a significant number of nodes. Fig. 7 illustrates an arrangement in accordance with the invention which enables the distance between closest adjacent nodes to be equal to the maximum carry range of the reflected memory bus.
In the arrangement of Fig. 7, a series of six nodes N1 to N6 are arranged effectively in five pairs N1 N2, N2 N3, N1 N4, N2 N5 and N3 N6 with each pair operating in accordance with the procedures described above with reference to Fig. 2. The pairs of nodes are linked by respective reflected memory buses RM1 to RM5. Each node has a memory partitioned into local and shared data sections, but the shared data section is further partitioned into sub-sections each dedicated to a respective reflected memory bus. Thus each node has a shared data section, but that of node N1 is divided into two sub-sections, that of node N2 is divided into three sub-sections, and that of node N4 is not subdivided. Each sub-section of the shared memory has its own read sense circuit equivalent to components 27 to 33 of Fig. 4.

Assuming that node N1 generates data to be shared, then that data has an address unique throughout the system to which only node N1 can write. Node N1 attempts to write that data into each of its shared memory sub-sections and is successful only if the address allocated to the data is within the range set by the address comparator of the read sense logic. Assuming the data is written to each of the sub-sections that data is then transferred to nodes N2 and N4. At node N2, software controls the transfer of the freshly written data in its own shared memory to nodes N3 and N5 by copying data from the memory sub-section devoted to memory bus RM1 into the memory sub-sections devoted to memory buses RM2 and RM4. A further transfer is arranged from node N3 to N6. Because each memory location can be written to by only one node, relatively simple procedures can be followed for transferring data between nodes (a sketch of such an exchange follows below). In the illustrated arrangement there is only one possible route for data between any two nodes. This need not be the case however. For example a further memory bus RM6 may be provided as shown by dotted lines between nodes N4 and N5. If data originating at node N1 was to be written in the shared memory of node N5 the software could be arranged to transfer the data via RM1 and RM4, or, if that failed, to alternatively select the route RM3 and RM6. All that is required is a software routine controlling the sub-sections of the shared data memories to which the data is written.
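
A hedged sketch of such a software routine at node N2 follows. The bus and sub-section names come from the figure description; the copy loop, the offsets and the fallback routing are assumptions about software the patent leaves unspecified.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define SUB_WORDS 0x1000u   /* invented sub-section size */

/* Node N2's shared data section: one sub-section per attached bus. */
typedef struct {
    uint32_t rm1[SUB_WORDS];   /* shared with N1 via bus RM1 */
    uint32_t rm2[SUB_WORDS];   /* shared with N3 via bus RM2 */
    uint32_t rm4[SUB_WORDS];   /* shared with N5 via bus RM4 */
} n2_shared_t;

/* Software exchange: data freshly reflected into the RM1 sub-section
   is copied into the RM2 and RM4 sub-sections; each copy is itself a
   shared-section write, so the sense hardware reflects it onward to
   N3 and N5. Corresponding offsets are assumed. */
static void n2_exchange(n2_shared_t *s, uint32_t off, uint32_t words)
{
    memcpy(&s->rm2[off], &s->rm1[off], words * sizeof(uint32_t));
    memcpy(&s->rm4[off], &s->rm1[off], words * sizeof(uint32_t));
}

/* With the optional bus RM6 fitted, N1-to-N5 traffic can try the
   route via RM1 and RM4 first and fall back to RM3 and RM6. */
typedef bool (*route_fn)(uint32_t off, uint32_t words);

static bool send_n1_to_n5(route_fn via_rm1_rm4, route_fn via_rm3_rm6,
                          uint32_t off, uint32_t words)
{
    return via_rm1_rm4(off, words) || via_rm3_rm6(off, words);
}
```
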
The arrangement of Fig. 8 is similar to that of Fig. 7 except that rather than having single nodes interconnected in pairs by respective reflected memory buses the nodes are interconnected in groups each containing nine nodes and these groups are in turn interconnected in pairs by further reflected memory buses connected to only one node within each group. As shown in Fig. 8, each group comprises a front end processor node FEN similar to that including processing unit 13 in Fig. 2 interconnected by a reflected memory bus RM6 to eight further nodes N1 to N8 similar to those including processing units 11 in Fig. 2. The nodes N8 are connected in pairs by reflected memory buses RM1 to RM5 and operate as "software exchanges" in the same manner as is the case with the node N2 in Fig. 7.

The "software exchanges" between different reflected memory buses introduce time delays in the transfer of data between the buses. Fig. 9 illustrates a "repeater" arrangement designed to replace the "software exchange" and thereby provide an automatic hardware connection which speeds up data transfer.

Referring to Fig. 9, the illustrated repeater replaces the node N8 between buses RM2 and RM3 and comprises three sets of read/write sense hardware similar to the read/write sense hardware 17 of Figs. 2 and 4. Each set comprises a read sense circuit RSC and a write sense circuit WSC for each port to which a reflected memory bus is connected, each read sense circuit communicating data to the write sense circuits of the other two ports. Each port has its own partitioned address, the range of which is set to limit the transfer of data between buses to that which is required. Thus the repeater reacts in exactly the same way as the other nodes on the buses to which it is connected and data transferred to a reflected memory bus by a repeater is handled in exactly the same way as data generated by any other node on that bus. Data is buffered through the repeater by FIFO circuits to control bus access as with a normal node. There is no software overhead involved in data transfers between buses, and traffic on the buses is limited to that which is essential by selecting the range of addresses for which data can be transferred by each read/write sense circuit. Thus the system operates at a high speed making it easier to use relatively low data rate buses with a long carry range.
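
One port of the repeater can be modelled as follows; the three-port structure and the per-port address partition come from the description, while the ranges and the forwarding callback are invented for the sketch.

```c
#include <stdint.h>
#include <stdbool.h>

#define PORTS 3

typedef struct { uint32_t addr, data; } msg_t;
typedef struct { uint32_t lo, hi; } range_t;

/* A port's read sense circuit picks a message off its bus and, if the
   address lies in that port's preset partition, hands it to the write
   sense circuits of the other two ports. forward() stands in for the
   FIFO buffering and bus-access machinery of a normal node. */
static void repeater_port(int port, msg_t m, const range_t limit[PORTS],
                          void (*forward)(int to_port, msg_t m))
{
    if (m.addr < limit[port].lo || m.addr >= limit[port].hi)
        return;                     /* traffic limited to the essential */
    for (int p = 0; p < PORTS; p++)
        if (p != port)
            forward(p, m);          /* handled like any other bus data  */
}
```
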
The repeater facilitates the provision of a hot standby system which is kept fully updated so that it can be switched in at once in the event of a failure. The repeater itself could be duplicated.


Claims (8)

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A real-time data processing system comprising at least two processing nodes, a data store in respect of each node, each data store being partitioned into sections a first one of which is reserved for the storage of data local to the respective node and a second one of which is reserved for the storage of data to be shared between nodes, a data link interconnecting the nodes, means at each node for generating a write message comprising an address and data to be written to that address whenever that node writes to an address in the second section of a data store, means for transmitting each generated message via the data link to each of the nodes, means for allocating to each address in the second sections of the data stores a respective node which is to be the master node for that address, and means for preventing data being written to any address in the second section of a data store other than by the allocated master node.
2. A real time data processing system according to claim 1, wherein the allocating and preventing means comprises an address range comparator in respect of each node for comparing the address of a data write message generated by that node with a preset range of addresses and for transferring the data write message to the data link only if the compared address is within the preset range.
3. A real time data processing system according to claim 2, comprising in respect of each node a latch into which each data write message is loaded, a first in first out register connected to the output of the latch, a detector for detecting a successful writing of data to the data store local to the node, an AND gate connected to the detector and comparator and controlling the latch such that the content of the latch is transferred to the register when the compared address is within range and a successful writing of data is detected, and a transmitter connected to the register for transmitting messages stored in the register over the data link.
4. A real time data processing system according to claim 2, wherein the allocating and preventing means further comprises an address comparator in respect of each node for comparing the address of a data write message received from the data link with a preset range of addresses and for transferring the received data write message to the local data store only if the compared address is within the preset range of addresses.
5. A real time data processing system according to claim 4, wherein one of the nodes controls traffic on the data link, and each other node comprises access request logic, the said one node comprising means for allocating data link access to the said other nodes one at a time in response to access requests from the request logic, means for applying a clock signal to the data link each time a node is granted access thereto, means for checking the validity of messages transmitted on the data link, means for applying a data valid signal to the data link if a checked message is valid, and means at each node for preventing data contained in transmitted messages from being transferred to the local data store until receipt of the data valid signal.
6. A real time data processing system according to claim 1, comprising two data links connected in parallel to the nodes.
7. A real time data processing system according to claim 1, comprising a plurality of groups of nodes with the nodes in each group being connected in parallel by a respective data link and at least one of the nodes belonging to two of the groups, wherein the said second section of the data store of the said at least one node is divided into a plurality of subsections each of which receives data to be shared with the nodes of a respective group of nodes, and means are provided for controlling the transfer of data from one subsection to the other when data is to be shared between two groups of nodes.
8. A real time data processing system according to claim 1, comprising two groups of nodes each interconnected by a respective data link and a repeater connected to each data link, the repeater comprising read/write sense hardware in respect of each data link for transferring messages having a first selected range of addresses from a first data link to the second and for transferring messages having a second selected range of addresses from the second data link to the first.
CA000476086A 1984-03-10 1985-03-08 Real-time data processing system Expired CA1221463A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB8406322 1984-03-10
GB848406322A GB8406322D0 (en) 1984-03-10 1984-03-10 Data processing system
GB848420617A GB8420617D0 (en) 1984-08-14 1984-08-14 Data processing system
GB8420617 1984-08-14

Publications (1)

Publication Number Publication Date
CA1221463A (en)

Family

ID=26287440

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000476086A Expired CA1221463A (en) 1984-03-10 1985-03-08 Real-time data processing system

Country Status (5)

Country Link
US (2) US4991079A (en)
CA (1) CA1221463A (en)
DE (1) DE3508291C2 (en)
FR (1) FR2561009B1 (en)
GB (1) GB2156554B (en)

Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5581732A (en) * 1984-03-10 1996-12-03 Encore Computer, U.S., Inc. Multiprocessor system with reflective memory data transfer device
US5146607A (en) * 1986-06-30 1992-09-08 Encore Computer Corporation Method and apparatus for sharing information between a plurality of processing units
US4951193A (en) * 1986-09-05 1990-08-21 Hitachi, Ltd. Parallel computer with distributed shared memories and distributed task activating circuits
FR2604003B1 (en) * 1986-09-15 1992-05-22 France Etat SYSTEM FOR INTERCONNECTING IDENTICAL OR COMPATIBLE COMPUTERS
AU598101B2 (en) * 1987-02-27 1990-06-14 Honeywell Bull Inc. Shared memory controller arrangement
NL8801116A (en) * 1988-04-29 1989-11-16 Oce Nederland Bv METHOD AND APPARATUS FOR CONVERTING CONFIRMATION DATA TO GRID DATA
US5124943A (en) * 1988-08-22 1992-06-23 Pacific Bell Digital network utilizing telephone lines
US5276806A (en) * 1988-09-19 1994-01-04 Princeton University Oblivious memory computer networking
US5594866A (en) * 1989-01-18 1997-01-14 Intel Corporation Message routing in a multi-processor computer system with alternate edge strobe regeneration
WO1991010195A1 (en) * 1990-01-05 1991-07-11 Sun Microsystems, Inc. High speed active bus
US5301340A (en) * 1990-10-31 1994-04-05 International Business Machines Corporation IC chips including ALUs and identical register files whereby a number of ALUs directly and concurrently write results to every register file per cycle
EP0543512B1 (en) * 1991-11-19 1999-10-06 International Business Machines Corporation Multiprocessor system
DE69329904T2 (en) * 1992-03-25 2001-06-13 Sun Microsystems Inc REAL-TIME PROCESSING SYSTEM
ES2170066T3 (en) * 1992-03-25 2002-08-01 Sun Microsystems Inc OPTICAL FIBER MEMORY COUPLING SYSTEM.
WO1993025945A1 (en) * 1992-06-12 1993-12-23 The Dow Chemical Company Stealth interface for process control computers
US5444714A (en) * 1992-11-30 1995-08-22 Samsung Electronics Co., Ltd. Communication and exchange processing system
DE69316559T2 (en) * 1992-12-03 1998-09-10 Advanced Micro Devices Inc Servo loop control
JP2826028B2 (en) * 1993-01-28 1998-11-18 富士通株式会社 Distributed memory processor system
US5515537A (en) * 1993-06-01 1996-05-07 The United States Of America As Represented By The Secretary Of The Navy Real-time distributed data base locking manager
US5581703A (en) * 1993-06-29 1996-12-03 International Business Machines Corporation Method and apparatus for reserving system resources to assure quality of service
US5694548A (en) * 1993-06-29 1997-12-02 International Business Machines Corporation System and method for providing multimedia quality of service sessions in a communications network
US5388097A (en) * 1993-06-29 1995-02-07 International Business Machines Corporation System and method for bandwidth reservation for multimedia traffic in communication networks
US5530907A (en) * 1993-08-23 1996-06-25 Tcsi Corporation Modular networked image processing system and method therefor
EP0640929A3 (en) * 1993-08-30 1995-11-29 Advanced Micro Devices Inc Inter-processor communication via post office RAM.
US5456252A (en) * 1993-09-30 1995-10-10 Cedars-Sinai Medical Center Induced fluorescence spectroscopy blood perfusion and pH monitor and method
US5503559A (en) * 1993-09-30 1996-04-02 Cedars-Sinai Medical Center Fiber-optic endodontic apparatus and method
JPH07225727A (en) * 1994-02-14 1995-08-22 Fujitsu Ltd Computer system
US5606666A (en) * 1994-07-19 1997-02-25 International Business Machines Corporation Method and apparatus for distributing control messages between interconnected processing elements by mapping control messages of a shared memory addressable by the receiving processing element
US5588132A (en) * 1994-10-20 1996-12-24 Digital Equipment Corporation Method and apparatus for synchronizing data queues in asymmetric reflective memories
US5574863A (en) * 1994-10-25 1996-11-12 Hewlett-Packard Company System for using mirrored memory as a robust communication path between dual disk storage controllers
US5550973A (en) * 1995-03-15 1996-08-27 International Business Machines Corporation System and method for failure recovery in a shared resource system having a moving write lock
JPH0926892A (en) * 1995-04-27 1997-01-28 Tandem Comput Inc Computer system with remotely duplicated and dynamically reconstitutible memory
US6295585B1 (en) 1995-06-07 2001-09-25 Compaq Computer Corporation High-performance communication method and apparatus for write-only networks
US6049889A (en) * 1995-06-07 2000-04-11 Digital Equipment Corporation High performance recoverable communication method and apparatus for write-only networks
EP0817094B1 (en) * 1996-07-02 2002-10-09 Sun Microsystems, Inc. A split-SMP computer system
US5754877A (en) * 1996-07-02 1998-05-19 Sun Microsystems, Inc. Extended symmetrical multiprocessor architecture
US5923847A (en) * 1996-07-02 1999-07-13 Sun Microsystems, Inc. Split-SMP computer system configured to operate in a protected mode having repeater which inhibits transaction to local address partiton
US5758183A (en) * 1996-07-17 1998-05-26 Digital Equipment Corporation Method of reducing the number of overhead instructions by modifying the program to locate instructions that access shared data stored at target addresses before program execution
US5887184A (en) * 1997-07-17 1999-03-23 International Business Machines Corporation Method and apparatus for partitioning an interconnection medium in a partitioned multiprocessor computer system
US6961801B1 (en) 1998-04-03 2005-11-01 Avid Technology, Inc. Method and apparatus for accessing video data in memory across flow-controlled interconnects
US7836329B1 (en) * 2000-12-29 2010-11-16 3Par, Inc. Communication link protocol optimized for storage architectures
US7831974B2 (en) * 2002-11-12 2010-11-09 Intel Corporation Method and apparatus for serialized mutual exclusion
US6898687B2 (en) * 2002-12-13 2005-05-24 Sun Microsystems, Inc. System and method for synchronizing access to shared resources
US6917967B2 (en) * 2002-12-13 2005-07-12 Sun Microsystems, Inc. System and method for implementing shared memory regions in distributed shared memory systems
US6795850B2 (en) * 2002-12-13 2004-09-21 Sun Microsystems, Inc. System and method for sharing memory among multiple storage device controllers
US7028147B2 (en) * 2002-12-13 2006-04-11 Sun Microsystems, Inc. System and method for efficiently and reliably performing write cache mirroring
US7185223B2 (en) * 2003-09-29 2007-02-27 International Business Machines Corporation Logical partitioning in redundant systems
US20060039949A1 (en) * 2004-08-20 2006-02-23 Nycz Jeffrey H Acetabular cup with controlled release of an osteoinductive formulation
US20070038432A1 (en) * 2005-08-15 2007-02-15 Maurice De Grandmont Data acquisition and simulation architecture
US9578054B1 (en) 2015-08-31 2017-02-21 Newman H-R Computer Design, LLC Hacking-resistant computer design
US10949289B1 (en) * 2018-12-28 2021-03-16 Virtuozzo International Gmbh System and method for maintaining data integrity of data on a storage device

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3618045A (en) * 1969-05-05 1971-11-02 Honeywell Inf Systems Management control subsystem for multiprogrammed data processing system
US3940743A (en) * 1973-11-05 1976-02-24 Digital Equipment Corporation Interconnecting unit for independently operable data processing systems
US3845474A (en) * 1973-11-05 1974-10-29 Honeywell Inf Systems Cache store clearing operation for multiprocessor mode
US3889237A (en) * 1973-11-16 1975-06-10 Sperry Rand Corp Common storage controller for dual processor system
US3873819A (en) * 1973-12-10 1975-03-25 Honeywell Inf Systems Apparatus and method for fault-condition signal processing
JPS5440182B2 (en) * 1974-02-26 1979-12-01
US4007450A (en) * 1975-06-30 1977-02-08 International Business Machines Corporation Data sharing computer network
US4212057A (en) * 1976-04-22 1980-07-08 General Electric Company Shared memory multi-microprocessor computer system
US4228496A (en) * 1976-09-07 1980-10-14 Tandem Computers Incorporated Multiprocessor system
US4209839A (en) * 1978-06-16 1980-06-24 International Business Machines Corporation Shared synchronous memory multiprocessing arrangement
US4253146A (en) * 1978-12-21 1981-02-24 Burroughs Corporation Module for coupling computer-processors
US4351025A (en) * 1979-07-06 1982-09-21 Hall Jr William B Parallel digital computer architecture
US4335426A (en) * 1980-03-10 1982-06-15 International Business Machines Corporation Remote processor initialization in a multi-station peer-to-peer intercommunication system
US4394731A (en) * 1980-11-10 1983-07-19 International Business Machines Corporation Cache storage line shareability control for a multiprocessor system
US4414624A (en) * 1980-11-19 1983-11-08 The United States Of America As Represented By The Secretary Of The Navy Multiple-microcomputer processing
US4442487A (en) * 1981-12-31 1984-04-10 International Business Machines Corporation Three level memory hierarchy using write and share flags
DE3376590D1 (en) * 1982-04-28 1988-06-16 Int Computers Ltd Data processing system
US4539637A (en) * 1982-08-26 1985-09-03 At&T Bell Laboratories Method and apparatus for handling interprocessor calls in a multiprocessor system
US4527238A (en) * 1983-02-28 1985-07-02 Honeywell Information Systems Inc. Cache with independent addressable data and directory arrays
US4642755A (en) * 1983-03-31 1987-02-10 At&T Bell Laboratories Shared memory with two distinct addressing structures
US4669043A (en) * 1984-02-17 1987-05-26 Signetics Corporation Memory access controller
DE3788826T2 (en) * 1986-06-30 1994-05-19 Encore Computer Corp Method and device for sharing information between a plurality of processing units.

Also Published As

Publication number Publication date
GB2156554B (en) 1987-07-29
US4991079A (en) 1991-02-05
GB8505967D0 (en) 1985-04-11
GB2156554A (en) 1985-10-09
US5072373A (en) 1991-12-10
FR2561009A1 (en) 1985-09-13
FR2561009B1 (en) 1991-03-29
DE3508291A1 (en) 1985-09-12
DE3508291C2 (en) 1997-01-16

Similar Documents

Publication Publication Date Title
CA1221463A (en) Real-time data processing system
US4965718A (en) Data processing system incorporating a memory resident directive for synchronizing multiple tasks among plurality of processing elements by monitoring alternation of semaphore data
CA1312963C (en) Software configurable memory architecture for data processing system having graphics capability
US4041472A (en) Data processing internal communications system having plural time-shared intercommunication buses and inter-bus communication means
EP0248906B1 (en) Multi-port memory system
US6606676B1 (en) Method and apparatus to distribute interrupts to multiple interrupt handlers in a distributed symmetric multiprocessor system
KR920008430B1 (en) Read in process memory apparatus
US5276886A (en) Hardware semaphores in a multi-processor environment
EP0886225B1 (en) Microprocessor architecture capable of supporting multiple heterogenous processors
US6816947B1 (en) System and method for memory arbitration
EP0472879B1 (en) Method and apparatus for dynamic detection and routing of non-uniform traffic in parallel buffered multistage interconnection networks
US8335892B1 (en) Cache arbitration between multiple clients
US20030140197A1 (en) Multi-processor computer system using partition group directories to maintain cache coherence
US20070208885A1 (en) Methods And Apparatus For Providing Independent Logical Address Space And Access Management
CN104050033A (en) System and method for hardware scheduling of indexed barriers
EP0360527A2 (en) Parallel computer system using a SIMD method
US6518971B1 (en) Graphics processing system with multiple strip breakers
CN103106120A (en) Multithreaded physics engine with impulse propagation
JPH07504774A (en) real-time processing system
JPH02232747A (en) Synthesization and processing of memory access operation for multiprocessor system
CN104050032A (en) System and method for hardware scheduling of conditional barriers and impatient barriers
US5249297A (en) Methods and apparatus for carrying out transactions in a computer system
JP2010009580A (en) Partition-free multisocket memory system architecture
JPH06348593A (en) Data transfer controller
US20110271060A1 (en) Method And System For Lockless Interprocessor Communication

Legal Events

Date Code Title Description
MKEX Expiry

Effective date: 20050308