WO2006044122A2 - System and method for streaming sequential data through an automotive switch fabric network - Google Patents

System and method for streaming sequential data through an automotive switch fabric network

Info

Publication number
WO2006044122A2
WO2006044122A2 · PCT/US2005/034630
Authority
WO
WIPO (PCT)
Prior art keywords
data
message
record
switch fabric
data packet
Prior art date
Application number
PCT/US2005/034630
Other languages
French (fr)
Other versions
WO2006044122A3 (en)
Inventor
Patrick D. Jordan
Hai Dong
Hugh W. Johnson
Prakash U. Kartha
Samuel M. Levenson
Original Assignee
Motorola, Inc.
Priority date
Filing date
Publication date
Application filed by Motorola, Inc. filed Critical Motorola, Inc.
Publication of WO2006044122A2 publication Critical patent/WO2006044122A2/en
Publication of WO2006044122A3 publication Critical patent/WO2006044122A3/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2854: Wide area networks, e.g. public data networks

Definitions

  • This invention in general relates to in-vehicle communication networks and particularly to a system and method for streaming sequential data through an automotive switch fabric network.
  • the switch fabric is a web of interconnected switching devices or nodes.
  • the switching device or nodes are joined by communication links for the transmission of data packets between the switching devices or nodes.
  • Control devices, sensors, actuators and the like are coupled to the switch fabric, and the switch fabric facilitates communication between these coupled devices.
  • the coupled devices may be indicator lights, vehicle control systems, vehicle safety systems, and comfort and convenience systems.
  • a command to actuate a device or devices may be generated by a control element coupled to the switch fabric and is communicated to the device or devices via the switch fabric nodes.
  • FIG. 1 is a block diagram illustrating an embodiment of a vehicle switch fabric network
  • FIG. 2 is a diagram illustrating a portion of the switch fabric network connected to a plurality of interfaces and devices;
  • FIG. 3 is a diagram illustrating a portion of the switch fabric network connected to a diagnostic device and interface for the downloading of large records and files;
  • FIG. 4 is a diagram illustrating one embodiment of the components of a target node in the switch fabric network
  • FIG. 5 is a diagram illustrating two memory portions of the target node in the switch fabric network for receiving large records and files
  • FIG. 6 is a message flow diagram illustrating one embodiment of the types of message that may be exchanged during the reprogramming of the target node;
  • FIG. 7 illustrates various data packets that may be adapted for use in a vehicle switch fabric network;
  • FIG. 8 illustrates a relatively large record or message that needs to be transmitted through the vehicle switch fabric network
  • FIG. 9 illustrates a data packet having a small payload portion relative to the record or message of FIG. 8.
  • FIG. 10 illustrates a set of payload portions of data packets that carry information contained in the record or message of FIG. 8.
  • the system and method described herein takes large data records and breaks them down into smaller units (data packets) that fit within the constraints of the physical layer on which the communication links in the switch fabric network are built.
  • the smaller data packets are assigned with a message identification and a sequence number.
  • Data packets associated with the same data record or message are assigned with the same message identification but may differ in their sequence number.
  • Each data packet is transmitted over the vehicle switch fabric network to a destination node. At the destination node, the data packets may be reassembled into the original data format based on the message identification and sequence numbers. The reassembled message may then be presented to an application in the node for processing.
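  The segmentation and reassembly scheme above can be sketched in a few lines. This is an illustrative model, not the patent's implementation: the function names, the 8-byte payload limit, and the tuple packet format are assumptions.

```python
# Illustrative sketch (assumed names and formats): split a large record into
# fixed-size payloads tagged with a message identification and a sequence
# number, then regroup and reorder them at the destination node.

MAX_PAYLOAD = 8  # bytes per packet payload, constrained by the physical layer


def segment(record: bytes, message_id: int):
    """Break a record into (message_id, sequence, chunk) packets."""
    return [
        (message_id, seq, record[i:i + MAX_PAYLOAD])
        for seq, i in enumerate(range(0, len(record), MAX_PAYLOAD))
    ]


def reassemble(packets):
    """Group packets by message identification and restore byte order
    using the sequence numbers."""
    messages = {}
    for message_id, seq, chunk in packets:
        messages.setdefault(message_id, []).append((seq, chunk))
    return {
        mid: b"".join(chunk for _, chunk in sorted(parts))
        for mid, parts in messages.items()
    }
```

  Because each packet carries both identifiers, packets of several messages can be interleaved on the fabric and still be reassembled correctly, even when they arrive out of order.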
  • FIG. 1 illustrates a vehicle 20 including a network 22 to which various vehicle devices 24a-d are coupled via respective interfaces 26a-d.
  • the vehicle devices 24a-d may be sensors, actuators, and processors used in connection with various vehicle functional systems and sub-systems, such as, but not limited to, diagnostic, control-by-wire applications for throttle, braking and steering control, adaptive suspension, power accessory control, communications, entertainment, and the like.
  • the devices 24a-d may be external or internal to the vehicle.
  • the embodiment in FIG. 1 includes an external device 24a and several internal devices 24b-d.
  • the interfaces 26a-d are any suitable interface for coupling the particular vehicle device 24a-d to the network 22, and may be wire, optical, wireless or combinations thereof.
  • the vehicle device 24a-d is particularly adapted to provide one or more functions associated with the vehicle 20. These vehicle devices 24a-d may be data producing, such as a sensor, data consuming, such as an actuator, or processing, which both produces and consumes data.
  • the external device 24a is a diagnostic device that permits a user to exchange data with the network of the vehicle, as will be explained further below. Data produced by or provided to a vehicle device 24a-d, and carried by the network 22, is independent of the function of the vehicle device 24a-d itself. That is, the interfaces 26a-d provide independent data exchange between the coupled device 24a-d and the network 22.
  • the connection between the devices 24a-d and the interfaces 26a-d may be a wired or wireless connection.
  • FIG. 1 illustrates both types of connections between the diagnostic device 24a and its interface 26a, a wired connection 25 and a wireless connection 27.
  • the device 24a and the interface 26a include wireless communication transceivers permitting the units to communicate with each other via an optical or radio frequency transmission.
  • the interface 26a may be a single device or incorporated as a single assembly as part of a gateway node 30a. Regardless of the type of connection or type of assembly, the interface 26a to the diagnostic device 24a should arbitrate the linking of the device 24a to the network 22 through an authentication, security and encryption process.
  • the network 22 may include a switch fabric 28 defining a plurality of communication paths between the vehicle devices 24a-d.
  • the communication paths permit multiple simultaneous peer-to-peer, one-to-many, many-to-many, etc. communications between the vehicle devices 24a-d.
  • data exchanged, for example, between devices 24a and 24d may utilize any available path or paths between the vehicle devices 24a, 24d.
  • a single path through the switch fabric 28 may carry all of a single data communication between one vehicle device 24a and another vehicle device 24d, or several communication paths may carry portions of the data communication. Subsequent communications may use the same path or other paths as dictated by the then state of the network 22. This provides reliability and speed advantages over bus architectures that provide single communication paths between devices, and hence are subject to failure with failure of the single path.
  • communications between other of the devices 24b, 24c may occur simultaneously using the communication paths within the switch fabric 28.
  • the network 22 may comply with transmission control protocol/Internet protocol (TCP/IP), asynchronous transfer mode (ATM), Infiniband, RapidIO, or other packet data protocols. As such, the network 22 utilizes data packets, having fixed or variable length, defined by the applicable protocol. For example, if the network 22 uses the ATM communication protocol, ATM standard data cells are used.
  • the internal vehicle devices 24b-d need not be discrete devices. Instead, the devices may be systems or subsystems of the vehicle and may include one or more legacy communication media, i.e., legacy bus architectures such as the Controller Area Network (CAN) Protocol, the SAE J1850 Communication Standard, or the Local Interconnect Network (LIN) Protocol.
  • the respective interface 26b-d may be configured as a proxy or gateway to permit communication between the network 22 and the legacy device.
  • an active network 22 in accordance with one embodiment of the present invention includes a switch fabric 28 of nodes 30a-h that communicatively couples a plurality of devices 24a-d via respective interfaces 26a-d.
  • Connection links or media 32 interconnect the nodes 30a-h.
  • the connection media 32 may be bounded media, such as wire or optical fiber, unbounded media, such as free optical or radio frequency, or combinations thereof.
  • the term node is used broadly in connection with the definition of the switch fabric 28 to include any number of intelligent structures for communicating data packets within the network 22 without an arbiter or other network controller and may include: switches, intelligent switches, routers, bridges, gateways and the like.
  • For instance, in the embodiment shown in FIG. 2, the node 30a may be a gateway node that connects the diagnostic interface 26a (and the diagnostic device 24a) to the switch fabric 28.
  • Data is carried through the network 22 in data packet form guided by the nodes 30a-h.
  • the cooperation of the nodes 30a-h and the connection media 32 define a plurality of communication paths between the devices 24a-d that are communicatively coupled to the network 22.
  • a route 34 defines a communication path from the gateway node 30a to a target node 30g.
  • should the route 34 become unavailable, a new route, illustrated as route 36, can be used.
  • the route 36 may be dynamically generated or previously defined as a possible communication path, to ensure the communication between the gateway node 30a and the target node 30g.
  • FIG. 3 shows a user 42 that can interact with a diagnostic device 24a.
  • the diagnostic device 24a contains a software manager 40 that includes instructions for initiating and controlling a reprogramming process of upgrading or replacing software and code in the switch fabric 28.
  • the diagnostic device 24a is connected via a wired link 25 or a wireless link 27 to diagnostic interface 26a.
  • the diagnostic interface 26a couples the diagnostic device 24a to the vehicle network 22 (and the switch fabric 28) through one of the nodes 30a-h, for example a gateway node 30a.
  • the diagnostic interface 26a is separate from the nodes 30a-h in the switch fabric 28.
  • the diagnostic interface 26a and its functions may be incorporated into the gateway node 30a.
  • Each of the nodes 30a-h in the switch fabric 28 contains software components to enable data communications between the nodes 30a-h and devices 24a-d.
  • a user 42 may use the diagnostic device 24a and the system manager 40 to send commands to upgrade or replace software and code in the switch fabric 28, including reprogramming software and code residing in the nodes 30a-h.
  • FIG. 4 shows one embodiment of a target node 30g that may be in need of new software components.
  • the target node 30g includes a processor 52, at least one transceiver 54, and a memory 56.
  • the memory 56 includes an erasable memory portion 62 and a protected memory portion 64.
  • the processor 52 is configured to transfer control and execute instructions from software components residing in either the erasable memory portion 62 or the protected memory portion 64.
  • the erasable memory portion 62 contains a set of software components (code block) to operate the target node 30g for normal data communications and operation within the switch fabric 28.
  • the software components in the erasable memory portion 62 may include the complete software for an application layer 72, a network layer 74, and a link (or bus) layer 78.
  • the erasable memory portion 62 may also include an embedded Distributed System Management (DSM) component 76 that can satisfy or act upon requests from the system manager 40.
  • DSM Distributed System Management
  • the DSM component 76 may be configured to work at one or more of the layers 72, 74, 78.
  • the protected memory portion 64 contains a set of software components (boot block) that includes functions to load software components safely and securely to the erasable memory portion 62.
  • the software components residing on the protected memory portion 64 include a flash memory loader module 80, a system manager agent 82 (that can communicate with the system manager 40), and standard components for a network layer 84, a Distributed System Management (DSM) component 86, and a link (or bus) layer 88.
  • DSM Distributed System Management
  • the protected memory portion 64 cannot be erased by the user 42, the diagnostic device 24a, or the system manager 40.
  • the protected memory portion 64 is also not accessible from the software components residing on the erasable memory portion 62.
  • upon node startup or reset, control should go directly to the software components residing on the protected memory portion 64, including the flash memory loader module 80 mentioned above. If the flash memory loader module 80 fails to initialize hardware in the target node 30g, the target node 30g may be configured to go to a low power standby. In one embodiment, the flash memory loader 80, upon node startup, will determine whether valid software components reside (and are available) in the erasable memory portion 62. This will ensure that corrupted or partial software components in the erasable memory portion 62 do not deadlock the target node 30g. This determination may be done by checking a key number stored in a prescribed location in the erasable memory portion 62.
  • if the key number is stored in the prescribed location, the processor 52 may be configured to switch control of the target node 30g from executing the software components residing on its protected memory portion 64 to the software components residing on its erasable memory portion 62. If, however, the key number is not stored in the prescribed location, the flash memory loader 80 may assume that the software components in the erasable memory portion 62 are not valid and send a notification that the target node 30g needs to be reprogrammed. This notification may be sent to the gateway node 30a, which will then forward the request to the system manager 40 residing on the diagnostic device 24a. The flash memory loader 80 should then remain in an idle state to await instructions from the system manager 40 to initiate reprogramming of the software components in the erasable memory portion 62, as will be explained in more detail below.
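  The boot-time decision above can be sketched as follows. This is a hypothetical illustration: the sentinel value, its location, and the return labels are assumptions, since the patent does not specify them.

```python
# Assumed sentinel and location; the patent only says a key number is
# checked at a prescribed location in the erasable memory portion.
VALID_KEY = 0xA5A5
KEY_LOCATION = 0


def boot(erasable_memory: dict) -> str:
    """Boot-block decision: run the erasable code block only if the key
    number is present; otherwise request reprogramming and idle."""
    if erasable_memory.get(KEY_LOCATION) == VALID_KEY:
        return "run_code_block"        # switch control to erasable portion
    return "request_reprogramming"     # notify gateway; idle in boot block
```

  The key number acts as a commit marker: it is only written once a download completes and verifies, so a partially programmed node always falls back into the protected boot block instead of executing corrupt code.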
  • the diagnostic system may be configured to allow the system manager 40 to query each node 30a-h in the switch fabric 28 to determine whether a node needs to be reprogrammed.
  • the system manager 40 may initiate a status dialogue with a target node 30g by sending a status request message to the gateway node 30a.
  • the gateway node 30a will then route the status request message to the target node 30g.
  • the target node 30g may then be configured to respond to the status request message by transmitting a status response message to the gateway node 30a, which may then forward the message back to the system manager 40.
  • a user 42 may decide to reprogram a specific target node 30g.
  • FIG. 6 is a message flow diagram that illustrates one embodiment of a sequence of steps that a user 42 may take in reprogramming a target node 30g.
  • the message flow diagram shows messages that may be exchanged between the user 42, the system manager 40 (residing on the diagnostic device 24a), the gateway node 30a, and the target node 30g.
  • the user 42 may initiate the reprogramming operation using the system manager 40 by selecting the node identification of the target node 30g to be reprogrammed (arrow 102).
  • the user 42 may then load a record file in the system manager 40 from a host's file system (arrow 104).
  • the system manager 40 residing on the diagnostic device 24a, will then initiate a download session with the target node 30g.
  • the system manager 40 may send an initiate download session message through the diagnostic interface 26a to the gateway node 30a (arrow 106).
  • the gateway node 30a will then route the initiate download session message to the target node 30g (arrow 108).
  • the target node 30g may be configured to switch from executing the software components residing on its erasable memory portion 62 to the software components residing on its protected memory portion 64.
  • software components in both the erasable memory portion 62 and the protected memory portion 64 include at least standard software components for the network layer 74, the Distributed System Management (DSM) component 76, and the link (or bus) layer 78. This will cause normal network functions to continue uninterrupted. However, any applications running on the target node 30g will not be available.
  • DSM Distributed System Management
  • the target node 30g may then send an acknowledge download session message to the gateway node 30a (arrow 110), which will then forward the message to the system manager 40 (arrow 112).
  • after receiving the acknowledgement from the target node 30g, the system manager 40 will then send an erase flash command to the gateway node 30a for each block of memory that needs to be erased (arrow 114).
  • the diagnostic device 24a may be configured to analyze the current software components and send one or more commands to erase some or all of the memory blocks in erasable memory portion 62.
  • the gateway node 30a will route the erase flash command to the target node 30g (arrow 116).
  • upon receipt of the erase flash command, the target node 30g will erase the memory locations identified in the command.
  • the target node 30g may then send an acknowledge erase flash command to the gateway node 30a (arrow 118), which will then forward the message to the system manager 40 (arrow 120).
  • the system manager 40 may then send a new set of compiled software components or records to the gateway node 30a (arrow 122).
  • the software components or records are included in the build file loaded into the system manager 40 (arrow 104).
  • the downloadable build file for reprogramming the target node 30g may contain thousands of records. Each record may be relatively large in size compared to the physical constraints of the data packets that can be transmitted over the communication links 32. In that case, the records should be broken down as described further below in relation to FIGS. 7-10.
  • the gateway node 30a will route the new set of compiled software components or records to the target node 30g (arrow 124).
  • the target node 30g may then send an acknowledgement to the gateway node 30a (arrow 126) when each component or record is received.
  • the gateway node 30a will then forward the message to the system manager 40 (arrow 128).
  • the system manager 40 may repeat the process of downloading software components or records until all necessary components or records are received by the target node 30g.
  • the system manager 40 may then send a check data message to the gateway node 30a (arrow 130).
  • the check data message includes a checksum for the new downloaded software components.
  • the gateway node 30a will route the check data message to the target node 30g (arrow 132).
  • the target node 30g will then calculate the checksum for the new set of software components downloaded into its erasable memory portion 62 and compare it against the checksum received from the system manager 40. Assuming that the checksums match, the target node 30g will then write the new set of software components into its erasable memory portion 62.
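  The check data step might be modeled as below. The patent does not specify the checksum algorithm; a simple byte-sum modulo 256 is assumed purely for illustration.

```python
def checksum(data: bytes) -> int:
    """One possible checksum: sum of all bytes modulo 256 (an assumption;
    the patent leaves the algorithm unspecified)."""
    return sum(data) % 256


def verify(downloaded: bytes, expected: int) -> bool:
    """Compare the checksum computed over the downloaded components with
    the value carried in the check data message."""
    return checksum(downloaded) == expected
```

  Only after this comparison succeeds does the target node commit the new code block, which is what protects the node from running a corrupted download.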
  • the target node 30g may then send an acknowledge check data message to the gateway node 30a (arrow 134), which will then forward the message to the system manager 40 (arrow 136).
  • the system manager 40 may then send an entry point message to the gateway node 30a (arrow 138).
  • the entry point message includes an entry point for the code block.
  • the gateway node 30a will route the entry point message to the target node 30g (arrow 140).
  • the target node 30g sends an acknowledge entry point message to the gateway node 30a (arrow 142), which will then forward the message to the system manager 40 (arrow 144).
  • the system manager 40 may then inform the user 42 about the successful completion of the download operation and provide the user 42 with an option to restore or reset the target node 30g (arrow 146).
  • the user 42 may wish to postpone the restoration of the node until diagnosis of other nodes is complete.
  • the user 42 may select a restore option in the system manager 40 (arrow 148).
  • the system manager 40 may then send a restore operation message to the gateway node 30a (arrow 150).
  • the gateway node 30a will then route the restore operation message to the target node 30g (arrow 152).
  • after receiving the restore operation message, the target node 30g, including the processor 52, will then switch from executing the software components residing on its protected memory portion 64 to the software components residing on its erasable memory portion 62. This will allow normal operation of applications to run again on the target node 30g.
  • the target node 30g may then send an acknowledge restore operation message to the gateway node 30a (arrow 154), which will then forward the message to the system manager 40 (arrow 156).
  • the system manager 40 may then alert the user 42 that the acknowledgement was received from the target node 30g.
  • FIG. 7 illustrates several data packet configurations that may be used in connection with switch fabric networks according to the embodiments of the present invention.
  • the network 22 may be configured to operate in accordance with TCP/IP, ATM, RapidIO, Infiniband and other suitable communication protocols. These data packets are structured to conform to the applicable standard.
  • a data packet for this invention may include a data packet 200 having a header portion 202, a payload portion 204, and a trailer portion 206.
  • the network 22 and the nodes 30a-h forming the switch fabric 28 may contain processing capability.
  • a data packet 210 includes an active portion 218 along with a header portion 212, a payload portion 214, and a trailer portion 216.
  • the active portion 218 may cause the network element to take some specific action, for example providing alternative routing of the data packet, reconfiguration of the data packet, reconfiguration of the node, or other action, based upon the content of the active portion 218.
  • the data packet 220 includes an active portion 228 integrated with the header portion 222 along with a payload portion 224 and a trailer portion 226.
  • the data packet 230 includes a header portion 232, a payload portion 234 and a trailer portion 236.
  • An active portion 238 is also provided, disposed between the payload portion 234 and the trailer portion 236.
  • an active portion 248 may be integrated with the trailer portion 246 along with a payload portion 244 and a header portion 242.
  • the data packet 250 illustrates a first active portion 258 and a second active portion 260, wherein the first active portion 258 is integrated with the header portion 252 and the second active portion 260 is integrated with the trailer portion 256.
  • the data packet 250 also includes a payload portion 254. Other arrangements of the data packets for use with the present invention may be envisioned.
  • the active portion of the data packet may represent a packet state.
  • the active portion may reflect a priority of the data packet based on aging time. That is, a packet initially generated may have a normal state, but for various reasons, is not promptly delivered. As the packet ages as it is routed through the active network, the active portion can monitor time since the data packet was generated or time when the packet is required, and change the priority of the data packet accordingly.
  • the packet state may also represent an error state, either of the data packet or of one or more nodes of the network 22.
  • the active portion may also be used to messenger data unrelated to the payload within the network 22, track the communication path taken by the data packet through the network 22, provide configuration information (route, timing, etc.) to nodes 30a-h of the network 22, provide functional data to one or more devices 24a-d coupled to the network 22 or provide receipt acknowledgement.
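  The age-based priority escalation described above might look like the following sketch. The thresholds, labels, and time units are assumptions for illustration; the patent leaves the escalation policy unspecified.

```python
def priority(age_ms: int, deadline_ms: int) -> str:
    """Derive a packet's priority from how close it is to its required
    delivery time (assumed policy: escalate in the last quarter of the
    deadline window, flag packets past the deadline)."""
    remaining = deadline_ms - age_ms
    if remaining <= 0:
        return "expired"               # may trigger an error state
    if remaining < deadline_ms * 0.25:
        return "high"                  # escalate as the deadline nears
    return "normal"
```

  A node inspecting the active portion at each hop could apply such a function to re-queue aging packets ahead of fresh ones, so that a packet delayed by congestion still meets its delivery time.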
  • the payload portion of the data packets carries data and other information relating to the message being transmitted through the network 22.
  • the size of the data packet (including the payload portion) will be constrained by the physical layer on which the switch fabric 28 is built. There are situations where the message size at the application layer will be larger than the packet size allowed to be transmitted over the network 22. One situation, as described above, is where software components or records need to be downloaded to a node 30a-h. Accordingly, in one embodiment of the present invention, a message in the application layer that is larger than the packet size of the network 22 will be broken into smaller units to fit the packet size limitation.
  • Each unit is placed into an individual data packet and transmitted independently over the switch fabric 28 to a destination node (such as the target node 30g receiving downloaded software components or records described above).
  • at the destination node, the individual data packets are reassembled into the original message form and passed to the application that receives and processes the message.
  • FIGS. 8 and 9 further illustrate one embodiment of dividing a large message down into individual units or data packets for transmission through the switch fabric 28.
  • FIG. 8 illustrates a message 300 containing a variety of fields including a message type field 302, a message length field 304, an address field 306, a message data field 308, and a checksum field 310.
  • FIG. 9 illustrates a data packet 200 having a specific header portion 202, a payload portion 204, and a trailer portion 206. Assume for purposes of illustration that the payload portion 204 of network data packets 200 is limited to 8 bytes. Also assume for purposes of illustration that the message 300 that needs to be transmitted through the switch fabric 28 is larger than the network limitation.
  • the downloadable build file for reprogramming node software components may contain thousands of build records.
  • one build record may include: the message type field 302 (1 byte); the message length field 304 (1 byte); the address field 306 (3 bytes); the message data field 308 (32 bytes); and the checksum field 310 (1 byte).
  • the message 300 is divided into smaller data packets 200 where each data packet is assigned the same message identification but different sequence numbers. This is shown further in FIG. 10.
  • the message 300 may be divided into a plurality of data packets 200 (seven data packets in this example), each having different payload portions 204a-204g.
  • the data packets may include an active portion (such as those shown in FIG. 7 as data packets 210, 220, 230, 240, 250) or no active portion (such as that shown in FIG. 7 as data packet 200).
  • the payload portion 204a-204g may include eight fields that are each one byte long: a message identification field 322, a command or record identification (RID) / sequence field 324, and six data fields 326a-f.
  • Each payload portion 204a-204g is carried over the switch fabric 28 by one switch fabric data packet 200.
  • the message identification field 322 for each of the payload portions 204a-g will contain a unique message identification assigned to the particular record or message 300 being transmitted.
  • the message identification within the field 322 will be the same for all payload portions 204a-g that are common to the same record or message 300.
  • the message identification is used by the flash loader module 80 to track the received data packets so that it can associate different payload portions 204a-g with the same record or message 300.
  • the command or sequence field 324 contains either a command or a sequence number associated with the payload portion 204a-g.
  • the command will indicate to the receiving node how to use the data carried by the following payload portions 204a-g.
  • the command value should be different from the record identification (RID) / sequence value by design.
  • Each payload portion 204a-g may have a record identification (RID) / sequence value except for the first payload portion 204a, which contains a command.
  • the record identification (RID) / sequence values may be used by the flash loader module 80 to group the received data packets so that it can re-assemble the record or message in the right order at the receiving node.
  • the first payload portion 204a may include the values for the address field 306 (divided into 1 byte segments), the message length field 304 (1 byte), and the message type field 302 (1 byte) of the original record or message 300.
  • the first payload portion 204a may also include a record identification (RID) (1 byte).
  • the remaining payload portions 204b-g may include the values found in the message data field 308 (divided into 6 byte segments) and the checksum field 310 of the original record or message 300.
  • the value in the checksum 310 field may be used to protect against possible data corruption.
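  Following FIGS. 8-10, a 38-byte build record (1-byte type, 1-byte length, 3-byte address, 32-byte data, 1-byte checksum) fits into seven 8-byte payload portions: the first carries a command plus the address, length, type, and RID fields, and the rest carry 6-byte slices of the data and checksum. A sketch under those assumptions; the command value, the field order within the first payload, and the zero padding of the last slice are not specified by the patent:

```python
# Assumed command value, chosen to be distinct from sequence numbers by design.
DOWNLOAD_RECORD_CMD = 0xFF


def pack_record(msg_id: int, msg_type: int, address: bytes,
                data: bytes, cksum: int, rid: int = 1):
    """Pack one build record into 8-byte payload portions per FIG. 10:
    byte 0 = message identification, byte 1 = command or sequence number,
    bytes 2-7 = data (assumed layout)."""
    assert len(address) == 3
    # First payload: command, then address, message length, type, and RID.
    payloads = [bytes([msg_id, DOWNLOAD_RECORD_CMD]) + address
                + bytes([len(data), msg_type, rid])]
    # Remaining payloads: 6-byte slices of the data plus the checksum,
    # with the final slice zero-padded to a full payload (an assumption).
    body = data + bytes([cksum])
    for seq, i in enumerate(range(0, len(body), 6), start=1):
        payloads.append(bytes([msg_id, seq]) + body[i:i + 6].ljust(6, b"\x00"))
    return payloads
```

  With a 32-byte data field the body plus checksum is 33 bytes, which needs six 6-byte slices; together with the leading command payload that yields the seven data packets of FIG. 10.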
  • Each data packet is transmitted over the vehicle switch fabric network to a destination node.
  • the data packets may be reassembled into the original data format based on the message identification and sequence numbers.
  • the reassembled message may then be presented to an application in the node for processing.

Abstract

The system and method described herein takes large data records and breaks them down into smaller units (data packets) that fit within the constraints of the physical layer on which the communication links in the switch fabric network are built. The smaller data packets are assigned a message identification and a sequence number. Data packets associated with the same data record or message are assigned the same message identification but may differ in their sequence number. Each data packet is transmitted over the vehicle switch fabric network to a destination node. At the destination node, the data packets may be reassembled into the original data format based on the message identification and sequence numbers. The reassembled message may then be presented to an application in the node for processing.

Description

SYSTEM AND METHOD FOR STREAMING SEQUENTIAL DATA THROUGH AN AUTOMOTIVE SWITCH FABRIC NETWORK
The present application claims priority from provisional application, Serial No. 60/619,669, entitled "System and Method for Streaming Sequential Data Through an Automotive Switch Fabric Network," filed October 18, 2004, which is commonly owned and incorporated herein by reference in its entirety.
FIELD OF THE INVENTION This invention in general relates to in-vehicle communication networks and particularly to a system and method for streaming sequential data through an automotive switch fabric network.
BACKGROUND OF THE INVENTION The commonly assigned United States patent application entitled "Vehicle
Active Network," serial no. 09/945,581, filed August 31, 2001, Publication No. US 20030043793, the disclosure of which is hereby expressly incorporated herein by reference, introduces the concept of an active network that includes a switch fabric. The switch fabric is a web of interconnected switching devices or nodes. The switching device or nodes are joined by communication links for the transmission of data packets between the switching devices or nodes. Control devices, sensors, actuators and the like are coupled to the switch fabric, and the switch fabric facilitates communication between these coupled devices. The coupled devices may be indicator lights, vehicle control systems, vehicle safety systems, and comfort and convenience systems. A command to actuate a device or devices may be generated by a control element coupled to the switch fabric and is communicated to the device or devices via the switch fabric nodes. In the context of vehicular switch fabric networks, a challenge is presented in terms of how relatively large data records and messages are transported across the switch fabric network. In particular, when sending large data records and messages across the switch fabric network, the size of the data packets may be constrained by the physical layer on which the communication links that join the switching devices or nodes are built. A need exists for the ability to transmit large records and messages across the switch fabric when size restrictions for the communication links exist. It is, therefore, desirable to provide a system and method to overcome or minimize most, if not all, of the preceding problems especially in the area of transmitting large data records and messages across the nodes in an automotive switch fabric network. This would help in several areas including the reprogramming of switch fabric nodes where large records need to be downloaded.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating an embodiment of a vehicle switch fabric network;
FIG. 2 is a diagram illustrating a portion of the switch fabric network connected to a plurality of interfaces and devices; FIG. 3 is a diagram illustrating a portion of the switch fabric network connected to a diagnostic device and interface for the downloading of large records and files;
FIG. 4 is a diagram illustrating one embodiment of the components of a target node in the switch fabric network;
FIG. 5 is a diagram illustrating two memory portions of the target node in the switch fabric network for receiving large records and files;
FIG. 6 is a message flow diagram illustrating one embodiment of the types of message that may be exchanged during the reprogramming of the target node; FIG. 7 illustrates various data packets that may be adapted for use in a vehicle switch fabric network;
FIG. 8 illustrates a relatively large record or message that needs to be transmitted through the vehicle switch fabric network;
FIG. 9 illustrates a data packet having a small payload portion relative to the record or message of FIG. 8; and
FIG. 10 illustrates a set of payload portions of data packets that carry information contained in the record or message of FIG. 8.
While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the invention as defined by the appended claims. DETAILED DESCRIPTION
What is described is a system and method for streaming sequential data through a vehicle switch fabric network. This is particularly useful in areas such as reprogramming nodes in the automotive switch fabric network, where relatively large records or messages need to be transmitted through the switch fabric, although the invention may be used in other areas. In sum, the system and method described herein takes large data records and breaks them down into smaller units (data packets) that fit within the constraints of the physical layer on which the communication links in the switch fabric network are built. The smaller data packets are assigned a message identification and a sequence number. Data packets associated with the same data record or message are assigned the same message identification but may differ in their sequence numbers. Each data packet is transmitted over the vehicle switch fabric network to a destination node. At the destination node, the data packets may be reassembled into the original data format based on the message identification and sequence numbers. The reassembled message may then be presented to an application in the node for processing.
Now, turning to the drawings, FIG. 1 illustrates a vehicle 20 including a network 22 to which various vehicle devices 24a-d are coupled via respective interfaces 26a-d. The vehicle devices 24a-d may be sensors, actuators, and processors used in connection with various vehicle functional systems and sub-systems, such as, but not limited to, diagnostic, control-by-wire applications for throttle, braking and steering control, adaptive suspension, power accessory control, communications, entertainment, and the like. The devices 24a-d may be external or internal to the vehicle. The embodiment in FIG. 1 includes an external device 24a and several internal devices 24b-d.
The interfaces 26a-d are any suitable interface for coupling the particular vehicle device 24a-d to the network 22, and may be wire, optical, wireless or combinations thereof. The vehicle device 24a-d is particularly adapted to provide one or more functions associated with the vehicle 20. These vehicle devices 24a-d may be data producing, such as a sensor, data consuming, such as an actuator, or processing, which both produces and consumes data. In one embodiment, the external device 24a is a diagnostic device that permits a user to exchange data with the network of the vehicle, as will be explained further below. Data produced by or provided to a vehicle device 24a-d, and carried by the network 22, is independent of the function of the vehicle device 24a-d itself. That is, the interfaces 26a-d provide independent data exchange between the coupled device 24a-d and the network 22.
The connection between the devices 24a-d and the interfaces 26a-d may be a wired or wireless connection. FIG. 1 illustrates both types of connections between the diagnostic device 24a and its interface 26a, a wired connection 25 and a wireless connection 27. In the wireless connection, the device 24a and the interface 26a include wireless communication transceivers permitting the units to communicate with each other via an optical or radio frequency transmission. Additionally, the interface 26a may be a single device or incorporated as a single assembly as part of a gateway node 30a. Regardless of the type of connection or type of assembly, the interface 26a to the diagnostic device 24a should arbitrate the linking of the device 24a to the network 22 through an authentication, security and encryption process. The network 22 may include a switch fabric 28 defining a plurality of communication paths between the vehicle devices 24a-d. The communication paths permit multiple simultaneous peer-to-peer, one-to-many, many-to-many, etc. communications between the vehicle devices 24a-d. During operation of the vehicle 20, data exchanged, for example, between devices 24a and 24d may utilize any available path or paths between the vehicle devices 24a, 24d. In operation, a single path through the switch fabric 28 may carry all of a single data communication between one vehicle device 24a and another vehicle device 24d, or several communication paths may carry portions of the data communication. Subsequent communications may use the same path or other paths as dictated by the then-current state of the network 22. This provides reliability and speed advantages over bus architectures that provide single communication paths between devices, and hence are subject to failure with failure of the single path. Moreover, communications between other of the devices 24b, 24c may occur simultaneously using the communication paths within the switch fabric 28.
The network 22 may comply with transmission control protocol/Internet protocol (TCP/IP), asynchronous transfer mode (ATM), Infiniband, RapidIO, or other packet data protocols. As such, the network 22 utilizes data packets, having fixed or variable length, defined by the applicable protocol. For example, if the network 22 uses the asynchronous transfer mode (ATM) communication protocol, ATM standard data cells are used.
The internal vehicle devices 24b-d need not be discrete devices. Instead, the devices may be systems or subsystems of the vehicle and may include one or more legacy communication media, i.e., legacy bus architectures such as the Controller Area Network (CAN) Protocol, the SAE J1850 Communication Standard, the Local
Interconnect Network (LIN) Protocol, the FLEXRAY Communications System Standard, the Media Oriented Systems Transport or MOST Protocol, or similar bus structures. In such embodiments, the respective interface 26b-d may be configured as a proxy or gateway to permit communication between the network 22 and the legacy device.
Referring to FIG. 2, an active network 22 in accordance with one embodiment of the present invention includes a switch fabric 28 of nodes 30a-h that communicatively couples a plurality of devices 24a-d via respective interfaces 26a-d. Connection links or media 32 interconnect the nodes 30a-h. The connection media 32 may be bounded media, such as wire or optical fiber, unbounded media, such as free optical or radio frequency, or combinations thereof. In addition, the term node is used broadly in connection with the definition of the switch fabric 28 to include any number of intelligent structures for communicating data packets within the network 22 without an arbiter or other network controller and may include: switches, intelligent switches, routers, bridges, gateways and the like. For instance, in the embodiment shown in FIG. 2, the node 30a may be a gateway node that connects the diagnostic interface 26a (and the diagnostic device 24a) to the switch fabric 28. Data is carried through the network 22 in data packet form guided by the nodes 30a-h. The cooperation of the nodes 30a-h and the connection media 32 defines a plurality of communication paths between the devices 24a-d that are communicatively coupled to the network 22. For example, a route 34 defines a communication path from the gateway node 30a to a target node 30g. If there is a disruption along the route 34 inhibiting communication of the data packets from the gateway node 30a to the target node 30g, for example, if one or more nodes are at capacity or have become disabled or there is a disruption in the connection media joining the nodes along route 34, a new route, illustrated as route 36, can be used. The route 36 may be dynamically generated or previously defined as a possible communication path, to ensure the communication between the gateway node 30a and the target node 30g.
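The failover between routes 34 and 36 described above can be sketched in simplified form. The node labels, the static route tables, and the `select_route` helper below are hypothetical illustrations only; a real switch fabric would learn of link disruptions through the network itself rather than from a fixed set of failed links.

```python
# Minimal sketch (hypothetical names): a gateway node choosing among
# predefined routes to a target node, falling back when a link on the
# primary route is disrupted.

PRIMARY_ROUTE = ["30a", "30b", "30e", "30g"]   # e.g., route 34 in FIG. 2
BACKUP_ROUTE = ["30a", "30c", "30f", "30g"]    # e.g., route 36

def link_ok(node_a, node_b, failed_links):
    """A link is usable unless it appears in the failed-link set."""
    return (node_a, node_b) not in failed_links

def select_route(routes, failed_links):
    """Return the first route whose every hop is usable, else None."""
    for route in routes:
        hops = zip(route, route[1:])               # consecutive node pairs
        if all(link_ok(a, b, failed_links) for a, b in hops):
            return route
    return None

# If the link 30b-30e is disrupted, the gateway falls back to route 36.
assert select_route([PRIMARY_ROUTE, BACKUP_ROUTE],
                    {("30b", "30e")}) == BACKUP_ROUTE
```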
Some applications may require reprogramming of one or more nodes 30a-h in the switch fabric 28. The embodiment and topology shown in FIG. 3 advantageously permits the ability to upgrade or replace software and code in the switch fabric 28, including reprogramming software and code residing in the nodes 30a-h. FIG. 3 shows a user 42 that can interact with a diagnostic device 24a. The diagnostic device 24a contains a software manager 40 that includes instructions for initiating and controlling a reprogramming process of upgrading or replacing software and code in the switch fabric 28. The diagnostic device 24a is connected via a wired link 25 or a wireless link 27 to the diagnostic interface 26a. The diagnostic interface 26a couples the diagnostic device 24a to the vehicle network 22 (and the switch fabric 28) through one of the nodes 30a-h, for example a gateway node 30a. In one embodiment, the diagnostic interface 26a is separate from the nodes 30a-h in the switch fabric 28. However, in other embodiments, the diagnostic interface 26a and its functions may be incorporated into the gateway node 30a. Each of the nodes 30a-h in the switch fabric 28 contains software components to enable data communications between the nodes 30a-h and devices 24a-d. A user 42 may use the diagnostic device 24a and the system manager 40 to send commands to upgrade or replace software and code in the switch fabric 28, including reprogramming software and code residing in the nodes 30a-h. For purposes of illustrating the present invention, assume that a user 42 desires to reprogram software components residing in a target node 30g. FIG. 4 shows one embodiment of a target node 30g that may be in need of new software components.
To illustrate the functionality and the adaptability of the target node 30g, it is shown to include a plurality of input/output ports 50a-d although separate input and output ports could also be used. Various configurations of the target node 30g having more or fewer ports may be used in the network 22 depending on the application. The target node 30g includes a processor 52, at least one transceiver 54, and a memory 56. The memory 56 includes an erasable memory portion 62 and a protected memory portion 64. The processor 52 is configured to transfer control and execute instructions from software components residing in either the erasable memory portion 62 or the protected memory portion 64. The erasable memory portion 62 contains a set of software components (code block) to operate the target node 30g for normal data communications and operation within the switch fabric 28. In one embodiment, as shown in FIG. 5, the software components in the erasable memory portion 62 may include the complete software for an application layer 72, a network layer 74, and a link (or bus) layer 76. The erasable memory portion 62 may also include an embedded Distributed System Management (DSM) component 76 that can satisfy or act upon requests from the system manager 40. The DSM component 76 may be configured to work at one or more of the layers 72, 74, 78.
The protected memory portion 64 contains a set of software components (boot block) that includes functions to load software components safely and securely to the erasable memory portion 62. In one embodiment, as shown in FIG. 5, the software components residing on the protected memory portion 64 include a flash memory loader module 80, a system manager agent 82 (that can communicate with the system manager 40), and standard components for a network layer 84, a Distributed System Management (DSM) component 86, and a link (or bus) layer 88. The protected memory portion 64 cannot be erased by the user 42, the diagnostic device 24a, or the system manager 40. The protected memory portion 64 is also not accessible from the software components residing on the erasable memory portion 62.
Upon startup of the target node 30g, control should go directly to the software components residing on the protected memory portion 64, including the flash memory loader module 80 mentioned above. If the flash memory loader module 80 fails to initialize hardware in the target node 30g, the target node 30g may be configured to go to a low power standby. In one embodiment, the flash memory loader 80, upon node startup, will determine if valid software components reside (and are available) in the erasable memory portion 62. This will ensure that corrupted or partial software components in the erasable memory portion 62 do not deadlock the target node 30g. This determination may be done by checking a key number stored in a prescribed location in the erasable memory portion 62. If the key number is stored in the prescribed location, the processor 52 may be configured to switch control of the target node 30g from executing the software components residing on its protected memory portion 64 to the software components residing on its erasable memory portion 62. If, however, the key number is not stored in the prescribed location, the flash memory loader 80 may assume that the software components in the erasable memory portion 62 are not valid and send a notification that the target node 30g needs to be reprogrammed. This notification may be sent to the gateway node 30a that will then forward the request to the system manager 40 residing on the diagnostic device 24a. The flash memory loader 80 should then remain in an idle state to await instructions from the system manager 40 to initiate reprogramming of the software components in the erasable memory portion 62, as will be explained in more detail below.
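The boot-time decision described above can be sketched as follows. The key location, key value, and function name are hypothetical illustrations; the patent specifies only that a key number at a prescribed location marks the erasable memory contents as valid.

```python
# Hypothetical sketch of the boot-time check: the flash loader inspects
# a prescribed location in erasable memory for a key number before
# transferring control to the application code block.

KEY_LOCATION = 0x0000   # prescribed offset within erasable memory (assumed)
KEY_NUMBER = 0xA5C3     # value written only after a verified download (assumed)

def boot_decision(erasable_memory):
    """Return which code the node should run at startup."""
    stored = erasable_memory.get(KEY_LOCATION)
    if stored == KEY_NUMBER:
        return "run_code_block"       # valid application components present
    return "await_reprogramming"      # stay in boot block, notify system manager

# A node with a valid key boots its application; a blank node waits.
assert boot_decision({KEY_LOCATION: KEY_NUMBER}) == "run_code_block"
assert boot_decision({}) == "await_reprogramming"
```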
Additionally, the diagnostic system may be configured to allow the system manager 40 to query each node 30a-h in the switch fabric 28 to determine whether a node needs to be reprogrammed. In one embodiment, the system manager 40 may initiate a status dialogue with a target node 30g by sending a status request message to the gateway node 30a. The gateway node 30a will then route the status request message to the target node 30g. The target node 30g may then be configured to respond to the status request message by transmitting a status response message to the gateway node 30a, who may then forward the message back to the system manager 40. Depending on the content of the status response message, a user 42 may decide to reprogram a specific target node 30g.
FIG. 6 is a message flow diagram that illustrates one embodiment of a sequence of steps that a user 42 may take in reprogramming a target node 30g. The message flow diagram shows messages that may be exchanged between the user 42, the system manager 40 (residing on the diagnostic device 24a), the gateway node 30a, and the target node 30g. The user 42 may initiate the reprogramming operation using the system manager 40 by selecting the node identification of the target node 30g to be reprogrammed (arrow 102). The user 42 may then load a record file in the system manager 40 from a host's file system (arrow 104).
The system manager 40, residing on the diagnostic device 24a, will then initiate a download session with the target node 30g. In one embodiment, the system manager 40 may send an initiate download session message through the diagnostic interface 26a to the gateway node 30a (arrow 106). The gateway node 30a will then route the initiate download session message to the target node 30g (arrow 108).
In response to receiving an initiate download session message, the target node 30g, including processor 52, may be configured to switch from executing the software components residing on its erasable memory portion 62 to the software components residing on its protected memory portion 64. As mentioned above, it is preferred that software components in both the erasable memory portion 62 and the protected memory portion 64 include at least standard software components for the network layer 74, the Distributed System Management (DSM) component 76, and the link (or bus) layer 78. This will cause normal network functions to continue uninterrupted. However, any applications running on the target node 30g will not be available. After switching control from the software components residing on its erasable memory portion 62 to the software components residing on its protected memory portion 64, the target node 30g may then send an acknowledge download session message to the gateway node 30a (arrow 110), who will then forward the message to the system manager 40 (arrow 112).
After receiving the acknowledgement from the target node 30g, the system manager 40 will then send an erase flash command to the gateway node 30a for each block of memory that needs to be erased (arrow 114). The diagnostic device 24a may be configured to analyze the current software components and send one or more commands to erase some or all of the memory blocks in the erasable memory portion 62. The gateway node 30a will route the erase flash command to the target node 30g (arrow 116). Upon receipt of the erase flash command, the target node 30g will erase the memory locations specified in the command. The target node 30g may then send an acknowledge erase flash command to the gateway node 30a (arrow 118), who will then forward the message to the system manager 40 (arrow 120).
The system manager 40 may then send a new set of compiled software components or records to the gateway node 30a (arrow 122). The software components or records are included in the build file loaded into the system manager 40 (arrow 104). The downloadable build file for reprogramming the target node 30g may contain thousands of records. Each record may be relatively large in size compared to the physical constraints of the data packets that can be transmitted over the communication links 32. In that case, the records should be broken down as described further below in relation to FIGS. 7-10. In any event, the gateway node 30a will route the new set of compiled software components or records to the target node 30g (arrow 124). The target node 30g may then send an acknowledgement to the gateway node 30a (arrow 126) when each component or record is received. The gateway node 30a will then forward the message to the system manager 40 (arrow 128). The system manager 40 may repeat the process of downloading software components or records until all necessary components or records are received by the target node 30g.
The system manager 40 may then send a check data message to the gateway node 30a (arrow 130). In one embodiment, the check data message includes a checksum for the new downloaded software components. The gateway node 30a will route the check data message to the target node 30g (arrow 132). The target node 30g will then calculate the checksum for the new set of software components and compare it against the checksum received from the system manager 40. Assuming that the checksums match, the target node 30g will then write the new set of software components into its erasable memory portion 62.
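As a rough illustration of the check data step, the sketch below assumes a simple 8-bit additive checksum; the patent does not specify the checksum algorithm, and the function names are hypothetical.

```python
# Sketch (assumed 8-bit additive checksum): the target node recomputes
# a checksum over the downloaded components and commits them to its
# erasable memory only when the value matches the check data message.

def checksum8(data: bytes) -> int:
    """Simple 8-bit additive checksum over the downloaded bytes."""
    return sum(data) & 0xFF

def check_and_commit(downloaded: bytes, expected: int, erasable_memory: list):
    """Write the components on a checksum match; otherwise reject them."""
    if checksum8(downloaded) != expected:
        return "nack"                     # mismatch: do not write
    erasable_memory.extend(downloaded)    # commit the new components
    return "ack"

mem = []
payload = bytes([1, 2, 3])
assert check_and_commit(payload, 6, mem) == "ack" and mem == [1, 2, 3]
assert check_and_commit(payload, 0, []) == "nack"
```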
The target node 30g may then send an acknowledge check data message to the gateway node 30a (arrow 134), who will then forward the message to the system manager 40 (arrow 136). The system manager 40 may then send an entry point message to the gateway node 30a (arrow 138). In one embodiment, the entry point message includes an entry point for the code block. The gateway node 30a will route the entry point message to the target node 30g (arrow 140). In response, the target node 30g sends an acknowledge entry point message to the gateway node 30a (arrow 142), who will then forward the message to the system manager 40 (arrow 144).
Upon receiving the acknowledgement for the entry point message, the system manager 40 may then inform the user 42 about the successful completion of the download operation and provide the user 42 with an option to restore or reset the target node 30g (arrow 146). The user 42 may wish to postpone the restoration of the node until diagnosis of other nodes is complete. However, when the user 42 desires to restore the node, the user 42 may select a restore option to the system manager 40 (arrow 148). At this point, the system manager 40 may then send a restore operation message to the gateway node 30a (arrow 150). The gateway node 30a will then route the restore operation message to the target node 30g (arrow 152). After receiving the restore operation message, the target node 30g, including processor 52, will then switch from executing the software components residing on its protected memory portion 64 to the software components residing on its erasable memory portion 62. This will allow normal operation of applications to run again on the target node 30g. The target node 30g may then send an acknowledge restore operation message to the gateway node 30a (arrow 154), who will then forward the message to the system manager 40 (arrow 156). The system manager 40 may then alert the user 42 that the acknowledgement was received from the target node 30g
(arrow 158). FIG. 7 illustrates several data packet configurations that may be used in connection with switch fabric networks according to the embodiments of the present invention. As described, the network 22 may be configured to operate in accordance with TCP/IP, ATM, RapidIO, Infiniband and other suitable communication protocols. These data packets include the structure required to conform to the applicable standard. In one embodiment, a data packet for this invention may take the form of a data packet 200 having a header portion 202, a payload portion 204, and a trailer portion 206. As described herein, the network 22 and the nodes 30a-h forming the switch fabric 28 may contain processing capability. In that regard, a data packet 210 includes, along with a header portion 212, a payload portion 214, and a trailer portion 216, an active portion 218. The active portion 218 may cause the network element to take some specific action, for example providing alternative routing of the data packet, reconfiguration of the data packet, reconfiguration of the node, or other action, based upon the content of the active portion 218. The data packet 220 includes an active portion 228 integrated with the header portion 222 along with a payload portion 224 and a trailer portion 226. The data packet 230 includes a header portion 232, a payload portion 234 and a trailer portion 236. An active portion 238 is also provided, disposed between the payload portion 234 and the trailer portion 236. Alternatively, as shown by the data packet 240, an active portion 248 may be integrated with the trailer portion 246 along with a payload portion 244 and a header portion 242. The data packet 250 illustrates a first active portion 258 and a second active portion 260, wherein the first active portion 258 is integrated with a header portion 252 and the second active portion 260 is integrated with the trailer portion 256. The data packet 250 also includes a payload portion 254.
Other arrangements of the data packets for use with the present invention may be envisioned.
The active portion of the data packet may represent a packet state. For example, the active portion may reflect a priority of the data packet based on aging time. That is, a packet initially generated may have a normal state, but for various reasons, is not promptly delivered. As the packet ages as it is routed through the active network, the active portion can monitor time since the data packet was generated or time when the packet is required, and change the priority of the data packet accordingly. The packet state may also represent an error state, either of the data packet or of one or more nodes of the network 22. The active portion may also be used to messenger data unrelated to the payload within the network 22, track the communication path taken by the data packet through the network 22, provide configuration information (route, timing, etc.) to nodes 30a-h of the network 22, provide functional data to one or more devices 24a-d coupled to the network 22 or provide receipt acknowledgement.
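The aging-based priority escalation described above can be sketched minimally. The threshold and the single-step escalation are hypothetical choices; the patent leaves the priority policy open.

```python
# Hypothetical sketch of aging-based priority in an active portion: a
# node escalates a packet's priority once it has waited past a limit.

def effective_priority(base_priority, created_at, now, age_limit=0.5):
    """Escalate priority by one step when the packet has aged too long."""
    if (now - created_at) > age_limit:
        return base_priority + 1          # aged packet: bump priority
    return base_priority                  # fresh packet: unchanged

# A packet generated at t=0.0 is escalated by t=1.0 but not at t=0.1.
assert effective_priority(0, created_at=0.0, now=1.0) == 1
assert effective_priority(0, created_at=0.0, now=0.1) == 0
```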
The payload portion of the data packets carries data and other information relating to the message being transmitted through the network 22. The size of the data packet (including the payload portion) will be constrained by the physical layer on which the switch fabric 28 is built. There are situations where the message size at the application layer will be larger than the packet size allowed to be transmitted over the network 22. One situation, as described above, is where software components or records need to be downloaded to a node 30a-h. Accordingly, in one embodiment of the present invention, a message in the application layer that is larger than the packet size of the network 22 will be broken into smaller units to fit the packet size limitation. Each unit is placed into an individual data packet and transmitted independently over the switch fabric 28 to a destination node (such as the target node 30g receiving downloaded software components or records described above). At the destination node, the individual data packets are reassembled into the original form and passed to the application that receives and processes the message.
FIGS. 8 and 9 further illustrate one embodiment of dividing a large message down into individual units or data packets for transmission through the switch fabric 28. FIG. 8 illustrates a message 300 containing a variety of fields including a message type field 302, a message length field 304, an address field 306, a message data field 308, and a checksum field 310. FIG. 9 illustrates a data packet 200 having a specific header portion 202, a payload portion 204, and a trailer portion 206. Assume for purposes of illustration that the payload portion 204 of network data packets 200 is limited to 8 bytes. Also assume for purposes of illustration that the message 300 that needs to be transmitted through the switch fabric 28 is larger than the network limitation. For instance, the downloadable build file for reprogramming node software components may contain thousands of build records. In one embodiment, where the size of each build record is up to 38 bytes, one build record may include: the message type field 302 (1 byte); the message length field 304 (1 byte); the address field 306 (3 bytes); the message data field 308 (32 bytes); and the checksum field 310 (1 byte). In one embodiment of the present invention, the message 300 is divided into smaller data packets 200 where each data packet is assigned the same message identification but different sequence numbers. This is shown further in FIG. 10.
In FIG. 10, the message 300 may be divided into a plurality of data packets 200 (seven data packets in this example), each having different payload portions 204a-204g. The data packets may include an active portion (such as those shown in FIG. 7 as data packets 210, 220, 230, 240, 250) or no active portion (such as that shown in FIG. 7 as data packet 200). In either event, in one embodiment, where the payload portion 204 is constrained to 8 bytes, the payload portion 204a-204g may include eight fields that are each one byte long: a message identification field 322, a command or record identification (RID) / sequence field 324, and six data fields 326a-f. Each payload portion 204a-204g is carried over the switch fabric 28 by one switch fabric data packet 200.
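The fragmentation scheme of FIG. 10 can be sketched as follows, assuming the 8-byte payload layout described above (message identification byte, command/sequence byte, six data bytes). The command value, the ordering of fields within the first payload, and the zero-padding of the final fragment are illustrative assumptions, not specified by the patent.

```python
# A sketch of FIG. 10: byte 0 of each payload carries the message
# identification, byte 1 a command (first fragment) or a sequence
# number, and bytes 2-7 carry record data.

CMD_DOWNLOAD = 0xF0   # hypothetical command value, distinct from sequence numbers

def fragment(msg_id, command, header_bytes, data_bytes, chunk=6):
    """Split one record into 8-byte payloads: header fragment first."""
    payloads = [bytes([msg_id, command]) + bytes(header_bytes)]
    for seq, start in enumerate(range(0, len(data_bytes), chunk), 1):
        segment = bytes(data_bytes[start:start + chunk])
        segment = segment.ljust(chunk, b"\x00")     # pad the last fragment
        payloads.append(bytes([msg_id, seq]) + segment)
    return payloads

# A 38-byte build record: 3-byte address + length + type + RID in the
# first payload, then 32 data bytes + 1 checksum byte in six more.
header = [0x01, 0x02, 0x03, 32, 0x10, 0x07]          # addr, len, type, RID
body = list(range(32)) + [sum(range(32)) & 0xFF]     # data + checksum
payloads = fragment(0x42, CMD_DOWNLOAD, header, body)
assert len(payloads) == 7 and all(len(p) == 8 for p in payloads)
```

With 6 data bytes per payload, the 33 bytes of data plus checksum occupy six sequenced payloads after the header fragment, matching the seven payload portions 204a-204g of FIG. 10.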
The message identification field 322 for each of the payload portions 204a-g will contain a unique message identification assigned to the particular record or message 300 being transmitted. The message identification within the field 322 will be the same for all payload portions 204a-g that are common to the same record or message 300. In the reprogramming example described above, the message identification is used by the flash loader module 80 to track the received data packets so that it can associate different payload portions 204a-g with the same record or message 300.
The command or sequence field 324 contains either a command or a sequence number associated with the payload portion 204a-g. The command will indicate to the receiving node how to use the data carried by the following payload portions 204a-g. The command value should be different from the record identification (RID) / sequence value by design. Each payload portion 204a-g may have a record identification (RID) / sequence value except for the first payload portion 204a, which contains a command. In the reprogramming example described above, the record identification (RID) / sequence values may be used by the flash loader module 80 to group the received data packets so that it can re-assemble the record or message in the right order at the receiving node.
In one embodiment, the first payload portion 204a may include the values for the address field 306 (divided into 1 byte segments), the message length field 304 (1 byte), and the message type field (1 byte) of the original record or message 300. The first payload portion 204a may also include a record identification (RID) (1 byte). The remaining payload portions 204b-g may include the values found in the message data field 306 (divided into 32 byte segments) and the checksum field 310 of the original record or message 300. The value in the checksum field 310 may be used to protect against possible data corruption. After the original build record is reassembled at the receiving node, the build record's checksum is recalculated. If the recalculated checksum does not match the received value, the whole record should be discarded and a negative response sent.
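The patent does not fix a particular checksum algorithm; as an illustration only, a simple one-byte two's-complement sum (common in build-record formats such as Intel HEX) is assumed in the following sketch of the accept-or-discard decision:

```python
def checksum(data: bytes) -> int:
    """One-byte checksum: two's complement of the byte sum (an assumed
    algorithm; the actual build-record format defines the real one)."""
    return (-sum(data)) & 0xFF

def accept_record(data: bytes, received_checksum: int) -> bool:
    """Keep the reassembled record only if the recomputed checksum matches
    the received value; otherwise the whole record is discarded and a
    negative response would be sent back through the switch fabric."""
    return checksum(data) == received_checksum

good = b"\x10\x20\x30"
```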
What has been described is a system and method for streaming sequential data through a vehicle switch fabric network. This is particularly useful in areas such as reprogramming nodes in the automotive switch fabric network, where relatively large records or messages need to be transmitted through the switch fabric, although the invention may be used in other areas. In sum, the system and method described herein take large data records and break them down into smaller units (data packets) that fit within the constraints of the physical layer on which the communication links in the switch fabric network are built. Each of the smaller data packets is assigned a message identification and a sequence number. Data packets associated with the same data record or message share the same message identification but differ in their sequence numbers. Each data packet is transmitted over the vehicle switch fabric network to a destination node. At the destination node, the data packets may be reassembled into the original data format based on the message identifications and sequence numbers. The reassembled message may then be presented to an application in the node for processing. The above description of the present invention is intended to be exemplary only and is not intended to limit the scope of any patent issuing from this application. The present invention is intended to be limited only by the scope and spirit of the following claims.

Claims

What is claimed is:
1. A method for sending a record through a switch fabric of a vehicle communication network, the switch fabric including a plurality of nodes joined by communication links for the transmission of data packets therebetween, the record having at least message data, the method comprising the steps of:
generating a first data packet comprising a first message identification, a first sequence number, and a plurality of first data elements, the plurality of first data elements containing at least a first portion of the message data in the record;
generating a second data packet comprising a second message identification, a second sequence number, and a plurality of second data elements, the plurality of second data elements containing at least a second portion of the message data in the record;
transmitting the first data packet and the second data packet to a target node in the switch fabric of the vehicle communication network;
receiving the first data packet and the second data packet at the target node of the vehicle communication network; and
assembling at least the first portion of the message data and the second portion of the message data based on the first and second message identifications and the first and second sequence numbers.
2. The method in claim 1, wherein the first data packet comprises a header portion, a payload portion, and a trailer portion, the payload portion containing the first message identification, the first sequence number, and the plurality of first data elements.
3. The method in claim 1 further comprising the step of: generating a third data packet comprising a third message identification, a third sequence number, and a plurality of third data elements, the plurality of third data elements containing at least a third portion of the message data in the record and a checksum for the message data; wherein the step of assembling further includes assembling the first, second, and third portions of the message data based on the first, second, and third message identifications and on the first, second, and third sequence numbers.
4. A method for sending a record through a switch fabric of a vehicle communication network, the switch fabric including a plurality of nodes joined by communication links for the transmission of data packets therebetween, the record having at least message data, the method comprising the steps of:
generating a first data packet comprising a message identification associated with the record, a command, and a message length;
generating a second data packet comprising the message identification associated with the record, a first sequence number, and a plurality of first data elements, the plurality of first data elements containing at least a first portion of the message data in the record;
generating a third data packet comprising the message identification associated with the record, a second sequence number, and a plurality of second data elements, the plurality of second data elements containing at least a second portion of the message data in the record;
transmitting the first, second, and third data packets to a target node in the switch fabric of the vehicle communication network;
receiving the first, second, and third data packets at the target node of the vehicle communication network; and
assembling at least the first portion of the message data and the second portion of the message data based on the message identification associated with the record and the first and second sequence numbers.
5. The method in claim 4 further comprising the step of: providing a fourth data packet comprising the message identification associated with the record, a third sequence number, and a plurality of third data elements; wherein the step of assembling further includes assembling the first, second, and third portions of the message data based on the message identification associated with the record and on the first, second, and third sequence numbers.
6. The method in claim 1 or 4, wherein the steps of generating at least the first data packet and the second data packet are performed by one of the following: a gateway node that interconnects the switch fabric to an external diagnostic device; or an external diagnostic device that is connected to the switch fabric.
7. The method in claim 1 or 4, wherein the message data includes software components for reprogramming a portion of memory in the target node.
8. The method in claim 1 or 4 wherein the target node comprises a memory having an erasable memory portion and a protected memory portion, the method further comprising the steps of: erasing data in the erasable memory portion; and storing at least the assembled first portion of the message data and the second portion of message data in the erasable memory portion.
9. A node in a switch fabric of a vehicle communication network, the switch fabric including a plurality of other nodes joined by communication links for the transmission of data packets therebetween, the node comprising:
a transceiver for receiving at least a first data packet, a second data packet, and a third data packet, the first data packet comprising a message identification associated with the record and command information, the second data packet comprising the message identification associated with the record, a first sequence number, and a plurality of first data elements, the plurality of first data elements containing at least a first portion of the message data in the record, the third data packet comprising the message identification associated with the record, a second sequence number, and a plurality of second data elements, the plurality of second data elements containing at least a second portion of the message data in the record; and
a processor for assembling the first portion of the message data and the second portion of the message data based on the message identification associated with the record and the first and second sequence numbers.
10. The node in claim 9 further comprising a memory having an erasable memory portion and a protected memory portion, the processor further capable of erasing data in the erasable memory portion and storing at least the assembled first portion of the message data and the second portion of the message data in the erasable memory portion.
PCT/US2005/034630 2004-10-18 2005-09-29 System and method for streaming sequential data through an automotive switch fabric network WO2006044122A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US61966904P 2004-10-18 2004-10-18
US60/619,669 2004-10-18
US11/015,153 US7613190B2 (en) 2004-10-18 2004-12-17 System and method for streaming sequential data through an automotive switch fabric
US11/015,153 2004-12-17

Publications (2)

Publication Number Publication Date
WO2006044122A2 true WO2006044122A2 (en) 2006-04-27
WO2006044122A3 WO2006044122A3 (en) 2006-07-27

Family

ID=36180681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/034630 WO2006044122A2 (en) 2004-10-18 2005-09-29 System and method for streaming sequential data through an automotive switch fabric network

Country Status (2)

Country Link
US (1) US7613190B2 (en)
WO (1) WO2006044122A2 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5151899A (en) * 1991-02-11 1992-09-29 Digital Equipment Corporation Tracking sequence numbers in packet data communication system
US20030185201A1 (en) * 2002-03-29 2003-10-02 Dorgan John D. System and method for 1 + 1 flow protected transmission of time-sensitive data in packet-based communication networks
US20040131014A1 (en) * 2003-01-03 2004-07-08 Microsoft Corporation Frame protocol and scheduling system



Also Published As

Publication number Publication date
WO2006044122A3 (en) 2006-07-27
US7613190B2 (en) 2009-11-03
US20060083229A1 (en) 2006-04-20


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05802139

Country of ref document: EP

Kind code of ref document: A2