WO2003061175A2 - Self-organizing hierarchical wireless network for surveillance and control - Google Patents

Self-organizing hierarchical wireless network for surveillance and control

Info

Publication number
WO2003061175A2
WO2003061175A2 (application PCT/US2003/000781)
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
wireless network
actuator
nodes
network
Prior art date
Application number
PCT/US2003/000781
Other languages
French (fr)
Other versions
WO2003061175A3 (en)
Inventor
Falk Herrmann
Andreas Hensel
Arati Manjeshwar
Mikael Israelsson
Johannes Karlsson
Jason Hill
Original Assignee
Robert Bosch Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch Gmbh filed Critical Robert Bosch Gmbh
Priority to EP03707354A priority Critical patent/EP1474935A4/en
Priority to AU2003209207A priority patent/AU2003209207A1/en
Priority to JP2003561140A priority patent/JP4230917B2/en
Publication of WO2003061175A2 publication Critical patent/WO2003061175A2/en
Publication of WO2003061175A3 publication Critical patent/WO2003061175A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W40/00Communication routing or communication path finding
    • H04W40/24Connectivity information management, e.g. connectivity discovery or connectivity update
    • H04W40/246Connectivity information discovery
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/003Address allocation methods and details
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/009Signalling of the alarm condition to a substation whose identity is signalled to a central station, e.g. relaying alarm signals in order to extend communication range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W84/00Network topologies
    • H04W84/18Self-organising networks, e.g. ad-hoc networks or sensor networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S40/00Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
    • Y04S40/18Network protocols supporting networked applications, e.g. including control of end-device applications over a network

Definitions

  • the present invention relates to a wireless network of sensors and actuators for surveillance and control, and a method of operation for the network.
  • Wire-based networks may be applied in security systems, building automation, climate control, and control and surveillance of industrial plants.
  • Such networks may include, for example, a large number of sensors and actuators arranged in a ring or tree configuration controlled by a central unit (e.g. a panel or base station) with a user interface.
  • Battery-powered versions of the aforementioned sensors and actuators may be deployed with built-in wireless transmitters and/or receivers, as described in, for example, United States Patent Nos. 5,854,994 and 6,255,942.
  • a group of such devices may report to or be controlled by a dedicated pick-up/control unit mounted within the transmission range of all devices.
  • the pick-up/control unit may or may not be part of a larger wire-based network. Due to the RF propagation characteristics of electromagnetic waves under conditions that may exist inside buildings, e.g. multi-path, high path losses, and interference, problems may arise during and after the installation process associated with the location of the devices and their pick-up/control unit. Hence, careful planning prior to installation as well as trial and error during the installation process may be required.
  • the number of sensors or actuators per pick-up/control unit may be limited. Furthermore, should failure of the pick-up/control unit occur, all subsequent wireless devices may become inoperable.
  • the present invention relates to a wireless network of sensors and actuators for surveillance and control, and a method of operation for the network.
  • the sensors and actuators may include, for example, smoke detectors, motion detectors, temperature sensors, door/window contacts, alarm sounders, or valve actuators.
  • Applications may include, but are not limited to, security systems, home automation, climate control, and control and surveillance of industrial plants.
  • the system may support devices of varying complexity, which may be capable of self-organizing in a hierarchical network.
  • the system may be arranged in a flexible manner with minimal prior planning in an environment that may possess difficult RF propagation characteristics, and may ensure connectivity of the majority of the devices in case of localized failures.
  • the network may include two physical portions or layers.
  • the first physical layer may connect a small number of relatively more complex devices to form a wireless backbone network.
  • the second physical layer may connect a large number of relatively less complex low-power devices with each other and with the backbone nodes. Such an arrangement of two separate physical layers may impose less severe energy constraints upon the network.
  • the central base station may be eliminated so that a single point of failure may be avoided.
  • the system may instead be controlled in a distributed manner by the backbone nodes, and the information about the entire network may be accessed, for example, any time at any backbone node.
  • the network may configure itself without the need for user interaction and for detailed planning (therefore operating in a so-called "ad-hoc network" manner).
  • the system may automatically reconfigure in order to ensure connectivity.
  • an exemplary embodiment and/or an exemplary method according to the present invention may improve the load distribution, ensure scalability and small delays, and eliminate the single point of failure of the aforementioned ad-hoc network, while preserving its ability to self-configure and reconfigure.
  • Figure 1 is a schematic drawing of a conventional ad-hoc wireless sensor network.
  • Figure 2 is a schematic drawing of a hierarchical ad-hoc network without a central control unit.
  • Figure 1 shows a self-configuring (ad-hoc) wireless network of a large number of battery- powered devices with short-range wireless transceivers.
  • a self-configuring wireless network may be capable of determining the optimum routes to all nodes, and may therefore reconfigure itself in case of link or node failures. Relaying of the data may occur in short hops from remote sensors or to remote actuators through intermediate nodes to or from the central control unit (base station, see Figure 1), respectively, while data compression techniques, and low duty cycles of the individual devices may prolong battery life.
  • large systems with hundreds or even thousands of nodes may lead to an increased burden (e.g., battery drain) on those nodes closer to the base station which may serve as multi-hop points for many devices.
  • FIG. 2 is a schematic drawing of a hierarchical ad-hoc network absent a central control unit.
  • the network may include devices of different types with varying complexity and functionality.
  • the network may include a battery-powered sensor/actuator node (also referred to as a Class 1 node), a power-line powered sensor/actuator node (also referred to as a Class 0 node), a battery-powered sensor/actuator node with limited capabilities (also referred to as a Class 2 node), a cluster head, and a panel.
  • the network may also include, for example, a battery-powered node or input device (e.g., key fob) with limited capabilities, which may not be a "fixed" part of the actual network topology, or an RF transmitter device (also referred to as a Class 3 node). Each device type is more fully explained below.
  • Class 1 sensor/actuator nodes may be battery-powered devices that include sensor elements to sense one or more physical quantities, or an actuator to perform required actuations.
  • the battery may be supported or replaced by an autonomous power supply, such as, for example, a solar cell or an electro-ceramic generator.
  • the sensor or actuator may be connected to a low-power microcontroller to process the raw sensor signal or to provide the actuator with the required control information, respectively.
  • the microcontroller may handle a network protocol and may store data in both volatile and non- volatile memory.
  • Class 1 nodes may be fully functional network nodes, for example, capable of serving as a multi-hop point.
  • the microcontroller may be further connected to a low power (such as, for example, short to medium range) RF radio transceiver module.
  • the sensor or actuator may include its own microcontroller interfaced with a dedicated network node controller to handle the network protocol and interface the radio.
  • Each sensor/actuator node may be factory-programmed with a unique device ID.
  • Class 0 nodes may have a similar general architecture as Class 1 nodes, i.e., one or more microcontrollers, a sensor or actuator device, and interface circuitry, but unlike Class 1 nodes, Class 0 nodes may be powered by a power line rather than a battery. Furthermore, Class 0 nodes may also have a backup supply allowing limited operation during power line failure. Class 0 nodes may be capable of performing similar tasks as Class 1 nodes but may also form preferential multi-hop points in the network tree so that the load on battery-powered devices may be reduced. Additionally, they may be useful for actuators performing power- intensive tasks. Moreover, there may be Class 0 nodes without a sensor or actuator device solely forming a dedicated multi-hop point.
  • Class 2 nodes may have a general architecture similar to those of Class 0 and Class 1 nodes. However, Class 2 nodes may include application devices that are constrained in terms of cost and size. Class 2 nodes may be applied, for example, in devices such as door/window contacts or temperature sensors. Such devices may be equipped with a less powerful, and therefore, for example, more economic, microcontroller as well as a smaller battery. Class 2 nodes may be arranged at the periphery (i.e., edge) of the network tree since Class 2 nodes may allow for only limited networking capabilities. For example, Class 2 nodes may not be capable of forming a multi-hop point.
  • Class 3 nodes may have a general architecture similar to those of Class 0 and Class 1 nodes. However, Class 3 nodes may include application devices that may be constrained, for example, by cost and size. Moreover, they may feature an RF transmitter only rather than a transceiver. Class 3 nodes may be applied, for example, in devices such as key fobs, panic buttons, or location service devices. These devices may be equipped with a less powerful, and therefore more economic, microcontroller, as well as a smaller battery.
  • Class 3 nodes may be configured to be a non-permanent part of the network topology (e.g., they may not be supervised and may not be eligible to form a multi-hop point). However, they may be capable of issuing messages that may be received and forwarded by other nodes in the network.
  • Cluster heads are dedicated network control nodes. They may additionally but not necessarily perform sensor or actuator functions. They may differ in several properties from the Class 0, 1, and 2 nodes. First, the cluster heads may have a significantly more powerful microcontroller and more memory at their disposal. Second, the cluster heads may not be as limited in energy resources as the Class 1 and 2 nodes. For example, cluster heads may have a mains (AC) power supply with an optional backup battery for a limited time in case of power blackouts (such as, for example, 72 hrs).
  • Cluster heads may further include a short or medium range radio module ("Radio 1") for communication with the sensor/actuator nodes.
  • cluster heads may in addition include a second RF transceiver ("Radio 2") with a larger bandwidth and/or longer communication range, operating, for example, but not necessarily, in a different frequency band than Radio 1.
  • the cluster heads may have only one radio module capable of communicating with both the sensor/actuator nodes and other cluster heads. This radio may offer an extended communication range compared to those in the sensor/actuator nodes.
  • the microcontroller of the cluster heads may be more capable, for example, in terms of clock speed and memory size, than the microcontrollers of the sensor/actuator nodes.
  • the microcontroller of the cluster heads may run several software programs to coordinate the communication with other cluster heads (as may be required, for example, by the cluster head network), to control the sensor/actuator network, and to interpret and store information retrieved from sensors and other cluster heads (database).
  • a cluster head may include a standard wire-based or wireless transceiver, such as, for example, an Ethernet or Bluetooth transceiver, in order to connect to any stationary, mobile, or handheld computer ("panel") with the same interface. This functionality may also be performed wirelessly by one of the radio modules.
  • Each cluster head may be factory-programmed with a unique device ID.
  • the top-level device may include a stationary, mobile or handheld computer with a software program including for example a user interface ("panel").
  • a panel may be connected through the abovementioned transceiver to one cluster head, either through a local area network (LAN), or through a dedicated wire-based or wireless link.
  • An exemplary system may include a number (up to several hundred, for example) of cluster heads distributed over the site to be monitored and/or controlled (see Fig. 2).
  • the installer may be required to take the precaution that the distance between the cluster heads does not exceed the transmission range of the respective radio transceiver (for example, "Radio 2" may operate at 50 to 500 meters).
  • the cluster heads may be connected to an AC power line.
  • the cluster heads may be set to operate in an inquiry mode. In this mode, the radio transceiver may continuously scan the channels to receive data packets from other cluster heads in order to set up communication links.
  • the required sensor/actuator nodes may be installed (up to several thousand, typically 10 to 500 times as many as cluster heads). Devices with different functionality, such as, for example, different sensor types and actuators, may be mixed. During installation, the installer should follow two general rules.
  • first, the distance between individual sensor/actuator nodes, and between them and at least one cluster head, should not exceed the maximum transmission range of their radios ("Radio 1" may operate within a range of, for example, 10 to 100 meters).
  • second, the depth of the network, i.e., the number of hops from a node at the periphery of the network in order to reach the nearest cluster head, should be limited according to the latency requirements of the current application (such as, for example, 1 to 10 hops).
  • all sensor/actuator nodes may be set to operate in an energy-saving polling mode. In this mode, the controller and all other components of the sensor/actuator node remain in a power save or sleep mode. In defined time intervals, the radio and the network controller may be very briefly woken up by a built-in timer in order to scan the channel for a beacon signal (such as, for example, an RF tone or a special sequence).
  • the unique IDs of the cluster heads (or their radio modules, respectively) as well as the unique IDs of all sensor/actuator nodes may be made known, for example, by reading a barcode on each device, triggering an ID transmission from each device through a wireless or wire-based interface, or manually inputting the device IDs via an installation tool.
  • the unique IDs may be recorded and/or stored, as well as other related information, such as, for example, a predefined security zone or a corresponding device location. This information may be required to derive the network topology during the initialization of the network, and to ensure that only registered devices are allowed to participate in the network.
  • At least one panel may be connected to one of the cluster heads. This may be accomplished, for example, via a Local Area Network (LAN), a direct Ethernet connection, another wire-based link, or a wireless link.
  • a software program for performing an auto-configuration may then be started on the controller of the cluster head in order to set up a network between all cluster heads.
  • the progress of the configuration may be displayed on the user interface, which may include an option for the user to intervene.
  • the user interface may include an option to define preferred connections between individual devices.
  • the cluster head connected to the panel is provided with an ID list of all cluster heads that are part of the actual network (i.e., the allowed IDs).
  • the discovery of all links between all cluster heads with allowed IDs is started from this cluster head. This may be realized by successive exchange of inquiry packets and the allowed ID list between neighboring nodes.
  • the entries of the routing tables of all cluster heads may be routinely updated with all newly inquired nodes or new links to known nodes.
  • the link discovery process is finished when at least one route can be found between any two installed cluster heads, no new links can be discovered, and the routing table of every cluster head is updated with the latest status.
  • an optimum routing topology is determined to establish a reliable communication network connecting all cluster heads.
  • the optimization algorithm may be based on a cost function including, but not limited to, message delay, number of hops, and number of connections from/to individual cluster heads ("load").
  • the associated routine may be performed on the panel or on the cluster head connected to the panel.
  • the particular algorithm may depend, for example, on the restrictions of the physical and the link layer, and on requirements of the actual application.
  • the link discovery and routing may be performed using standardized layered protocols including, for example, Bluetooth, IEEE 802.11, or IEEE 802.15.
  • link discovery and routing may be implemented as an application supported by the services of the lower layers of a standardized communications protocol stack.
  • the lower layers may include, for example, a Media Access Control (MAC) and physical layer.
  • standardized radio transceivers may be used as "Radio 2".
  • the use of standardized protocols may provide special features and/or services including, for example, a master/slave mode operation, which may be used in the routing algorithm.
  • an ad-hoc multi-hop network between the sensor/actuator nodes and the cluster head network is established.
  • all links between the cluster heads and between the sensor/actuator nodes may be secured by encryption and/or a message authentication code (MAC) using, for example, public shared keys.
  • the cluster heads initially broadcast a beacon signal (such as, for example, an RF tone or a fixed sequence, e.g., 1-0-1-0-1-0) in order to wake up the sensor/actuator nodes within their transmission range ("first layer nodes") from the above-mentioned polling mode. Then, the cluster heads broadcast, for a predefined time, messages ("link discovery packets") containing all or a subset of the following: a header with preamble, the node ID, node class and type, and time stamps to allow the recipients to synchronize their internal clocks. After the cluster heads stop transmitting, the first layer nodes begin broadcasting beacon signals and then their own "link discovery packets".
  • the beacons wake up a second layer of sensor/actuator nodes, i.e., nodes within the broadcast range of first layer nodes that could not receive messages directly from any cluster head. This procedure of successive transmission of beacons and link discovery packets may occur until the last layer of sensor/actuator nodes has been reached. (The maximum number of allowed layers may be a user-defined constraint.)
  • All nodes may be synchronized with respect to the cluster head network. Hence, there are time slots in which only nodes within one particular layer are transmitting. All other activated nodes may receive and build a list of sensor/actuator nodes within their transmission range, a list of cluster heads, and the average cost required to reach each cluster head via the associated links.
  • the cost (e.g., the number of hops, latency, etc.) may be calculated based on measures such as signal strength or packet loss rate, power reserves at the node, and networking capabilities of the node (see node classes 0, 1, 2).
  • next, the cluster heads may broadcast "route discovery packets," which may include all or a subset of the following: a header with preamble, the node ID, node class and type, the cost to reach the nearest cluster head (e.g., set to zero), the number of hops to reach this cluster head (e.g., set to zero), and the ID of the nearest cluster head (e.g., set to its own ID).
  • the sensor/actuator nodes broadcast route discovery packets (now with cost > 0, number of hops > 0) in the following manner: there is one time slot for each possible cost > 0, i.e., 1, 2, 3, ..., max. Within each time slot, all nodes having a route for reaching the nearest cluster head with this particular cost broadcast several route discovery packets. All other nodes listen and update their list of cluster heads using a new cost function. This new cost function may contain the cost for the node to receive the message from the particular cluster head, the overall number of hops to reach this cluster head, and the cost of the link between the receiving node and the transmitting node (from the previously built list of neighboring nodes). (A sketch of this per-node route update follows this list.)
  • Nodes having no direct link to any cluster head so far now may start to build their cluster head list. Moreover, new routes with a higher number of hops but a lower cost than the previously known routes may become available. Furthermore, all nodes continuously determine the layer of their neighboring nodes from the route discovery packets of these nodes.
  • once a node has determined its least-cost route and the logical layer n it belongs to, this node starts broadcasting its own route discovery packets, stops updating the cluster head list, and builds a new list of neighboring nodes belonging to the next higher layer n+1 only. This procedure may ensure that every node receives a route to a cluster head with the least possible cost, knows the logical layer n it belongs to, and has a list of direct neighbors in the next higher layer n+1.
  • nodes may start to send "route registration packets" to their cluster heads using intermediate nodes as multi-hop points.
  • Link-level acknowledgement packets may ensure the reliability of these transmissions.
  • nodes may keep track of their neighbors in layer n+1 which use them as multi-hop points: supervision packets from these nodes may have to be confirmed during the following mode of normal operation.
  • the route registration packets may contain all or a subset of the following: A header with preamble, the ID of the transmitting node, the list of direct neighbors in the next higher layer (optionally including the associated link costs), and the IDs of all nodes which have forwarded the packet.
  • the cluster heads respond with acknowledgement packets (optionally including a time stamp for re-synchronization) sent along the inverse path to each individual sensor/actuator node. If there is no link-level acknowledgement, the route registration packets are periodically retransmitted by the sensor/actuator nodes until a valid acknowledgment has been received from the associated cluster head.
  • the cluster heads exchange and update the information about all registered nodes among each other. Hence, the complete topology of the sensor/actuator network may be derived at each cluster head.
  • the initialization is finished when valid acknowledgments have been received at all nodes, all cluster heads contain similar information about the network topology, and the quantity and IDs of the registered sensor/actuator nodes are consistent with the information known at the cluster heads from the installation. In case of inconsistencies, an error message may be generated at the panel and the initialization process is repeated. Once the initialization is finalized, the sensor/actuator nodes remain in a power-saving mode of normal operation.
  • the cluster heads may broadcast beacon signals during a first time slot in order to wake up the sensor/actuator nodes within their transmission range ("first layer nodes") from the polling mode.
  • the cluster heads then broadcast a predefined number of "link discovery packets" containing all or a subset of the following: a header with preamble, an ID, and a time stamp allowing the recipients to synchronize their internal clocks.
  • All activated nodes may receive and build a list of cluster heads, as well as an average cost for the respective links. The cost may be calculated based on the signal strength and the packet loss rate.
  • after the cluster heads stop transmitting, in a second time slot the first layer nodes start broadcasting beacons and link discovery packets, respectively.
  • the beacons wake up the second layer of sensor/actuator nodes, which were not previously woken up by the cluster heads.
  • the link discovery packets sent by sensor/actuator nodes may also contain their layer, node class and type, and the cost required to reach the "nearest" (i.e., with least cost) cluster head derived from previously received link discovery packets of other nodes. All activated nodes may receive and build a list of nodes in their transmission range and cluster heads, as well as the average cost for the respective links.
  • the cost may be calculated, for example, based on number of hops, signal strength, packet loss rate, power reserves at the nodes, and networking capabilities of the nodes (see infra description for node classes 0, 1, 2). This process may continue until the nodes in the highest layer (farthest away from cluster heads) have woken up, have broadcast their messages in the respective time slot, and have built the respective node lists.
  • the decision of a node in layer n regarding its nearest cluster head may be based upon previously broadcasted decisions of nodes in the layer n-1 (closer to the cluster head). Should a new route including one additional hop lead to a lower cost, a node may re-broadcast its link discovery packets within the following time slot, i.e., the node is moved to the next higher layer n+1. This may result in changes of the "nearest cluster heads" for other nodes. However, these affected nodes may only need to re-broadcast in case of a layer change. Once the nodes within the highest layer have determined their route with the least cost to one of the cluster heads, all nodes may start to send route registration packets to their nearest determined cluster head using intermediate nodes as multi-hop points.
  • the nodes keep track of their neighbors in layer n+1 which use them as multi-hop points: Supervision packets from these nodes may be confirmed during the normal operation mode that follows initialization.
  • the route registration packets contain a header with preamble, the ID of the transmitting node, the list of neighbors in the next higher layer (optionally including the associated link costs), and the IDs of all nodes that have forwarded the packet.
  • the cluster heads may respond with acknowledgement packets (optionally including a time stamp for re-synchronization) sent to the individual sensor/actuator nodes along the reverse path traversed by the route registration packet. If there is no link-level acknowledgement, the route registration packets may be periodically retransmitted by the sensor/actuator nodes until a valid acknowledgment has been received from the respective cluster head.
  • the cluster heads may exchange and update the information about all registered nodes among each other. Hence, the complete topology of the sensor/actuator network may be derived at each individual cluster head.
  • the initialization is finished when valid acknowledgments have been received at all nodes, all cluster heads contain similar information about the network topology, and the quantity and IDs of all registered sensor/actuator nodes are consistent with the information known at the cluster heads from the installation. In case of inconsistencies, an error message may be generated at the panel and the initialization process may be repeated. Once the initialization is finalized, the sensor/actuator nodes may remain in a power-saving mode of normal operation.
  • the cluster heads broadcast beacon signals to wake up the sensor/actuator nodes within their transmission range (i.e., "first layer nodes") from the polling mode.
  • the cluster heads broadcast messages ("link discovery packets") for a predefined time containing a header with preamble, an ID, and time stamps allowing the recipients to synchronize their internal clocks.
  • the first layer nodes may start broadcasting beacons and link discovery packets that additionally contain a node class and type.
  • the beacons wake up the next layer of sensor/actuator nodes. This procedure of successive transmission of beacons and link discovery packets may occur until the last layer of sensor/actuator nodes has been reached. Eventually, all nodes are active and synchronized with respect to the cluster head network.
  • the activated nodes may receive and build a list containing the IDs and classes/types of sensor/actuator nodes and cluster heads within their transmission range, as well as the average cost of the associated links.
  • the cost may be calculated based on signal strength, packet loss rate, power reserves at the node, and networking capabilities of the node. There may be a threshold of the average link cost above which neighboring nodes are not kept in the list.
  • These packets contain a header with preamble, the ID of the transmitting node, the list of all direct neighbors including the associated link costs, and a list of the IDs of all nodes that have forwarded the particular packet.
  • the cluster heads may respond by sending acknowledgement packets to the individual sensor/actuator nodes along the reverse path traversed by the registration packets.
  • the information received at individual cluster heads may be constantly shared and updated with all other cluster heads.
  • the global routing topology for the entire sensor/actuator network may be determined at the panel or at the cluster head connected to it.
  • the determined global routing topology may be optimized with respect to latency and equalized load distribution in the network.
  • the different capabilities of the different node classes 0, 1, 2 may also be considered in this algorithm.
  • the features of the centralized approach may include reduced overhead at the sensor/actuator nodes and a more evenly distributed load within the network.
  • each cluster head may be connected to a cluster of nodes of approximately the same size, and each node within the cluster may again serve as a multi-hop point for an approximately equal number of nodes.
  • a "route definition packet" may be sent from the cluster head to each individual node containing all or a subset of the following: a header and preamble, the node's layer n, its neighbors in higher layer n+1, the neighbor in layer n-1 to be used for message forwarding, the cluster head to report to, and a time stamp for re-synchronization.
  • the route definition packet may be periodically retransmitted until the issuing cluster head receives a valid acknowledgment packet from each individual sensor/actuator node.
  • the reliability of the exchange may be increased by link-level acknowledgements at each hop. Once acknowledged, this information may be continuously shared and updated among the cluster heads.
  • the initialization may be completed when valid acknowledgments have been received at the cluster heads from all sensor/actuator nodes, all cluster heads contain the same information regarding the network topology, and the quantity and IDs of all registered nodes is consistent with the information known from the installation. In case of inconsistencies, an error message may be generated at the panel and the initialization process is repeated. Once the initialization is finalized, the sensor/actuator nodes remain in a power-saving mode of normal operation.
  • cluster heads may maintain consistent information of the entire network. In particular, the cluster heads may maintain consistent information regarding all other cluster heads as well as the sensor/actuator nodes associated with them. Therefore, the databases in each cluster head may be continuously updated by exchanging data packets with the neighboring cluster heads. Moreover, redundant information, such as, for example, information about the same sensor/actuator node at more than one cluster head, may be used in order to confirm messages.
  • the user may simultaneously monitor the entire network at multiple panels connected to several cluster heads, and may use different types of panels simultaneously.
  • some of the cluster heads may be connected to an already existing local area network (LAN), thus allowing for access from any PC with the panel software installed.
  • remote control may be performed over, for example, a secured Internet connection.
  • an Internet server may still represent a potential single point of failure.
  • at least one dedicated panel computer may be directly linked to one of the cluster heads.
  • This device may also provide a gateway to an outside network or to an operator.
  • a person carrying a mobile or handheld computer may link with any of the cluster heads in his or her vicinity via a wireless connection.
  • the network may be controlled from virtually any location within the communication range of the cluster heads.
  • the sensor/actuator nodes may operate in an energy-efficient mode with very low duty cycle.
  • the transceiver and the microcontroller may be predominantly in a power save or sleep mode. In certain intervals (such as, for example, every ten milliseconds up to every few minutes) the sensor/actuator nodes may wake up for very brief cycles (such as, for example, within the tens of microseconds to milliseconds range) in order to detect RF beacon signals, and to perform self-tests and other tasks depending on individual device functionality. If an RF beacon is detected, the controller checks the preamble and header of the following message. If it is a valid message for this particular node, the entire message is received.
  • timeouts at each of these steps allow the node to go back to low power mode in order to preserve power. If a valid message is received, the required action is taken, such as, for example, a task is performed or message is forwarded. If the sensor or the self-test circuitry detects an alarm state, an alarm message may be generated and broadcasted, which may be relayed towards the cluster heads by intermediate nodes. Depending on the actual application, the alarm-generating node may remain awake until a confirmation from a neighboring node or from one of the cluster heads or a control message from a cluster head has been received.
  • alarm messages may be forwarded to one or more of the cluster heads by a multi-hop operation through intermediate nodes. This may ensure redundancy and quick transfer of urgent alarm messages.
  • the packets may be unicasted from node to node in order to keep the network traffic low.
  • all nodes may synchronously wake up (such as, for example, within time intervals of several minutes to several hours) for exchange of supervision messages. In order to keep the network traffic low and to preserve energy at the nodes, data aggregation schemes may be deployed.
  • nodes closer to a cluster head may wait for the status messages from the nodes farther away from the cluster head (i.e., the higher-layer nodes) in order to consolidate information into a single message. (A sketch of this aggregation step follows this list.)
  • the status messages may explicitly be forwarded. Since the entire network topology is maintained at each of the cluster heads, this information may be sufficient to implicitly derive the status of every node without explicit OK-messages. By doing so, a minimum number of messages with minimum packet size may be generated. In an optimum case, only one brief OK message per node may be generated.
  • the status messages may be acknowledged by the receiving lower-layer nodes.
  • the acknowledgment packets may also contain a time stamp, thus allowing for successive re-synchronization of the internal clocks of every sensor/actuator node with the cluster head network.
  • the nodes may implicitly include the status information (e.g., not-OK information only) of all lower-layer nodes within their hearing range in their own messages, i.e., also of nodes which receive acknowledgments from other nodes and even report to other cluster heads. This may lead to a high redundancy of the status information received by the cluster heads. Since the entire network topology may be maintained at the cluster heads, this information may be utilized to distinguish between link and node failures, and to reduce the number of false alarms. Confirmation messages from the cluster heads or from neighboring nodes may also be used for resynchronization of the clocks of the sensor/actuator nodes.
  • the network may reconfigure without user intervention so that links to all operable nodes of the remaining network may be re-established.
  • this task may be performed by using information about alternative ("second best") routes to one of the cluster heads derived and stored locally at the individual sensor/actuator nodes. Additionally, lost nodes may use "SOS" messages to retrieve a valid route from one of their neighbors.
  • the cluster heads may provide disconnected sensor/actuator nodes with a new route chosen from the list of all possible routes maintained at the cluster heads.
  • a combination of both approaches may be implemented into one system in order to increase the speed of reconfiguration and to decrease the necessary packet overhead in the case of small local glitches.
  • some or all cluster heads may start a partial or complete reconfiguration of the sensor/actuator network by sending new route update packets.
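As noted in the route discovery description above, each listening node may update its view of the cheapest route to a cluster head using the advertised cost plus an estimate of the local link cost, and it may keep a "second best" route for later reconfiguration. The sketch below illustrates this idea; all names, fields, and the layer bookkeeping are assumptions for illustration, not a data model prescribed by the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Route:
    next_hop: int       # neighbour used as the multi-hop point toward the cluster head
    cluster_head: int
    cost: float
    hops: int

@dataclass
class NodeState:
    best_route: Optional[Route] = None
    backup_route: Optional[Route] = None   # "second best" route kept for reconfiguration
    layer: int = 0

def on_route_discovery_packet(state, sender_id, cluster_head_id,
                              advertised_cost, advertised_hops, link_cost):
    """Update a node's cheapest route when a route discovery packet is received.

    advertised_cost/advertised_hops are the sender's cost and hop count to its
    nearest cluster head; link_cost estimates the local link (e.g. from signal
    strength and packet loss rate). Names are illustrative only.
    """
    candidate = Route(next_hop=sender_id,
                      cluster_head=cluster_head_id,
                      cost=advertised_cost + link_cost,
                      hops=advertised_hops + 1)
    if state.best_route is None or candidate.cost < state.best_route.cost:
        state.backup_route = state.best_route   # remember the previous best as a fallback
        state.best_route = candidate
        state.layer = candidate.hops             # one simple proxy for the logical layer n
    return state.best_route
```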
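Similarly, the supervision-and-aggregation step described above can be pictured as a small consolidation function run at each node before it forwards status information toward its cluster head. This is a hedged sketch under assumed names; only the idea of forwarding not-OK information and keeping messages minimal is taken from the text.

```python
def consolidate_status(own_id, own_ok, reports_from_higher_layer):
    """Consolidate supervision information before forwarding toward the cluster head.

    reports_from_higher_layer maps a neighbouring node ID to a list of
    (node_id, ok) tuples that neighbour reported; only not-OK nodes are
    forwarded explicitly, so in the best case a single brief OK message per
    node suffices. All names are illustrative.
    """
    not_ok = set()
    if not own_ok:
        not_ok.add(own_id)
    for report in reports_from_higher_layer.values():
        for node_id, ok in report:
            if not ok:
                not_ok.add(node_id)
    # A single consolidated message: this node is alive, plus the (usually empty) not-OK list.
    return {"sender": own_id, "not_ok": sorted(not_ok)}
```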

Abstract

A wireless network is described including a cluster head network (Fig. 1) and a sensor/actuator network arranged in a hierarchical manner with the cluster head network. The cluster head network includes at least one cluster head and the sensor/actuator network includes a plurality of sensor/actuator nodes arranged in a plurality of node levels.

Description

SELF-ORGANIZING HIERARCHICAL WIRELESS NETWORK FOR SURVEILLANCE AND CONTROL
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of provisional application Serial Number 60/347,569 filed on January 10, 2002 and is related to application entitled "Protocol for Reliable, Self- Organizing, Low-Power Wireless Network for Security and Building Automation" filed on November 21, 2002, both of which are incorporated herein by reference.
FIELD OF THE INVENTION
The present invention relates to a wireless network of sensors and actuators for surveillance and control, and a method of operation for the network.
BACKGROUND
Wire-based networks may be applied in security systems, building automation, climate control, and control and surveillance of industrial plants. Such networks may include, for example, a large number of sensors and actuators arranged in a ring or tree configuration controlled by a central unit (e.g. a panel or base station) with a user interface.
A substantial amount of the cost of such systems may arise due to the planning and installation of the network wires. Moreover, labor-intensive manual work may be required in case of reconfigurations such as the installation of additional devices or changes in the location of existing devices.
Battery-powered versions of the aforementioned sensors and actuators may be deployed with built-in wireless transmitters and/or receivers, as described in, for example, United States Patent Nos. 5,854,994 and 6,255,942. A group of such devices may report to or be controlled by a dedicated pick-up/control unit mounted within the transmission range of all devices. The pick-up/control unit may or may not be part of a larger wire-based network. Due to the RF propagation characteristics of electromagnetic waves under conditions that may exist inside buildings, e.g. multi-path, high path losses, and interference, problems may arise during and after the installation process associated with the location of the devices and their pick-up/control unit. Hence, careful planning prior to installation as well as trial and error during the installation process may be required. Moreover, due to the limited range of low-power transceivers applicable for battery-powered devices, the number of sensors or actuators per pick-up/control unit may be limited. Furthermore, should failure of the pick-up/control unit occur, all subsequent wireless devices may become inoperable.
SUMMARY OF THE INVENTION
The present invention relates to a wireless network of sensors and actuators for surveillance and control, and a method of operation for the network. The sensors and actuators may include, for example, smoke detectors, motion detectors, temperature sensors, door/window contacts, alarm sounders, or valve actuators. Applications may include, but are not limited to, security systems, home automation, climate control, and control and surveillance of industrial plants. The system may support devices of varying complexity, which may be capable of self-organizing in a hierarchical network. Furthermore, the system may be arranged in a flexible manner with minimal prior planning in an environment that may possess difficult RF propagation characteristics, and may ensure connectivity of the majority of the devices in case of localized failures.
According to an exemplary embodiment of the present invention, the network may include two physical portions or layers. The first physical layer may connect a small number of relatively more complex devices to form a wireless backbone network. The second physical layer may connect a large number of relatively less complex low-power devices with each other and with the backbone nodes. Such an arrangement of two separate physical layers may impose less severe energy constraints upon the network.
To allow for reliable operation and scalability, the central base station may be eliminated so that a single point of failure may be avoided. The system may instead be controlled in a distributed manner by the backbone nodes, and the information about the entire network may be accessed, for example, any time at any backbone node. Upon installation, the network may configure itself without the need for user interaction and for detailed planning (therefore operating in a so-called "ad-hoc network" manner). In case of link or node failures during the operation, the system may automatically reconfigure in order to ensure connectivity.
Thus, an exemplary embodiment and/or an exemplary method according to the present invention may improve the load distribution, ensure scalability and small delays, and eliminate the single point of failure of the aforementioned ad-hoc network, while preserving its ability to self-configure and reconfigure.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic drawing of a conventional ad-hoc wireless sensor network. Figure 2 is a schematic drawing of a hierarchical ad-hoc network without a central control unit.
DETAILED DESCRIPTION
Figure 1 shows a self-configuring (ad-hoc) wireless network of a large number of battery- powered devices with short-range wireless transceivers. As discussed in United States Patent Nos. 6,078,269, 5,553,076, and 6,208,247, a self-configuring wireless network may be capable of determining the optimum routes to all nodes, and may therefore reconfigure itself in case of link or node failures. Relaying of the data may occur in short hops from remote sensors or to remote actuators through intermediate nodes to or from the central control unit (base station, see Figure 1), respectively, while data compression techniques, and low duty cycles of the individual devices may prolong battery life. However, large systems with hundreds or even thousands of nodes may lead to an increased burden (e.g. battery drain) on those nodes closer to the base station which may serve as multi-hop points for many devices. Hence, the useful lifetime of the entire system may be reduced. Moreover, large networks may result in many hops of messages from nodes at the periphery of the network, which may lead to an increased average energy consumption per message as well as the potential for significant delays of the messages passing through the network. However, such delays may not be acceptable for time-critical applications, including, for example, security and control systems. Furthermore, if a central base station forms a single point of failure, the entire network may fail in case of the base station failure.
Device types
Figure 2 is a schematic drawing of a hierarchical ad-hoc network absent a central control unit. The network may include devices of different types with varying complexity and functionality. In particular, the network may include a battery-powered sensor/actuator node (also referred to a Class 1 node), a power-line powered sensor/actuator node (also referred to as a Class 0 node), a battery-powered sensor/actuator node with limited capabilities (also referred to as a Class 2 node), a cluster head, and a panel. The network may also include, for example, a battery-powered node or input device (e.g., key fob) with limited capabilities, which may not be a "fixed" part of the actual network topology, or a
RF transmitter device (also referred to as a Class 3 node). Each device type is more fully explained below.
Battery-powered sensor/actuator nodes (Class 1 nodes)
Class 1 sensor/actuator nodes may be battery-powered devices that include sensor elements to sense one or more physical quantities, or an actuator to perform required actuations. The battery may be supported or replaced by an autonomous power supply, such as, for example, a solar cell or an electro-ceramic generator.
Through a data interface, the sensor or actuator may be connected to a low-power microcontroller to process the raw sensor signal or to provide the actuator with the required control information, respectively. Moreover, the microcontroller may handle a network protocol and may store data in both volatile and non- volatile memory. Class 1 nodes may be fully functional network nodes, for example, capable of serving as a multi-hop point.
The microcontroller may be further connected to a low power (such as, for example, short to medium range) RF radio transceiver module. Alternatively, the sensor or actuator may include its own microcontroller interfaced with a dedicated network node controller to handle the network protocol and interface the radio. Each sensor/actuator node may be factory-programmed with a unique device ID.
Power-line powered sensor/actuator nodes (Class 0 nodes)
Class 0 nodes may have a similar general architecture as Class 1 nodes, i.e., one or more microcontrollers, a sensor or actuator device, and interface circuitry, but unlike Class 1 nodes, Class 0 nodes may be powered by a power line rather than a battery. Furthermore, Class 0 nodes may also have a backup supply allowing limited operation during power line failure. Class 0 nodes may be capable of performing similar tasks as Class 1 nodes but may also form preferential multi-hop points in the network tree so that the load on battery-powered devices may be reduced. Additionally, they may be useful for actuators performing power- intensive tasks. Moreover, there may be Class 0 nodes without a sensor or actuator device solely forming a dedicated multi-hop point.
Battery-powered sensor/actuator nodes with limited networking capabilities (Class 2 nodes)
Class 2 nodes may have a general architecture similar to those of Class 0 and Class 1 nodes. However, Class 2 nodes may include application devices that are constrained in terms of cost and size. Class 2 nodes may be applied, for example, in devices such as door/window contacts or temperature sensors. Such devices may be equipped with a less powerful, and therefore, for example, more economic, microcontroller as well as a smaller battery. Class 2 nodes may be arranged at the periphery (i.e., edge) of the network tree since Class 2 nodes may allow for only limited networking capabilities. For example, Class 2 nodes may not be capable of forming a multi-hop point.
Battery-powered nodes not permanently a member of the network topology (Class 3 nodes)
Class 3 nodes may have a general architecture similar to those of Class 0 and Class 1 nodes. However, Class 3 nodes may include application devices that may be constrained, for example, by cost and size. Moreover, they may feature an RF transmitter only rather than a transceiver. Class 3 nodes may be applied, for example, in devices such as key fobs, panic buttons, or location service devices. These devices may be equipped with a less powerful, and therefore more economic, microcontroller, as well as a smaller battery. Class 3 nodes may be configured to be a non-permanent part of the network topology (e.g., they may not be supervised and may not be eligible to form a multi-hop point). However, they may be capable of issuing messages that may be received and forwarded by other nodes in the network.
Cluster heads
Cluster heads are dedicated network control nodes. They may additionally but not necessarily perform sensor or actuator functions. They may differ in several properties from the Class 0, 1, and 2 nodes. First, the cluster heads may have a significantly more powerful microcontroller and more memory at their disposal. Second, the cluster heads may not be as limited in energy resources as the Class 1 and 2 nodes. For example, cluster heads may have a mains (AC) power supply with an optional backup battery for a limited time in case of power blackouts (such as, for example, 72 hrs).
Cluster heads may further include a short or medium range radio module ("Radio 1") for communication with the sensor/actuator nodes. For communication with other cluster heads, cluster heads may in addition include a second RF transceiver ("Radio 2") with a larger bandwidth and/or longer communication range, operating, for example, but not necessarily, in a different frequency band than Radio 1.
In an alternative exemplary setup, the cluster heads may have only one radio module capable of communicating with both the sensor/actuator nodes and other cluster heads. This radio may offer an extended communication range compared to those in the sensor/actuator nodes. The microcontroller of the cluster heads may be more capable, for example, in terms of clock speed and memory size, than the microcontrollers of the sensor/actuator nodes. For example, the microcontroller of the cluster heads may run several software programs to coordinate the communication with other cluster heads (as may be required, for example, by the cluster head network), to control the sensor/actuator network, and to interpret and store information retrieved from sensors and other cluster heads (database). Additionally, a cluster head may include a standard wire-based or wireless transceiver, such as, for example, an Ethernet or Bluetooth transceiver, in order to connect to any stationary, mobile, or handheld computer ("panel") with the same interface. This functionality may also be performed wirelessly by one of the radio modules. Each cluster head may be factory-programmed with a unique device ID.
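As a compact way to summarize the device classes described above, the sketch below models each class by the capabilities that matter to the protocol: battery power, receive capability, multi-hop eligibility, and preferential relay status. The type and constant names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceClass:
    """Capabilities of a device class; names and fields are illustrative."""
    name: str
    battery_powered: bool    # energy-constrained nodes should carry less relay load
    can_receive: bool        # Class 3 devices carry an RF transmitter only
    multi_hop_capable: bool  # eligible to forward packets for other nodes
    preferred_relay: bool    # power-line powered nodes form preferential multi-hop points

CLASS_0 = DeviceClass("Class 0: power-line sensor/actuator", False, True, True, True)
CLASS_1 = DeviceClass("Class 1: battery sensor/actuator", True, True, True, False)
CLASS_2 = DeviceClass("Class 2: battery node, limited networking", True, True, False, False)
CLASS_3 = DeviceClass("Class 3: transmit-only, non-permanent", True, False, False, False)
CLUSTER_HEAD = DeviceClass("Cluster head: dedicated control node", False, True, True, True)
```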
Panel
The top-level device may include a stationary, mobile or handheld computer with a software program including for example a user interface ("panel"). A panel may be connected through the abovementioned transceiver to one cluster head, either through a local area network (LAN), or through a dedicated wire-based or wireless link.
Installation
An exemplary system may include a number (up to several hundred, for example) of cluster heads distributed over the site to be monitored and/or controlled (see Fig. 2). The installer may be required to take the precaution that the distance between the cluster heads does not exceed the transmission range of the respective radio transceiver (for example, "Radio 2" may operate at 50 to 500 meters). The cluster heads may be connected to an AC power line. Upon installation, the cluster heads may be set to operate in an inquiry mode. In this mode, the radio transceiver may continuously scan the channels to receive data packets from other cluster heads in order to set up communication links. Between and around the cluster heads, the required sensor/actuator nodes may be installed (up to several thousand, typically 10 to 500 times as many as cluster heads). Devices with different functionality, such as, for example, different sensor types and actuators, may be mixed. During installation, the installer should follow two general rules. First, the distance between individual sensor/actuator nodes and between them and at least one cluster head should not exceed the maximum transmission range of their radios ("Radio 1" may operate within a range of, for example, 10 to 100 meters). Second, the depth of the network, i.e., the number of hops from a node at the periphery of the network in order to reach the nearest cluster head, should be limited according to the latency requirements of the current application (such as, for example, 1 to 10 hops). Upon installation, all sensor/actuator nodes may be set to operate in an energy-saving polling mode. In this mode, the controller and all other components of the sensor/actuator node remain in a power save or sleep mode. In defined time intervals, the radio and the network controller may be very briefly woken up by a built-in timer in order to scan the channel for a beacon signal (such as, for example, an RF tone or a special sequence).
During the installation, the unique IDs of the cluster heads (or their radio modules, respectively) as well as the unique IDs of all sensor/actuator nodes may be made known, for example, by reading a barcode on each device, triggering an ID transmission from each device through a wireless or wire-based interface, or manually inputting the device IDs via an installation tool. Once known, the unique IDs may be recorded and/or stored, as well as other related information, such as, for example, a predefined security zone or a corresponding device location. This information may be required to derive the network topology during the initialization of the network, and to ensure that only registered devices are allowed to participate in the network.
Initialization
After the installation of all nodes, at least one panel may be connected to one of the cluster heads. This may be accomplished, for example, via a Local Area Network (LAN), a direct Ethernet connection, another wire-based link, or a wireless link. Through the user interface of the panel software, a software program for performing an auto-configuration may then be started on the controller of the cluster head in order to set up a network between all cluster heads. The progress of the configuration may be displayed on the user interface, which may include an option for the user to intervene. For example, the user interface may include an option to define preferred connections between individual devices.
In a first step, the cluster head connected to the panel is provided with an ID list of all cluster heads that are part of the actual network (i.e., the allowed IDs). Then, the discovery of all links between all cluster heads with allowed IDs (e.g., using the "Radio 2" transceivers) is started from this cluster head. This may be realized by successive exchange of inquiry packets and the allowed ID list between neighboring nodes. As links to other cluster heads are established, the entries of the routing tables of all cluster heads may be routinely updated with all newly inquired nodes or new links to known nodes. The link discovery process is finished when at least one route can be found between any two installed cluster heads, no new links can be discovered, and the routing table of every cluster head is updated with the latest status.
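A minimal sketch of this link discovery, assuming a simulation in which the set of cluster heads physically reachable from each cluster head is already known, might look as follows (Python; the data structures are illustrative, not taken from the description):

```python
from collections import deque

def discover_links(start, reachable, allowed_ids):
    """reachable: dict head_id -> set of head_ids within radio range."""
    routing_tables = {h: set() for h in allowed_ids}   # per-head known links
    visited, queue = {start}, deque([start])
    while queue:
        head = queue.popleft()
        for neighbor in reachable[head]:
            if neighbor not in allowed_ids:
                continue                               # unregistered device: ignore
            routing_tables[head].add((head, neighbor)) # record discovered link
            routing_tables[neighbor].add((head, neighbor))
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    # In the real network the tables would also be flooded so that every head
    # ends up with the same global view; here the tables are simply merged.
    global_links = set().union(*routing_tables.values())
    return {h: set(global_links) for h in allowed_ids}

if __name__ == "__main__":
    reachable = {"CH1": {"CH2"}, "CH2": {"CH1", "CH3"}, "CH3": {"CH2"}}
    tables = discover_links("CH1", reachable, {"CH1", "CH2", "CH3"})
    print(tables["CH3"])
```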
Next, an optimum routing topology is determined to establish a reliable communication network connecting all cluster heads. The optimization algorithm may be based on a cost function including, but not limited to, message delay, number of hops, and number of connections from/to individual cluster heads ("load"). The associated routine may be performed on the panel or on the cluster head connected to the panel. The particular algorithm may depend, for example, on the restrictions of the physical and the link layer, and on requirements of the actual application.
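For example, once all links are known, routes could be selected with a standard shortest-path search over a composite cost; the sketch below uses Dijkstra's algorithm with an assumed cost of link delay plus a per-hop penalty plus a load term. The weighting constants are illustrative assumptions, not values from the description.

```python
import heapq

HOP_PENALTY = 1.0   # assumed per-hop penalty
LOAD_WEIGHT = 0.5   # assumed weight of a cluster head's current load

def best_routes(links, loads, source):
    """links: dict node -> list of (neighbor, delay); loads: dict node -> int."""
    cost = {source: 0.0}
    previous = {}
    heap = [(0.0, source)]
    while heap:
        c, node = heapq.heappop(heap)
        if c > cost.get(node, float("inf")):
            continue                      # stale heap entry
        for neighbor, delay in links.get(node, []):
            new_cost = c + delay + HOP_PENALTY + LOAD_WEIGHT * loads.get(neighbor, 0)
            if new_cost < cost.get(neighbor, float("inf")):
                cost[neighbor] = new_cost
                previous[neighbor] = node  # remember the best predecessor
                heapq.heappush(heap, (new_cost, neighbor))
    return cost, previous
```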
The link discovery and routing may be performed using standardized layered protocols including, for example, Bluetooth, IEEE 802.11, or IEEE 802.15. In particular, link discovery and routing may be implemented as an application supported by the services of the lower layers of a standardized communications protocol stack. The lower layers may include, for example, a Media Access Control (MAC) and physical layer.
By using standardized protocols, standardized radio transceivers may be used as "Radio 2". Furthermore, the use of standardized protocols may provide special features and/or services including, for example, a master/slave mode operation, that may be used in the routing algorithm. Once the cluster head network is established, all active links may be continuously monitored to ensure connectivity by supervising the messages exchanged between neighboring nodes. Additionally, the cluster heads may synchronize their internal clocks so that they may perform tasks in a time-synchronized manner.
In the next step, an ad-hoc multi-hop network between the sensor/actuator nodes and the cluster head network is established. There may be several alternative approaches to establish the multi-hop and cluster head networks, including, for example, both decentralized and centralized approaches. To prevent unauthorized intrusion into the wireless network, all links between the cluster heads and between the sensor/actuator nodes may be secured by encryption and/or a message authentication code (MAC) using, for example, public shared keys.
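As one possible realization of such message authentication with a shared key, the following sketch appends an HMAC tag to each packet and verifies it at the receiver. An actual node would more likely use a lightweight cipher; HMAC-SHA256 is used here only because it is readily available in the standard library, so this is a stand-in, not the described mechanism.

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(16)   # in practice provisioned during installation

def seal(payload: bytes) -> bytes:
    """Append an authentication tag to the payload."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(packet: bytes):
    """Return the payload if the tag checks out, otherwise None."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

if __name__ == "__main__":
    pkt = seal(b"alarm:node42")
    assert verify(pkt) == b"alarm:node42"
    assert verify(pkt[:-1] + b"\x00") is None   # tampered tag is rejected
```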
a. Decentralized approach 1
According to a first exemplary decentralized method, the cluster heads initially broadcast a beacon signal (such as, for example, an RF tone or a fixed sequence, e.g., 1-0-1-0-1-...) in order to wake up the sensor/actuator nodes within their transmission range ("first layer nodes") from the above-mentioned polling mode. Then, the cluster heads broadcast, for a predefined time, messages ("link discovery packets") containing all or a subset of the following: A header with preamble, the node ID, node class and type, and time stamps to allow the recipients to synchronize their internal clocks. After the cluster heads stop transmitting, the first layer nodes begin by broadcasting beacon signals and then broadcast "link discovery packets". The beacons wake up a second layer of sensor/actuator nodes, i.e., nodes within the broadcast range of first layer nodes that could not receive messages directly from any cluster head. This procedure of successive transmission of beacons and link discovery packets may occur until the last layer of sensor/actuator nodes has been reached. (The maximum number of allowed layers may be a user-defined constraint.)
Eventually, all nodes may be synchronized with respect to the cluster head network. Hence, there are time slots in which only nodes within one particular layer are transmitting. All other activated nodes may receive and build a list of sensor/actuator nodes within their transmission range, a list of cluster heads, and the average cost required to reach each cluster head via the associated links. The cost (e.g., the number of hops, latency, etc.) may be calculated based on measures such as signal strength or packet loss rate, power reserves at the node, and networking capabilities of the node (see node classes 0, 1, 2). There may be a threshold of the average link cost above which neighboring nodes are not kept in the list.
Once the last layer of sensor/actuator nodes has been reached, i.e., all nodes have built their respective neighbor list, the cluster heads broadcast for a defined time another message type ("route discovery packets") which may include all or a subset of the following: A header with preamble, the node ID, node class and type, the cost to reach the nearest cluster head (e.g., set to zero), the number of hops to reach this cluster head (e.g., set to zero), and the ID of the nearest cluster head (e.g., set to its own ID). Once the cluster heads stop transmitting, the sensor/actuator nodes broadcast route discovery packets (now with cost > 0, number of hops > 0) in the following manner: There is one time slot for each possible cost > 0, i.e., 1, 2, 3, ..., max. Within each time slot, all nodes having a route for reaching the nearest cluster head with this particular cost broadcast several route discovery packets. All other nodes listen and update their list of cluster heads using a new cost function. This new cost function may contain the cost for the node to receive the message from the particular cluster head, the overall number of hops to reach this cluster head, and the cost of the link between the receiving node and the transmitting node (from the previously built list of neighboring nodes). Nodes having no direct link to any cluster head so far may now start to build their cluster head list. Moreover, new routes with a higher number of hops but a lower cost than the previously known routes may become available. Furthermore, all nodes continuously determine the layer of their neighboring nodes from the route discovery packets of these nodes.
Once the time slot associated with the lowest cost of an individual node to reach a cluster head has been approached, this node starts broadcasting its route discovery packets, stops updating the cluster head list, and builds a new list of neighboring nodes belonging to the next higher layer n+1 only. This procedure may ensure that every node receives a route to a cluster head with the least possible cost, knows the logical layer n it belongs to, and has a list of direct neighbors in the next higher layer n+1.
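A hedged sketch of the per-node bookkeeping implied by this route discovery phase is given below: on each received route discovery packet, the node adds the locally measured link cost to the advertised cost and keeps the cheapest route per cluster head. The packet fields and class names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class RouteDiscovery:
    sender_id: str
    cluster_head_id: str
    cost_to_head: float     # sender's cost to reach its nearest cluster head
    hops_to_head: int

class NodeRoutingState:
    def __init__(self, link_costs):
        self.link_costs = link_costs   # neighbor_id -> locally measured link cost
        self.best = {}                 # cluster_head_id -> (cost, hops, next_hop)

    def on_route_discovery(self, pkt: RouteDiscovery):
        link = self.link_costs.get(pkt.sender_id)
        if link is None:               # sender is not in the neighbor list
            return
        total = pkt.cost_to_head + link
        current = self.best.get(pkt.cluster_head_id)
        if current is None or total < current[0]:
            self.best[pkt.cluster_head_id] = (total, pkt.hops_to_head + 1, pkt.sender_id)

    def cheapest(self):
        """Return (cluster_head_id, (cost, hops, next_hop)) with the least cost."""
        return min(self.best.items(), key=lambda kv: kv[1][0]) if self.best else None
```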
Once the last time slot for route discovery packets has been reached, all nodes may start to send "route registration packets" to their cluster heads using intermediate nodes as multi-hop points. Link-level acknowledgement packets may ensure the reliability of these transmissions. In this phase, nodes may keep track of their neighbors in layer n+1 which use them as multi-hop points: supervision packets from these nodes may have to be confirmed during the following mode of normal operation. The route registration packets may contain all or a subset of the following: A header with preamble, the ID of the transmitting node, the list of direct neighbors in the next higher layer (optionally including the associated link costs), and the IDs of all nodes which have forwarded the packet.
The cluster heads respond with acknowledgement packets (optionally including a time stamp for re-synchronization) sent along the inverse path to each individual sensor/actuator node. If there is no link-level acknowledgement, the route registration packets are periodically retransmitted by the sensor/actuator nodes until a valid acknowledgment has been received from the associated cluster head. The cluster heads exchange and update the information about all registered nodes among each other. Hence, the complete topology of the sensor/actuator network may be derived at each cluster head.
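The registration and acknowledgment exchange could be sketched as a simple retransmit-until-acknowledged loop, as below; send() and receive_ack() are placeholders for the node's forwarding and receive primitives, and the retry interval and count are illustrative assumptions.

```python
def register_route(send, receive_ack, node_id, neighbors_upper,
                   retry_s=2.0, max_tries=10):
    """Send a route registration packet until the cluster head acknowledges it."""
    packet = {
        "type": "route_registration",
        "node": node_id,
        "upper_layer_neighbors": neighbors_upper,   # layer n+1 nodes using this node
        "forwarded_by": [],                         # filled in by intermediate nodes
    }
    for _ in range(max_tries):
        send(packet)                                # forwarded hop by hop toward the head
        ack = receive_ack(timeout_s=retry_s)
        if ack is not None and ack.get("node") == node_id:
            return ack                              # may carry a re-sync time stamp
    raise TimeoutError("no acknowledgment from cluster head")
```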
The initialization is finished when valid acknowledgments have been received at all nodes, all cluster heads contain similar information about the network topology, and the quantity and IDs of the registered sensor/actuator nodes are consistent with the information known at the cluster heads from the installation. In case of inconsistencies, an error message may be generated at the panel and the initialization process is repeated. Once the initialization is finalized, the sensor/actuator nodes remain in a power-saving mode of normal operation (see below).
b. Decentralized approach 2
According to a second exemplary decentralized method to establish the multi-hop and cluster head networks, the cluster heads may broadcast beacon signals during a first time slot in order to wake up the sensor/actuator nodes within their transmission range ("first layer nodes") from the polling mode. The cluster heads then broadcast a predefined number of "link discovery packets" containing all or a subset of the following: a header with preamble, an ID, and a time stamp allowing the recipients to synchronize their internal clocks. All activated nodes may receive and build a list of cluster heads, as well as an average cost for the respective links. The cost may be calculated based on the signal strength and the packet loss rate.
After the cluster heads stop transmitting, in a second time slot the first layer nodes start broadcasting beacons and link discovery packets, respectively. The beacons wake up the second layer of sensor/actuator nodes, which were not previously woken up by the cluster heads. In addition to an ID and time stamp, the link discovery packets sent by sensor/actuator nodes may also contain their layer, node class and type, and the cost required to reach the "nearest" (i.e., with least cost) cluster head derived from previously received link discovery packets of other nodes. All activated nodes may receive and build a list of nodes in their transmission range and cluster heads, as well as the average cost for the respective links. The cost may be calculated, for example, based on the number of hops, signal strength, packet loss rate, power reserves at the nodes, and networking capabilities of the nodes (see infra the description of node classes 0, 1, 2). This process may continue until the nodes in the highest layer (farthest away from cluster heads) have woken up, have broadcast their messages in the respective time slot, and have built the respective node lists.
The decision of a node in layer n regarding its nearest cluster head may be based upon previously broadcast decisions of nodes in the layer n-1 (closer to the cluster head). Should a new route including one additional hop lead to a lower cost, a node may re-broadcast its link discovery packets within the following time slot, i.e., the node is moved to the next higher layer n+1. This may result in changes of the "nearest cluster heads" for other nodes. However, these affected nodes may only need to re-broadcast in case of a layer change. Once the nodes within the highest layer have determined their route with the least cost to one of the cluster heads, all nodes may start to send route registration packets to their nearest determined cluster head using intermediate nodes as multi-hop points. These transmissions may be made reliable via link-level acknowledgment. In this phase, the nodes keep track of their neighbors in layer n+1 which use them as multi-hop points: Supervision packets from these nodes may be confirmed during the normal operation mode that follows initialization. The route registration packets contain a header with preamble, the ID of the transmitting node, the list of neighbors in the next higher layer (optionally including the associated link costs), and the IDs of all nodes that have forwarded the packet.
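The layer-change rule of this second approach can be illustrated by the following sketch: a node compares its current cost with a newly heard route that adds one hop and, if the new route is cheaper, adopts it, moves one layer up, and marks itself for re-broadcast in the next time slot. The dictionary fields are assumptions made for illustration.

```python
def consider_new_route(node, advert, link_cost):
    """advert: dict with 'cost', 'layer', 'cluster_head' heard from a lower-layer node."""
    candidate_cost = advert["cost"] + link_cost
    candidate_layer = advert["layer"] + 1
    if candidate_cost < node["cost"]:
        node.update(cost=candidate_cost,
                    cluster_head=advert["cluster_head"],
                    layer=candidate_layer)
        node["rebroadcast"] = True        # announce the change in the next time slot
        return True
    return False

if __name__ == "__main__":
    node = {"cost": 7.0, "layer": 2, "cluster_head": "CH1", "rebroadcast": False}
    changed = consider_new_route(node,
                                 {"cost": 3.0, "layer": 2, "cluster_head": "CH2"},
                                 link_cost=2.5)
    print(changed, node)   # True; node now reports to CH2 at layer 3 with cost 5.5
```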
The cluster heads may respond with acknowledgement packets (optionally including a time stamp for re-synchronization) sent to the individual sensor/actuator nodes along the reverse path traversed by the route registration packet. If there is no link-level acknowledgement, the route registration packets may be periodically retransmitted by the sensor/actuator nodes until a valid acknowledgment has been received from the respective cluster head. The cluster heads may exchange and update the information about all registered nodes among each other. Hence, the complete topology of the sensor/actuator network may be derived at each individual cluster head. The initialization is finished when valid acknowledgments have been received at all nodes, all cluster heads contain similar information about the network topology, and the quantity and IDs of all registered sensor/actuator nodes are consistent with the information known at the cluster heads from the installation. In case of inconsistencies, an error message may be generated at the panel and the initialization process may be repeated. Once the initialization is finalized, the sensor/actuator nodes may remain in a power-saving mode of normal operation.
c. Centralized approach 1
According to a first exemplary centralized method to establish the multi-hop and cluster head networks, the cluster heads broadcast beacon signals to wake up the sensor/actuator nodes within their transmission range (i.e., "first layer nodes") from the polling mode.
Then, the cluster heads broadcast messages ("link discovery packets") for a predefined time containing a header with preamble, an ID, and time stamps allowing the recipients to synchronize their internal clocks. After the cluster heads stop transmitting, the first layer nodes may start broadcasting beacons and link discovery packets that additionally contain a node class and type. The beacons wake up the next layer of sensor/actuator nodes. This procedure of successive transmission of beacons and link discovery packets may occur until the last layer of sensor/actuator nodes has been reached. Eventually, all nodes are active and synchronized with respect to the cluster head network. The activated nodes may receive and build a list containing the IDs and classes/types of sensor/actuator nodes and cluster heads within their transmission range, as well as the average cost of the associated links. The cost may be calculated based on signal strength, packet loss rate, power reserves at the node, and networking capabilities of the node. There may be a threshold of the average link cost above which neighboring nodes are not kept in the list. Once the last layer of nodes has been activated, all nodes send link registration packets through nodes in lower layers to any one of the cluster heads. These transmissions may be made reliable via link-level acknowledgments. These packets contain a header with preamble, the ID of the transmitting node, the list of all direct neighbors including the associated link costs, and a list of the IDs of all nodes that have forwarded the particular packet. The cluster heads may respond by sending acknowledgement packets to the individual sensor/actuator nodes along the reverse path traversed by the registration packets.
The information received at individual cluster heads may be constantly shared and updated with all other cluster heads. Once a link registration packet has been received from every installed sensor/actuator node, the global routing topology for the entire sensor/actuator network may be determined at the panel or at the cluster head connected to it. The determined global routing topology may be optimized with respect to latency and equalized load distribution in the network. The different capabilities of the different node classes 0, 1, 2 may also be considered in this algorithm. Hence, the features of the centralized approach may include reduced overhead at the sensor/actuator nodes and a more evenly distributed load within the network. Under ideal circumstances, each cluster head may be connected to a cluster of nodes of approximately the same size, and each node within the cluster may again serve as a multi-hop point for an approximately equal number of nodes.
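One plausible (but by no means the only) way to compute such a balanced global topology at the panel is sketched below: hop distances from the cluster-head network are obtained by breadth-first search, and each node is then assigned the least-loaded neighbor that is one hop closer. This greedy strategy is an illustrative assumption, not the algorithm of the description, and it presumes a connected network with symmetric links.

```python
from collections import deque

def assign_parents(neighbors, cluster_heads):
    """neighbors: dict node -> set of nodes/cluster heads it can hear (symmetric)."""
    # 1. Hop distance from the cluster-head network (breadth-first search).
    dist = {ch: 0 for ch in cluster_heads}
    queue = deque(cluster_heads)
    while queue:
        current = queue.popleft()
        for nxt in neighbors.get(current, ()):
            if nxt not in dist:
                dist[nxt] = dist[current] + 1
                queue.append(nxt)
    # 2. Parent choice: eligible parents are neighbors one hop closer; pick the
    #    one currently carrying the fewest children to even out the load.
    load = {n: 0 for n in dist}
    parent = {}
    for node in sorted(dist, key=dist.get):
        if node in cluster_heads:
            continue
        eligible = [p for p in neighbors[node] if dist.get(p, -1) == dist[node] - 1]
        best = min(eligible, key=lambda p: load[p])
        parent[node] = best
        load[best] += 1
    return parent, dist
```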
Eventually, a "route definition packet" may be sent from the cluster head to each individual node containing all or a subset of the following: a header and preamble, the node's layer n, its neighbors in higher layer n+1, the neighbor in layer n-1 to be used for message forwarding, the cluster head to report to, and a time stamp for re-synchronization. The route definition packet may be periodically retransmitted until the issuing cluster head receives a valid acknowledgment packet from each individual sensor/actuator node. The reliability of the exchange maybe increased by link-level acknowledgements at each hop. Once acknowledged, this information may be continuously shared and updated among the cluster heads. The initialization may be completed when valid acknowledgments have been received at the cluster heads from all sensor/actuator nodes, all cluster heads contain the same information regarding the network topology, and the quantity and IDs of all registered nodes is consistent with the information known from the installation. In case of inconsistencies, an error message may be generated at the panel and the initialization process is repeated. Once the initialization is finalized, the sensor/actuator nodes remain in a power-saving mode of normal operation.
Operation
Cluster head network
To eliminate a central base station as a single point of failure, and to make complex and large network topologies possible without an excessive number of hops (message retransmissions) and with low latency, all cluster heads may maintain consistent information of the entire network. In particular, the cluster heads may maintain consistent information regarding all other cluster heads as well as the sensor/actuator nodes associated with them. Therefore, the databases in each cluster head may be continuously updated by exchanging data packets with the neighboring cluster heads. Moreover, redundant information, such as, for example, information about the same sensor/actuator node at more than one cluster head, may be used in order to confirm messages.
Since the information about the status of the entire network is maintained at every cluster head, the user may simultaneously monitor the entire network at multiple panels connected to several cluster heads, and may use different types of panels simultaneously. According to one exemplary embodiment, some of the cluster heads may be connected to an already existing local area network (LAN), thus allowing for access from any PC with the panel software installed. Alternatively, remote control over, for example, a secured Internet connection may be performed. Since a local area network (LAN) or Internet server may still represent a potential single point of failure, at least one dedicated panel computer may be directly linked to one of the cluster heads. This device may also provide a gateway to an outside network or to an operator. Moreover, a person carrying a mobile or handheld computer may link with any of the cluster heads in his or her vicinity via a wireless connection. Hence, the network may be controlled from virtually any location within the communication range of the cluster heads.
Sensor/actuator network
During normal operation, the sensor/actuator nodes may operate in an energy-efficient mode with a very low duty cycle. The transceiver and the microcontroller may be predominantly in a power save or sleep mode. In certain intervals (such as, for example, every ten milliseconds up to every few minutes) the sensor/actuator nodes may wake up for very brief cycles (such as, for example, within the tens of microseconds to milliseconds range) in order to detect RF beacon signals, and to perform self-tests and other tasks depending on individual device functionality. If an RF beacon is detected, the controller checks the preamble and header of the following message. If it is a valid message for this particular node, the entire message is received. If no message is received or an invalid message is received, timeouts at each of these steps allow the node to go back to low power mode in order to preserve power. If a valid message is received, the required action is taken; for example, a task is performed or a message is forwarded. If the sensor or the self-test circuitry detects an alarm state, an alarm message may be generated and broadcast, which may be relayed towards the cluster heads by intermediate nodes. Depending on the actual application, the alarm-generating node may remain awake until a confirmation from a neighboring node or from one of the cluster heads, or a control message from a cluster head, has been received. By using this mechanism of "directed flooding", alarm messages may be forwarded to one or more of the cluster heads by a multi-hop operation through intermediate nodes. This may ensure redundancy and quick transfer of urgent alarm messages. Alternatively, in less time-critical applications and for control messages sent from the cluster heads to individual nodes, the packets may be unicast from node to node in order to keep the network traffic low. In order to keep track of the status of the individual sensor/actuator nodes and the links between them, all nodes may synchronously wake up (such as, for example, within time intervals of several minutes to several hours) for the exchange of supervision messages. In order to keep the network traffic low and to preserve energy at the nodes, data aggregation schemes may be deployed.
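The "directed flooding" of alarm messages may be sketched as follows, assuming every node knows its own layer (layer 0 denoting a cluster head): an alarm is re-broadcast only by nodes closer to the cluster heads than the node it was heard from, and each alarm ID is forwarded at most once. The packet fields are illustrative assumptions.

```python
def forward_alarm(my_layer, seen_ids, alarm, broadcast):
    """alarm: dict with 'id', 'origin', 'sender_layer'; broadcast: send primitive."""
    if alarm["id"] in seen_ids:
        return False                          # duplicate suppression
    seen_ids.add(alarm["id"])
    if my_layer == 0:
        return True                           # layer 0 = cluster head: deliver locally
    if my_layer < alarm["sender_layer"]:      # only flood "downhill" toward the heads
        broadcast({**alarm, "sender_layer": my_layer})
        return True
    return False                              # nodes farther out do not forward
```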
According to one exemplary embodiment, nodes closer to a cluster head (i.e., the lower-layer nodes) may wait for the status messages from the nodes farther away from the cluster head (i.e., the higher-layer nodes) in order to consolidate information into a single message. To reduce packet size, only "not-OK status" information may explicitly be forwarded. Since the entire network topology is maintained at each of the cluster heads, this information may be sufficient to implicitly derive the status of every node without explicit OK-messages. By doing so, a minimum number of messages with minimum packet size may be generated. In an optimum case, only one brief OK message per node may be generated.
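The aggregation of supervision messages could, for example, be realized as in the following sketch, where a lower-layer node merges the reports of its higher-layer children and forwards only the IDs reporting a not-OK status; the message fields are assumptions made for illustration.

```python
def aggregate_supervision(own_id, own_ok, child_reports):
    """child_reports: list of dicts {'node': id, 'ok': bool, 'not_ok_list': [...]}"""
    not_ok = []
    if not own_ok:
        not_ok.append(own_id)                 # include this node's own status
    for report in child_reports:
        if not report["ok"]:
            not_ok.append(report["node"])
        not_ok.extend(report.get("not_ok_list", []))
    # Only the nodes that are explicitly not OK are forwarded; the cluster heads
    # infer the OK status of everything else from the known topology.
    return {"reporter": own_id, "not_ok_list": sorted(set(not_ok))}

if __name__ == "__main__":
    msg = aggregate_supervision("N7", True, [
        {"node": "N12", "ok": True, "not_ok_list": []},
        {"node": "N13", "ok": False, "not_ok_list": ["N20"]},
    ])
    print(msg)   # {'reporter': 'N7', 'not_ok_list': ['N13', 'N20']}
```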
In order to ensure that the status of each node is sent to at least one cluster head during every supervision interval, the status messages may be acknowledged by the receiving lower-layer nodes. The acknowledgment packets may also contain a time stamp, thus allowing for successive re-synchronization of the internal clocks of every sensor/actuator node with the cluster head network.
Furthermore, the nodes may implicitly include the status information (e.g., not-OK information only) of all lower-layer nodes within their hearing range in their own messages, i.e., also of nodes which receive acknowledgments from other nodes and even report to other cluster heads. This may lead to a high redundancy of the status information received by the cluster heads. Since the entire network topology may be maintained at the cluster heads, this information may be utilized to distinguish between link and node failures, and to reduce the number of false alarms. Confirmation messages from the cluster heads or from neighboring nodes may also be used for resynchronization of the clocks of the sensor/actuator nodes.
Reconfiguration
In case of a failure of individual nodes or links in the sensor/actuator network, the network may reconfigure without user intervention so that links to all operable nodes of the remaining network may be re-established.
In an exemplary decentralized approach, this task may be performed by using information about alternative ("second best") routes to one of the cluster heads derived and stored locally at the individual sensor/actuator nodes. Additionally, lost nodes may use "SOS" messages to retrieve a valid route from one of their neighbors.
Alternatively, in an exemplary centralized approach the cluster heads may provide disconnected sensor/actuator nodes with a new route chosen from the list of all possible routes maintained at the cluster heads. Moreover, a combination of both approaches may be implemented into one system in order to increase the speed of reconfiguration and to decrease the necessary packet overhead in the case of small local glitches.
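The following sketch combines the two reconfiguration ideas in simplified form: a node first falls back to a locally stored second-best route and, if none is left, broadcasts an SOS message and adopts the cheapest route offered by a neighbor. send_sos() is a placeholder for a broadcast-and-collect-replies primitive; all structures are illustrative.

```python
def reconfigure(routes, send_sos):
    """routes: list of (cost, next_hop) candidates sorted cheapest first."""
    if routes:
        routes.pop(0)                  # drop the failed primary route
    if routes:
        return routes[0]               # decentralized case: use the stored backup
    replies = send_sos()               # neighbor-assisted case: ask for a valid route
    if not replies:
        return None                    # wait for a route update from a cluster head
    best = min(replies, key=lambda r: r[0])
    routes.append(best)                # adopt the cheapest offered route
    return best
```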
For either approach, in case of a severe failure of several nodes, some or all cluster heads may start a partial or complete reconfiguration of the sensor/actuator network by sending new route update packets.
In case of a link failure within the cluster head network, alternative routes may immediately be established when all cluster heads maintain a table with all possible links. However, in case of a failure of one or more cluster heads, a reconfiguration of the corresponding sensor/actuator nodes may be required to reintegrate them into the network of remaining cluster heads. Since all cluster heads are configured to have complete knowledge about the entire network topology, a majority of the network may remain operable despite failure of several cluster heads or links by fragmenting into two or more parts. In this instance, the information about the nodes in each of the fragments may still be available at any of the cluster heads in this fragment.

Claims

What Is Claimed Is:
1. A wireless network, comprising: a cluster head network having at least one cluster head; and a sensor/actuator network arranged in a hierarchical manner with the cluster head network and having a plurality of sensor/actuator nodes arranged in a plurality of node levels and being self-organizing.
2. The wireless network of claim 1, wherein the wireless network does not include a single point of failure.
3. The wireless network of claim 1, wherein the wireless network does not include a central base station.
4. The wireless network of claim 1, wherein the at least one cluster head includes redundant information regarding the wireless network as compared to another cluster head.
5. The wireless network of claim 4, wherein the redundant information is accessible by a user at any of the at least one cluster head.
6. The wireless network of claim 5, wherein the wireless network is configured so that the user interacts with at least one of the at least one cluster head and the redundant information.
7. The wireless network of claim 1, wherein the at least one cluster head is configured to one of control and supervise the sensor/actuator nodes so that a task is performed by any cluster head.
8. The wireless network of claim 1, wherein the wireless network is configurable without at least one of user interaction and detailed planning.
9. The wireless network of claim 1, wherein the wireless network is reconfigurable despite at least one of a link failure and a sensor/actuator node failure.
10. The wireless network of claim 1, wherein the useful lifetime of the wireless network is optimized.
11. The wireless network of claim 1, wherein the wireless network is applied for at least one of security, home automation, climate control, and control and surveillance.
12. The wireless network of claim 1, wherein the at least one cluster head includes a base station.
13. The wireless network of claim 1, wherein the sensor/actuator nodes include a sensor element.
14. The wireless network of claim 13, wherein the sensor element includes at least one of a smoke detector, a motion detector, a temperature sensor, and a door/window contact.
15. The wireless network of claim 1, wherein the sensor/actuator nodes include an actuator.
16. The wireless network of claim 15, wherein the actuator includes at least one of an alarm sounder and a valve actuator.
17. The wireless network of claim 1, wherein the sensor/actuator nodes include a power-line powered node.
18. The wireless network of claim 17, wherein the power-line powered node is configured to serve as a multi-hop point.
19. The wireless network of claim 17, wherein the power-line powered node includes a backup power supply.
20. The wireless network of claim 1, wherein the sensor/actuator nodes include a battery-powered node.
21. The wireless network of claim 20, wherein the battery-powered node includes at least one of a solar cell and an electro-ceramic generator.
22. The wireless network of claim 20, wherein the battery-powered node is configured to serve as a multi-hop point.
23. The wireless network of claim 1, wherein the sensor/actuator nodes include a battery-powered node with limited capabilities.
24. The wireless network of claim 1, wherein the cluster head includes a first radio module to communicate with the sensor/actuator nodes and a second radio module to communicate with other cluster heads.
25. The wireless network of claim 1, wherein the at least one cluster head includes a single radio module to communicate with the sensor/actuator nodes and with other cluster heads.
26. The wireless network of claim 1, wherein the at least one cluster head includes one of a standard wire-based or a standard wireless transceiver.
27. The wireless network of claim 1, wherein the at least one cluster head includes at least one of an Ethernet and a Bluetooth transceiver.
28. The wireless network of claim 1, further comprising: a panel connected to the cluster head network.
29. The wireless network of claim 28, wherein the panel is connected to the cluster head network via at least one of a local area network, a dedicated wire-based link, and a dedicated wireless link.
30. The wireless network of claim 28, wherein the panel includes software to auto-configure the at least one of the cluster head network and the sensor/actuator node network.
31. The wireless network of claim 28, wherein the panel includes a personal computer.
32. The wireless network of claim 1, wherein the sensor/actuator nodes are configured to operate in an energy-saving polling mode.
33. The wireless network of claim 1, wherein the sensor/actuator nodes are configured to be woken up by a built-in timer to scan a channel for a beacon signal.
34. The wireless network of claim 33, wherein the beacon signal is one of an RF tone and a special sequence.
35. The wireless network of claim 1, wherein the at least one cluster head and the sensor/actuator nodes include a unique identifier.
36. A method of wirelessly networking sensor/actuator nodes, comprising: initializing a cluster head network; initializing a sensor/actuator node network to form an integrated wireless network, the sensor/actuator nodes forming a plurality of node levels; and operating the integrated wireless network.
37. The method of claim 36, wherein the step of initializing the cluster head network further includes: providing a cluster head with a list of identifiers of cluster heads of the cluster head network; discovering links between the cluster heads by exchanging inquiry packets and the list of identifiers; updating entries of routing tables; and determining an optimum topology based on at least one of message delay, a number of hops, and connections between the cluster heads.
38. The method of claim 36, wherein the step of initializing the sensor/actuator node network further includes: transmitting beacon signals and link discovery packets from cluster heads to a first layer of sensor/actuator nodes to wake up the first layer of sensor/actuator nodes and to gather link information; successively transmitting the beacon signals and link discovery packets from the lower layer nodes to the higher layer nodes to wake up the higher layer nodes and to gather the link information; transmitting route discovery packets to the sensor/actuator nodes; transmitting route registration packets to the cluster heads including the link information; and sharing the link information with all cluster heads of the cluster head network.
39. The method of claim 36, wherein the step of initializing the sensor/actuator node network further includes: successively transmitting beacon signals and link discovery packets to each of the node levels to wake up the sensor/actuator nodes and to gather link information; registering the sensor/actuator nodes by sending the link information to the cluster head network; and sharing the link information with all cluster heads of the cluster head network.
40. The method of claim 36, wherein the step of operating the integrated wireless network further includes: continuously sharing link information among the cluster heads of the cluster head network; and operating the sensor/actuator nodes in an energy-efficient mode.
41. The method of claim 40, wherein the step of operating the sensor/actuator nodes further includes: waking up the sensor/actuator nodes for a brief cycle to one of detect beacon signals, perform a self-test, and perform a task.
42. The method of claim 36, further comprising: reconfiguring the sensor/actuator network in case of one of a link failure and a node failure.
43. The method of claim 42, wherein the reconfiguring step further includes: determining an alternate route according to link information stored at a sensor/actuator node.
44. The method of claim 42, wherein the reconfiguration step further includes: transmitting an SOS message to a neighbor sensor/actuator node of a lost sensor/actuator node to retrieve link information stored at the neighbor sensor/actuator node regarding the lost sensor/actuator node.
45. The method of claim 42, wherein the reconfiguration step further includes: determining an alternative route according to the link information stored at the cluster head.
46. The method of claim 42, wherein the reconfiguration step further includes: fragmenting the integrated wireless network into more than one segment.
PCT/US2003/000781 2002-01-10 2003-01-10 Self-organizing hierarchical wireless network for surveillance and control WO2003061175A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP03707354A EP1474935A4 (en) 2002-01-10 2003-01-10 Self-organizing hierarchical wireless network for surveillance and control
AU2003209207A AU2003209207A1 (en) 2002-01-10 2003-01-10 Self-organizing hierarchical wireless network for surveillance and control
JP2003561140A JP4230917B2 (en) 2002-01-10 2003-01-10 Hierarchical wireless self-organizing network for management and control

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US34756902P 2002-01-10 2002-01-10
US60/347,569 2002-01-10
US10/301,394 US20030151513A1 (en) 2002-01-10 2002-11-21 Self-organizing hierarchical wireless network for surveillance and control
US10/301,394 2002-11-21

Publications (2)

Publication Number Publication Date
WO2003061175A2 true WO2003061175A2 (en) 2003-07-24
WO2003061175A3 WO2003061175A3 (en) 2003-10-16

Family

ID=26972342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/000781 WO2003061175A2 (en) 2002-01-10 2003-01-10 Self-organizing hierarchical wireless network for surveillance and control

Country Status (5)

Country Link
US (1) US20030151513A1 (en)
EP (1) EP1474935A4 (en)
JP (1) JP4230917B2 (en)
AU (1) AU2003209207A1 (en)
WO (1) WO2003061175A2 (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005278185A (en) * 2004-03-23 2005-10-06 Agilent Technol Inc Method for operating sensor network and sensor device
WO2006058966A1 (en) * 2004-12-01 2006-06-08 Selmic Oy Wireless measuring arrangement and a measuring device utilized in it
WO2006068675A1 (en) 2004-09-10 2006-06-29 Nivis, Llc System and method for message consolidation in a mesh network
WO2006121114A1 (en) * 2005-05-12 2006-11-16 Yokogawa Electric Corporation Field device control system
WO2007035561A1 (en) * 2005-09-20 2007-03-29 Robert Bosch Gmbh Method and apparatus for adding wireless devices to a security system
EP1770667A2 (en) 2005-09-30 2007-04-04 Robert Bosch GmbH Method for synchronizing devices in wireless networks
EP1787404A2 (en) * 2004-09-10 2007-05-23 Nivis, LLC System and method for communicating messages in a mesh network
WO2007068710A1 (en) * 2005-12-14 2007-06-21 Siemens Aktiengesellschaft Method for operating a radio network and subscriber device for said type of network
JP2007517416A (en) * 2003-12-30 2007-06-28 ノキア コーポレイション Communication system using relay base station with asymmetric data link
EP1803309A2 (en) * 2004-09-30 2007-07-04 Smartvue Corporation Wireless video surveillance system and method
WO2007078422A2 (en) * 2005-12-22 2007-07-12 The Boeing Company Surveillance network system
JP2007520923A (en) * 2003-12-19 2007-07-26 ソニー ドイチュラント ゲゼルシャフト ミット ベシュレンクテル ハフツング Wireless communication network architecture
EP1829291A1 (en) * 2004-12-22 2007-09-05 Mikko Kohvakka Energy efficient wireless sensor network, node devices for the same and a method for arranging communications in a wireless sensor network
US7656829B2 (en) 2004-01-06 2010-02-02 Samsung Electronics Co., Ltd. System and method for determining data transmission path in communication system consisting of nodes
WO2010030346A1 (en) * 2008-09-09 2010-03-18 Dow Agrosciences, Llc Networked pest control system
WO2013096618A1 (en) * 2011-12-20 2013-06-27 Cisco Technology, Inc. Assisted intelligent routing for minimalistic connected object networks
WO2013167618A1 (en) 2012-05-11 2013-11-14 Continental Automotive Gmbh Method for activating de-activated control units of a vehicle and vehicle network and node of said vehicle network
US8681754B2 (en) 2007-09-20 2014-03-25 Yokogawa Electric Corporation Wireless control system
US8774080B2 (en) 2008-12-12 2014-07-08 Yokogawa Electric Corporation Gateway devices and wireless control network management system using the same
CN104025523A (en) * 2011-10-31 2014-09-03 德国弗劳恩霍夫应用研究促进协会 Apparatus And Method For Transmitting A Message To Multiple Receivers
WO2016050993A1 (en) * 2014-09-30 2016-04-07 Casiopea Esm2M, S.L. Method and system for the mass management of wireless devices by means of self-organising protocol
US9854653B1 (en) 2017-01-31 2017-12-26 Crestron Electronics Inc. Scalable building control system, method, and apparatus
US9888393B2 (en) 2005-03-10 2018-02-06 Qualocmm Incorporated Method and apparatus for automatic configuration of wireless communication networks
EP1626532B2 (en) 2004-08-09 2021-05-19 CF Magnus LLC Wireless building control architecture

Families Citing this family (223)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7480857B2 (en) * 2004-09-10 2009-01-20 Igt Method and apparatus for data communication in a gaming system
US7940716B2 (en) 2005-07-01 2011-05-10 Terahop Networks, Inc. Maintaining information facilitating deterministic network routing
ATE530961T1 (en) 2002-01-28 2011-11-15 Siemens Industry Inc BUILDING SYSTEM WITH REDUCED WIRING REQUIREMENTS AND EQUIPMENT
US7349761B1 (en) * 2002-02-07 2008-03-25 Cruse Mike B System and method for distributed facility management and operational control
US7145892B2 (en) * 2002-03-05 2006-12-05 Dell Products, L.P. Method and apparatus for adaptive wireless information handling system bridging
US7778606B2 (en) * 2002-05-17 2010-08-17 Network Security Technologies, Inc. Method and system for wireless intrusion detection
US20050254429A1 (en) * 2002-06-28 2005-11-17 Takeshi Kato Management node deice, node device, network configuration management system, network configuration management method, node device control method, management node device control method
US8149707B2 (en) * 2003-02-12 2012-04-03 Rockstar Bidco, LP Minimization of radio resource usage in multi-hop networks with multiple routings
US7603710B2 (en) * 2003-04-03 2009-10-13 Network Security Technologies, Inc. Method and system for detecting characteristics of a wireless network
US7853250B2 (en) 2003-04-03 2010-12-14 Network Security Technologies, Inc. Wireless intrusion detection system and method
DE10317586B3 (en) * 2003-04-16 2005-04-28 Siemens Ag Method for radio transmission in a hazard detection system
US7701858B2 (en) * 2003-07-17 2010-04-20 Sensicast Systems Method and apparatus for wireless communication in a mesh network
US20050151653A1 (en) * 2003-07-25 2005-07-14 Chan Wee P. Method and apparatus for determining the occurrence of animal incidence
US7346021B2 (en) * 2003-08-06 2008-03-18 Matsushita Electric Industrial Co., Ltd. Master station in communications system and access control method
US7457271B2 (en) * 2003-09-19 2008-11-25 Marvell International Ltd. Wireless local area network ad-hoc mode for reducing power consumption
DE10350906B4 (en) * 2003-10-31 2005-12-01 Siemens Ag Method for determining a path in an ad hoc radio communication system
US7414977B2 (en) * 2003-11-25 2008-08-19 Mitsubishi Electric Research Laboratories, Inc. Power and delay sensitive ad-hoc communication networks
JP2005184727A (en) * 2003-12-24 2005-07-07 Hitachi Ltd Radio communication system, radio node, constructing method for radio communication system, location measuring method for node
DE102004016580B4 (en) * 2004-03-31 2008-11-20 Nec Europe Ltd. Method of transmitting data in an ad hoc network or a sensor network
US20050267960A1 (en) * 2004-05-12 2005-12-01 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Mote-associated log creation
US9261383B2 (en) 2004-07-30 2016-02-16 Triplay, Inc. Discovery of occurrence-data
US20050227686A1 (en) * 2004-03-31 2005-10-13 Jung Edward K Y Federating mote-associated index data
US8161097B2 (en) * 2004-03-31 2012-04-17 The Invention Science Fund I, Llc Aggregating mote-associated index data
US7366544B2 (en) * 2004-03-31 2008-04-29 Searete, Llc Mote networks having directional antennas
US20060062252A1 (en) * 2004-06-30 2006-03-23 Jung Edward K Mote appropriate network power reduction techniques
US8275824B2 (en) * 2004-03-31 2012-09-25 The Invention Science Fund I, Llc Occurrence data detection and storage for mote networks
US20050265388A1 (en) * 2004-05-12 2005-12-01 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Aggregating mote-associated log data
US20060064402A1 (en) * 2004-07-27 2006-03-23 Jung Edward K Y Using federated mote-associated indexes
US7457834B2 (en) 2004-07-30 2008-11-25 Searete, Llc Aggregation and retrieval of network sensor data
US20050256667A1 (en) * 2004-05-12 2005-11-17 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Federating mote-associated log data
US20060079285A1 (en) * 2004-03-31 2006-04-13 Jung Edward K Y Transmission of mote-associated index data
US8335814B2 (en) * 2004-03-31 2012-12-18 The Invention Science Fund I, Llc Transmission of aggregated mote-associated index data
WO2005101710A2 (en) * 2004-03-31 2005-10-27 Searete Llc Transmission of aggregated mote-associated index data
US8200744B2 (en) * 2004-03-31 2012-06-12 The Invention Science Fund I, Llc Mote-associated index creation
US7929914B2 (en) * 2004-03-31 2011-04-19 The Invention Science Fund I, Llc Mote networks using directional antenna techniques
US7536388B2 (en) 2004-03-31 2009-05-19 Searete, Llc Data storage for distributed sensor networks
US7941188B2 (en) 2004-03-31 2011-05-10 The Invention Science Fund I, Llc Occurrence data detection and storage for generalized sensor networks
US8346846B2 (en) * 2004-05-12 2013-01-01 The Invention Science Fund I, Llc Transmission of aggregated mote-associated log data
US7580730B2 (en) * 2004-03-31 2009-08-25 Searete, Llc Mote networks having directional antennas
US7317898B2 (en) 2004-03-31 2008-01-08 Searete Llc Mote networks using directional antenna techniques
US7599696B2 (en) 2004-06-25 2009-10-06 Searete, Llc Frequency reuse techniques in mote-appropriate networks
US20060004888A1 (en) * 2004-05-21 2006-01-05 Searete Llc, A Limited Liability Corporation Of The State Delaware Using mote-associated logs
US7389295B2 (en) * 2004-06-25 2008-06-17 Searete Llc Using federated mote-associated logs
US9062992B2 (en) * 2004-07-27 2015-06-23 TriPlay Inc. Using mote-associated indexes
US20050220106A1 (en) * 2004-03-31 2005-10-06 Pierre Guillaume Raverdy Inter-wireless interactions using user discovery for ad-hoc environments
US20050255841A1 (en) * 2004-05-12 2005-11-17 Searete Llc Transmission of mote-associated log data
FR2869134B1 (en) 2004-04-16 2008-10-03 Somfy Soc Par Actions Simplifiee METHOD FOR TRANSMITTING INFORMATION BETWEEN BIDIRECTIONAL OBJECTS
FR2869182B1 (en) * 2004-04-20 2008-03-28 Thales Sa ROUTING METHOD IN AN AD HOC NETWORK
US7907934B2 (en) * 2004-04-27 2011-03-15 Nokia Corporation Method and system for providing security in proximity and Ad-Hoc networks
DE102004021385A1 (en) * 2004-04-30 2005-11-17 Daimlerchrysler Ag Data communication network with decentralized communication management
US7142107B2 (en) 2004-05-27 2006-11-28 Lawrence Kates Wireless sensor unit
US7623028B2 (en) 2004-05-27 2009-11-24 Lawrence Kates System and method for high-sensitivity sensor
US7475158B2 (en) * 2004-05-28 2009-01-06 International Business Machines Corporation Method for enabling a wireless sensor network by mote communication
US7239626B2 (en) 2004-06-30 2007-07-03 Sharp Laboratories Of America, Inc. System clock synchronization in an ad hoc and infrastructure wireless networks
US7505734B2 (en) * 2004-09-10 2009-03-17 Nivis, Llc System and method for communicating broadcast messages in a mesh network
US7554941B2 (en) * 2004-09-10 2009-06-30 Nivis, Llc System and method for a wireless mesh network
US7817994B2 (en) * 2004-09-20 2010-10-19 Robert Bosch Gmbh Secure control of wireless sensor network via the internet
US7769848B2 (en) * 2004-09-22 2010-08-03 International Business Machines Corporation Method and systems for copying data components between nodes of a wireless sensor network
US7139239B2 (en) * 2004-10-05 2006-11-21 Siemens Building Technologies, Inc. Self-healing control network for building automation systems
JP2008517563A (en) 2004-10-20 2008-05-22 ランコ インコーポレーテッド オブ デラウェア Communication method between simple functional devices in IEEE802.15.4 network
US20070198675A1 (en) * 2004-10-25 2007-08-23 International Business Machines Corporation Method, system and program product for deploying and allocating an autonomic sensor network ecosystem
DE112005003430A5 (en) 2004-11-25 2007-10-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for synchronization and data transmission
US20060114847A1 (en) * 2004-12-01 2006-06-01 Rachida Dssouli User agent and super user agent for cluster-based multi-party conferencing in ad-hoc networks
WO2006069067A2 (en) * 2004-12-20 2006-06-29 Sensicast Systems Method for reporting and accumulating data in a wireless communication network
US7565357B2 (en) * 2004-12-30 2009-07-21 Alcatel Lucent Multi-sensor communication system
DE102005019064A1 (en) 2005-01-25 2006-08-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Mobile telephone package monitoring method, involves providing packages with radio nodes, respectively, where one of nodes conveys information concerning change to monitoring unit and/or controls alarm transmitter for activating alarm
US7826373B2 (en) * 2005-01-28 2010-11-02 Honeywell International Inc. Wireless routing systems and methods
US20060178847A1 (en) * 2005-02-09 2006-08-10 Glancy John E Apparatus and method for wireless real time measurement and control of soil and turf conditions
US20060256802A1 (en) * 2005-03-31 2006-11-16 Paul Edwards Communication system using endpoint devices as routers
EP1713206A1 (en) * 2005-04-11 2006-10-18 Last Mile Communications/Tivis Limited A distributed communications network comprising wirelessly linked base stations
JP4801929B2 (en) * 2005-04-27 2011-10-26 新川センサテクノロジ株式会社 SENSOR DEVICE HAVING WIRELESS DATA TRANSMISSION FUNCTION, METHOD OF OPERATING THE SENSOR DEVICE, SENSOR SYSTEM COMPRISING THE SENSOR DEVICE
US7881755B1 (en) 2005-05-26 2011-02-01 Marvell International Ltd. Wireless LAN power savings
US7894372B2 (en) * 2005-05-31 2011-02-22 Iac Search & Media, Inc. Topology-centric resource management for large scale service clusters
US7742394B2 (en) * 2005-06-03 2010-06-22 Honeywell International Inc. Redundantly connected wireless sensor networking methods
US7848223B2 (en) * 2005-06-03 2010-12-07 Honeywell International Inc. Redundantly connected wireless sensor networking methods
US20060285509A1 (en) * 2005-06-15 2006-12-21 Johan Asplund Methods for measuring latency in a multicast environment
KR100679250B1 (en) * 2005-07-22 2007-02-05 한국전자통신연구원 Method for automatically selecting a cluster header in a wireless sensor network and for dynamically configuring a secure wireless sensor network
US20070030816A1 (en) * 2005-08-08 2007-02-08 Honeywell International Inc. Data compression and abnormal situation detection in a wireless sensor network
US8134942B2 (en) * 2005-08-24 2012-03-13 Avaak, Inc. Communication protocol for low-power network applications and a network of sensors using the same
US8041772B2 (en) * 2005-09-07 2011-10-18 International Business Machines Corporation Autonomic sensor network ecosystem
US20090153306A1 (en) * 2005-11-28 2009-06-18 Anatoli Stobbe Security System
KR100749177B1 (en) * 2005-12-14 2007-08-14 한국과학기술정보연구원 Routing method in sensor network
US8285326B2 (en) * 2005-12-30 2012-10-09 Honeywell International Inc. Multiprotocol wireless communication backbone
EP1804433A1 (en) * 2005-12-30 2007-07-04 Nederlandse Organisatie voor toegepast-natuurwetenschappelijk Onderzoek TNO Initialization of a wireless communication network
US7570623B2 (en) * 2006-01-24 2009-08-04 Motorola, Inc. Method and apparatus for operating a node in a beacon based ad hoc network
US8082340B2 (en) * 2006-01-30 2011-12-20 Cisco Technology, Inc. Technique for distinguishing between link and node failure using bidirectional forwarding detection (BFD)
US20070195808A1 (en) * 2006-02-17 2007-08-23 Wabash National, L.P. Wireless vehicle mesh network
JP2007228430A (en) * 2006-02-24 2007-09-06 Tops Systems:Kk Encryption wireless security system
JP4807701B2 (en) 2006-02-28 2011-11-02 国立大学法人 名古屋工業大学 Mobile terminal device, control method, and mobile communication system
US8300798B1 (en) 2006-04-03 2012-10-30 Wai Wu Intelligent communication routing system and method
US8645514B2 (en) * 2006-05-08 2014-02-04 Xerox Corporation Method and system for collaborative self-organization of devices
FR2901440B1 (en) * 2006-05-19 2008-11-21 Schneider Electric Ind Sas COMMUNICATION GATEWAY BETWEEN WIRELESS COMMUNICATION NETWORKS
CN100396049C (en) * 2006-05-26 2008-06-18 北京交通大学 Cluster chief election method based on node type for ad hoc network
JP4659680B2 (en) * 2006-06-01 2011-03-30 三菱電機株式会社 Base communication terminal and network system
JP4833745B2 (en) * 2006-06-12 2011-12-07 株式会社日立製作所 Data protection method for sensor node, computer system for distributing sensor node, and sensor node
US7957853B2 (en) * 2006-06-13 2011-06-07 The Mitre Corporation Flight restriction zone detection and avoidance
US7742399B2 (en) * 2006-06-22 2010-06-22 Harris Corporation Mobile ad-hoc network (MANET) and method for implementing multiple paths for fault tolerance
US7848278B2 (en) * 2006-10-23 2010-12-07 Telcordia Technologies, Inc. Roadside network unit and method of organizing, managing and maintaining local network using local peer groups as network groups
US7751340B2 (en) * 2006-11-03 2010-07-06 Microsoft Corporation Management of incoming information
JP4982191B2 (en) * 2007-01-17 2012-07-25 独立行政法人情報通信研究機構 Sensor network
US20080175210A1 (en) * 2007-01-24 2008-07-24 Johnson Controls Technology Company Distributed spectrum analyzer
US20100102926A1 (en) * 2007-03-13 2010-04-29 Syngenta Crop Protection, Inc. Methods and systems for ad hoc sensor network
WO2008135975A2 (en) * 2007-05-02 2008-11-13 Visonic Ltd. Wireless communication system
US20090065596A1 (en) * 2007-05-09 2009-03-12 Johnson Controls Technology Company Systems and methods for increasing building space comfort using wireless devices
US9918218B2 (en) * 2007-06-12 2018-03-13 Avago Technologies General Ip (Singapore) Pte. Ltd. Method and system for a networked self-configuring communication device utilizing user preference information
US7940177B2 (en) * 2007-06-15 2011-05-10 The Johns Hopkins University System and methods for monitoring security zones
KR101378257B1 (en) * 2007-06-15 2014-03-25 삼성전자주식회사 Method for construction virtual backbone in wireless sensor networks
US20090010207A1 (en) * 2007-07-02 2009-01-08 Amin Rashid Ismail Method and system to augment legacy telemetry systems and sensors
US20090067363A1 (en) * 2007-07-31 2009-03-12 Johnson Controls Technology Company System and method for communicating information from wireless sources to locations within a building
WO2009018215A1 (en) * 2007-07-31 2009-02-05 Johnson Controls Technology Company Devices for receiving and using energy from a building environment
US8280057B2 (en) * 2007-09-04 2012-10-02 Honeywell International Inc. Method and apparatus for providing security in wireless communication networks
JP5196931B2 (en) * 2007-09-25 2013-05-15 キヤノン株式会社 Network system and control wireless device
US8413227B2 (en) 2007-09-28 2013-04-02 Honeywell International Inc. Apparatus and method supporting wireless access to multiple security layers in an industrial control and automation system or other system
WO2009057833A1 (en) * 2007-10-30 2009-05-07 Ajou University Industry Cooperation Foundation Method of routing path in wireless sensor networks based on clusters
US20090201840A1 (en) * 2008-02-08 2009-08-13 Pfeiffer Jr Loren K Wireless networks using a rooted, directed topology
US9128202B2 (en) 2008-04-22 2015-09-08 Srd Innovations Inc. Wireless data acquisition network and operating methods
US8150950B2 (en) * 2008-05-13 2012-04-03 Schneider Electric USA, Inc. Automated discovery of devices in large utility monitoring systems
WO2009140669A2 (en) 2008-05-16 2009-11-19 Terahop Networks, Inc. Securing, monitoring and tracking shipping containers
US8023443B2 (en) * 2008-06-03 2011-09-20 Simmonds Precision Products, Inc. Wireless sensor system
KR100969158B1 (en) * 2008-06-30 2010-07-08 경희대학교 산학협력단 Method of trust management in wireless sensor networks
IT1391644B1 (en) * 2008-07-22 2012-01-17 Multimedia Italian Tech S R L MONITORING SYSTEM FOR ENVIRONMENTAL PARAMETERS IN AGRICULTURAL PLANTS AND / OR SINGLE TREES
NL2001870C2 (en) * 2008-08-01 2010-02-02 Jacobus Petrus Johannes Bisseling System for monitoring presence of vermin i.e. rat, and wood-depleting conditions in building, has sensors provided at different points in building, which is linked with processing unit, where each sensor includes transmitter and receiver
WO2010014872A1 (en) * 2008-08-01 2010-02-04 Nivis, Llc Systems and methods for determining link quality
US8392606B2 (en) * 2008-09-23 2013-03-05 Synapse Wireless, Inc. Wireless networks and methods using multiple valid network identifiers
WO2010036885A2 (en) * 2008-09-25 2010-04-01 Fisher-Rosemount Systems, Inc. Wireless mesh network with pinch point and low battery alerts
TWI384423B (en) * 2008-11-26 2013-02-01 Ind Tech Res Inst Alarm method and system based on voice events, and method for building behavior trajectories therefrom
CN102365901B (en) * 2009-04-07 2014-10-29 瑞典爱立信有限公司 Attaching a sensor to a WSAN
TWI398182B (en) * 2009-09-01 2013-06-01 Univ Nat Taiwan Multi-hop routing algorithm for wireless sensor networks
TWI401979B (en) * 2009-10-14 2013-07-11 Ind Tech Res Inst Access authorization method and apparatus for a wireless sensor network
US9104211B2 (en) 2010-11-19 2015-08-11 Google Inc. Temperature controller with model-based time to target calculation and display
US8918492B2 (en) * 2010-10-29 2014-12-23 Siemens Industry, Inc. Field panel with embedded webserver and method of accessing the same
US9448567B2 (en) 2010-11-19 2016-09-20 Google Inc. Power management in single circuit HVAC systems and in multiple circuit HVAC systems
US9046898B2 (en) 2011-02-24 2015-06-02 Google Inc. Power-preserving communications architecture with long-polling persistent cloud channel for wireless network-connected thermostat
US9268344B2 (en) 2010-11-19 2016-02-23 Google Inc. Installation of thermostat powered by rechargeable battery
JP2012118655A (en) * 2010-11-30 2012-06-21 Shinshu Univ Remote monitoring system
KR101268009B1 (en) * 2011-02-22 2013-05-27 서울대학교산학협력단 System and method for self-organization of wireless sensor networks
US8944338B2 (en) 2011-02-24 2015-02-03 Google Inc. Thermostat with self-configuring connections to facilitate do-it-yourself installation
CN102122993B (en) * 2011-03-11 2013-09-25 华南理工大学 Method and device for remote underwater acoustic communication
WO2012139288A1 (en) 2011-04-13 2012-10-18 Renesas Mobile Corporation Sensor network information collection via mobile gateway
US8644999B2 (en) * 2011-06-15 2014-02-04 General Electric Company Keep alive method for RFD devices
JP5807199B2 (en) * 2011-06-22 2015-11-10 パナソニックIpマネジメント株式会社 Communication system, wireless device, and wireless device program
GB2492329B (en) 2011-06-24 2018-02-28 Skype Video coding
US9119019B2 (en) * 2011-07-11 2015-08-25 Srd Innovations Inc. Wireless mesh network and method for remote seismic recording
GB2495468B (en) 2011-09-02 2017-12-13 Skype Video coding
GB2495467B (en) 2011-09-02 2017-12-13 Skype Video coding
US20130114582A1 (en) * 2011-11-03 2013-05-09 Digi International Inc. Wireless mesh network device protocol translation
JP5762931B2 (en) * 2011-11-18 2015-08-12 公益財団法人鉄道総合技術研究所 Method for evaluating installation and operating costs of a wireless sensor network for condition monitoring of target structures
FR2982959B1 (en) * 2011-11-22 2014-06-27 Schneider Electric Usa Inc Synchronization of data in a cooperative distributed control system
US20130197955A1 (en) * 2012-01-31 2013-08-01 Fisher-Rosemount Systems, Inc. Apparatus and method for establishing maintenance routes within a process control system
EP2936849B1 (en) * 2012-12-20 2016-11-16 Telefonaktiebolaget LM Ericsson (publ) Integrating multi-hop mesh networks in mobile communication networks
US9699768B2 (en) * 2012-12-26 2017-07-04 Ict Research Llc Mobility extensions to industrial-strength wireless sensor networks
US20140223010A1 (en) * 2013-02-01 2014-08-07 David Alan Hayner Data Compression and Encryption in Sensor Networks
FR3004046B1 (en) * 2013-03-28 2015-04-17 Commissariat Energie Atomique Method and device for forming a secure, resource-constrained wireless network
US9578599B2 (en) * 2013-04-25 2017-02-21 Honeywell International Inc. System and method for optimizing battery life in wireless multi-hop communication systems
US9172517B2 (en) * 2013-06-04 2015-10-27 Texas Instruments Incorporated Network power optimization via white lists
CN103337142B (en) * 2013-07-22 2015-12-02 长沙威胜信息技术有限公司 Wireless networking method for an electric energy meter management system
US9858805B2 (en) * 2013-09-24 2018-01-02 Honeywell International Inc. Remote terminal unit (RTU) with wireless diversity and related method
US9338741B2 (en) * 2013-11-11 2016-05-10 Mivalife Mobile Technology, Inc. Security system device power management
US10327197B2 (en) 2014-01-31 2019-06-18 Qualcomm Incorporated Distributed clustering of wireless network nodes
US10248601B2 (en) 2014-03-27 2019-04-02 Honeywell International Inc. Remote terminal unit (RTU) with universal input/output (UIO) and related method
US8850034B1 (en) 2014-04-15 2014-09-30 Quisk, Inc. Service request fast fail circuit breaker
US20150350906A1 (en) * 2014-05-30 2015-12-03 Qualcomm Incorporated Systems and methods for selective association
FR3023662B1 (en) * 2014-07-10 2017-10-20 Traxens Method for joining a cluster of electronic devices communicating via a wireless network, electronic device implementing the method, and associated system
US9875207B2 (en) 2014-08-14 2018-01-23 Honeywell International Inc. Remote terminal unit (RTU) hardware architecture
US10277748B2 (en) * 2014-09-29 2019-04-30 Cardo Systems, Inc. Ad-Hoc communication network and communication method
US10476743B2 (en) * 2014-10-13 2019-11-12 Cisco Technology, Inc. Automatic creation and management of a community of things for Internet of Things (IoT) applications
US9794873B2 (en) * 2014-11-14 2017-10-17 Telefonaktiebolaget Lm Ericsson (Publ) Power saving in wireless transceiver device
US9699594B2 (en) * 2015-02-27 2017-07-04 Plantronics, Inc. Mobile user device and method of communication over a wireless medium
US10684030B2 (en) 2015-03-05 2020-06-16 Honeywell International Inc. Wireless actuator service
CN104822143B (en) * 2015-05-04 2018-08-21 东南大学 Source node location privacy protection method resistant to traffic analysis attacks
US10374904B2 (en) 2015-05-15 2019-08-06 Cisco Technology, Inc. Diagnostic network visualization
DE102015209129B3 (en) * 2015-05-19 2016-11-10 Robert Bosch Gmbh Method for sensor synchronization
US10142353B2 (en) 2015-06-05 2018-11-27 Cisco Technology, Inc. System for monitoring and managing datacenters
US10536357B2 (en) 2015-06-05 2020-01-14 Cisco Technology, Inc. Late data detection in data center
US9967158B2 (en) 2015-06-05 2018-05-08 Cisco Technology, Inc. Interactive hierarchical network chord diagram for application dependency mapping
US10921154B2 (en) 2015-07-22 2021-02-16 Hewlett Packard Enterprise Development Lp Monitoring a sensor array
US9836426B2 (en) 2015-08-04 2017-12-05 Honeywell International Inc. SD card based RTU
US10070598B2 (en) * 2015-12-24 2018-09-11 Intel Corporation Intelligent agricultural systems
US10142196B1 (en) * 2016-04-15 2018-11-27 Senseware, Inc. System, method, and apparatus for bridge interface communication
WO2017199972A1 (en) 2016-05-18 2017-11-23 学校法人 関西大学 Position estimation device
US10825263B2 (en) 2016-06-16 2020-11-03 Honeywell International Inc. Advanced discrete control device diagnostic on digital output modules
US10289438B2 (en) 2016-06-16 2019-05-14 Cisco Technology, Inc. Techniques for coordination of application components deployed on distributed virtual machines
CN105988404B (en) * 2016-06-30 2018-12-04 深圳市优必选科技有限公司 Servo control system
US10708183B2 (en) 2016-07-21 2020-07-07 Cisco Technology, Inc. System and method of providing segment routing as a service
US20180063784A1 (en) * 2016-08-26 2018-03-01 Qualcomm Incorporated Devices and methods for an efficient wakeup protocol
US9953474B2 (en) 2016-09-02 2018-04-24 Honeywell International Inc. Multi-level security mechanism for accessing a panel
JP6774093B2 (en) * 2016-11-07 2020-10-21 国立研究開発法人情報通信研究機構 Wireless communication methods and systems, wireless communication programs
US10972388B2 (en) 2016-11-22 2021-04-06 Cisco Technology, Inc. Federated microburst detection
US10708152B2 (en) 2017-03-23 2020-07-07 Cisco Technology, Inc. Predicting application and network performance
US10523512B2 (en) 2017-03-24 2019-12-31 Cisco Technology, Inc. Network agent for generating platform specific network policies
US10594560B2 (en) 2017-03-27 2020-03-17 Cisco Technology, Inc. Intent driven network policy platform
US10764141B2 (en) 2017-03-27 2020-09-01 Cisco Technology, Inc. Network agent for reporting to a network policy system
US10250446B2 (en) 2017-03-27 2019-04-02 Cisco Technology, Inc. Distributed policy store
US10873794B2 (en) 2017-03-28 2020-12-22 Cisco Technology, Inc. Flowlet resolution for application performance monitoring and management
US10749692B2 (en) 2017-05-05 2020-08-18 Honeywell International Inc. Automated certificate enrollment for devices in industrial control systems or other systems
US10680887B2 (en) 2017-07-21 2020-06-09 Cisco Technology, Inc. Remote device status audit and recovery
US10554501B2 (en) 2017-10-23 2020-02-04 Cisco Technology, Inc. Network migration assistant
US10523541B2 (en) 2017-10-25 2019-12-31 Cisco Technology, Inc. Federated network and application data analytics platform
US10594542B2 (en) 2017-10-27 2020-03-17 Cisco Technology, Inc. System and method for network root cause analysis
CA3026918C (en) 2017-12-14 2023-08-01 Veris Industries, Llc Energy metering for a building
US11233821B2 (en) 2018-01-04 2022-01-25 Cisco Technology, Inc. Network intrusion counter-intelligence
JP7065313B2 (en) * 2018-01-15 2022-05-12 パナソニックIpマネジメント株式会社 Detection information communication device, detection information communication system, communication system, wireless communication method and program
US10826803B2 (en) 2018-01-25 2020-11-03 Cisco Technology, Inc. Mechanism for facilitating efficient policy updates
US10798015B2 (en) 2018-01-25 2020-10-06 Cisco Technology, Inc. Discovery of middleboxes using traffic flow stitching
US10999149B2 (en) 2018-01-25 2021-05-04 Cisco Technology, Inc. Automatic configuration discovery based on traffic flow data
US10574575B2 (en) 2018-01-25 2020-02-25 Cisco Technology, Inc. Network flow stitching using middle box flow stitching
US11128700B2 (en) 2018-01-26 2021-09-21 Cisco Technology, Inc. Load balancing configuration based on traffic flow telemetry
DE102018116339B4 (en) * 2018-07-05 2021-11-11 Auma Riester Gmbh & Co. Kg Actuator
US10455394B1 (en) * 2018-09-04 2019-10-22 Williams Sound Holdings II, LLC Bifurcation of PAN functionality
JP2022523564A (en) 2019-03-04 2022-04-25 アイオーカレンツ, インコーポレイテッド Data compression and communication using machine learning
US10832509B1 (en) 2019-05-24 2020-11-10 Ademco Inc. Systems and methods of a doorbell device initiating a state change of an access control device and/or a control panel responsive to two-factor authentication
US10789800B1 (en) 2019-05-24 2020-09-29 Ademco Inc. Systems and methods for authorizing transmission of commands and signals to an access control device or a control panel device
CN110855435B (en) * 2019-11-14 2022-04-19 北京京航计算通讯研究所 Access control method based on attribute cryptosystem in wireless sensor network
TWI701956B (en) * 2019-11-22 2020-08-11 明泰科技股份有限公司 Channel loading pre-adjusting system for 5G wireless communication
US11296966B2 (en) 2019-11-27 2022-04-05 Rockwell Collins, Inc. System and method for efficient information collection and distribution (EICD) via independent dominating sets
US11737121B2 (en) 2021-08-20 2023-08-22 Rockwell Collins, Inc. System and method to compile and distribute spatial awareness information for network
US11726162B2 (en) 2021-04-16 2023-08-15 Rockwell Collins, Inc. System and method for neighbor direction and relative velocity determination via doppler nulling techniques
US11290942B2 (en) * 2020-08-07 2022-03-29 Rockwell Collins, Inc. System and method for independent dominating set (IDS) based routing in mobile AD hoc networks (MANET)
US11665658B1 (en) 2021-04-16 2023-05-30 Rockwell Collins, Inc. System and method for application of doppler corrections for time synchronized transmitter and receiver
US11304084B1 (en) * 2020-10-23 2022-04-12 Rockwell Collins, Inc. System and method for beacon-based passive clustering in mobile ad hoc networks (MANET)
JP7368221B2 (en) * 2019-12-17 2023-10-24 Kyb株式会社 Condition monitoring system
WO2022268346A1 (en) * 2021-06-22 2022-12-29 Lenovo (Singapore) Pte. Ltd Configuring a wireless communication topology
CN113541736B (en) * 2021-07-12 2022-07-05 国网甘肃综合能源服务有限公司 Networking fault maintenance method based on power line carrier communication

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5673031A (en) * 1988-08-04 1997-09-30 Norand Corporation Redundant radio frequency network having a roaming terminal communication protocol
US5940771A (en) * 1991-05-13 1999-08-17 Norand Corporation Network supporting roaming, sleeping terminals
US6374311B1 (en) * 1991-10-01 2002-04-16 Intermec Ip Corp. Communication network having a plurality of bridging nodes which transmit a beacon to terminal nodes in power saving state that it has messages awaiting delivery
US5974236A (en) * 1992-03-25 1999-10-26 Aes Corporation Dynamically reconfigurable communications network and method
US5553076A (en) * 1994-05-02 1996-09-03 Tcsi Corporation Method and apparatus for a wireless local area network
US5790952A (en) * 1995-12-04 1998-08-04 Bell Atlantic Network Services, Inc. Beacon system using cellular digital packet data (CDPD) communication for roaming cellular stations
US5952421A (en) * 1995-12-27 1999-09-14 Exxon Research And Engineering Co. Synthesis of preceramic polymer-stabilized metal colloids and their conversion to microporous ceramics
US5854994A (en) * 1996-08-23 1998-12-29 Csi Technology, Inc. Vibration monitor and transmission system
US6078269A (en) * 1997-11-10 2000-06-20 Safenight Technology Inc. Battery-powered, RF-interconnected detector sensor system
US6255942B1 (en) * 1998-03-19 2001-07-03 At&T Corp. Wireless communications platform
US6385174B1 (en) * 1999-11-12 2002-05-07 Itt Manufacturing Enterprises, Inc. Method and apparatus for transmission of node link status messages throughout a network with reduced communication protocol overhead traffic
US7492248B1 (en) * 2000-01-14 2009-02-17 Symbol Technologies, Inc. Multi-tier wireless communications architecture, applications and methods
US6735448B1 (en) * 2000-11-07 2004-05-11 Hrl Laboratories, Llc Power management for throughput enhancement in wireless ad-hoc networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6192230B1 (en) * 1993-03-06 2001-02-20 Lucent Technologies, Inc. Wireless data communication system having power saving function
US6208247B1 (en) * 1998-08-18 2001-03-27 Rockwell Science Center, Llc Wireless integrated sensor network using multiple relayed communications
US6304556B1 (en) * 1998-08-24 2001-10-16 Cornell Research Foundation, Inc. Routing and mobility management protocols for ad-hoc networks
US6255800B1 (en) * 2000-01-03 2001-07-03 Texas Instruments Incorporated Bluetooth enabled mobile device charging cradle and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1474935A2 *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007520923A (en) * 2003-12-19 2007-07-26 Sony Deutschland GmbH Wireless communication network architecture
JP2007517416A (en) * 2003-12-30 2007-06-28 ノキア コーポレイション Communication system using relay base station with asymmetric data link
US7656829B2 (en) 2004-01-06 2010-02-02 Samsung Electronics Co., Ltd. System and method for determining data transmission path in communication system consisting of nodes
JP2005278185A (en) * 2004-03-23 2005-10-06 Agilent Technol Inc Method for operating sensor network and sensor device
EP1626532B2 (en) 2004-08-09 2021-05-19 CF Magnus LLC Wireless building control architecture
WO2006068675A1 (en) 2004-09-10 2006-06-29 Nivis, Llc System and method for message consolidation in a mesh network
EP1787134A4 (en) * 2004-09-10 2011-04-27 Nivis Llc System and method for message consolidation in a mesh network
US7676195B2 (en) 2004-09-10 2010-03-09 Nivis, Llc System and method for communicating messages in a mesh network
EP1787404A2 (en) * 2004-09-10 2007-05-23 Nivis, LLC System and method for communicating messages in a mesh network
EP1787134A1 (en) * 2004-09-10 2007-05-23 Nivis, LLC System and method for message consolidation in a mesh network
EP1787404A4 (en) * 2004-09-10 2009-10-28 Nivis Llc System and method for communicating messages in a mesh network
EP1803309A2 (en) * 2004-09-30 2007-07-04 Smartvue Corporation Wireless video surveillance system and method
EP1803309A4 (en) * 2004-09-30 2012-04-25 Smartvue Corp Wireless video surveillance system and method
WO2006058966A1 (en) * 2004-12-01 2006-06-08 Selmic Oy Wireless measuring arrangement and a measuring device utilized in it
EP1829291A1 (en) * 2004-12-22 2007-09-05 Mikko Kohvakka Energy efficient wireless sensor network, node devices for the same and a method for arranging communications in a wireless sensor network
EP1829291A4 (en) * 2004-12-22 2014-03-12 Wirepas Oy Energy efficient wireless sensor network, node devices for the same and a method for arranging communications in a wireless sensor network
US9888393B2 (en) 2005-03-10 2018-02-06 Qualcomm Incorporated Method and apparatus for automatic configuration of wireless communication networks
WO2006121114A1 (en) * 2005-05-12 2006-11-16 Yokogawa Electric Corporation Field device control system
WO2007035561A1 (en) * 2005-09-20 2007-03-29 Robert Bosch Gmbh Method and apparatus for adding wireless devices to a security system
AU2006292464B2 (en) * 2005-09-20 2010-12-09 Robert Bosch Gmbh Method and apparatus for adding wireless devices to a security system
EP1770667A2 (en) 2005-09-30 2007-04-04 Robert Bosch GmbH Method for synchronizing devices in wireless networks
EP1770667B1 (en) * 2005-09-30 2018-03-21 Robert Bosch GmbH Method for synchronizing devices in wireless networks
CN101331719B (en) * 2005-12-14 2012-11-07 西门子公司 Method for operating a radio network and subscriber device for said type of network
WO2007068710A1 (en) * 2005-12-14 2007-06-21 Siemens Aktiengesellschaft Method for operating a radio network and subscriber device for said type of network
US10542093B2 (en) 2005-12-22 2020-01-21 The Boeing Company Surveillance network system
WO2007078422A2 (en) * 2005-12-22 2007-07-12 The Boeing Company Surveillance network system
WO2007078422A3 (en) * 2005-12-22 2007-09-27 Boeing Co Surveillance network system
US8681754B2 (en) 2007-09-20 2014-03-25 Yokogawa Electric Corporation Wireless control system
WO2010030346A1 (en) * 2008-09-09 2010-03-18 Dow Agrosciences, Llc Networked pest control system
US10085133B2 (en) 2008-09-09 2018-09-25 Dow Agrosciences Llc Networked pest control system
US8830071B2 (en) 2008-09-09 2014-09-09 Dow Agrosciences, Llc. Networked pest control system
EP3207797B1 (en) * 2008-09-09 2023-06-07 Corteva Agriscience LLC Networked pest control system
US8026822B2 (en) 2008-09-09 2011-09-27 Dow Agrosciences Llc Networked pest control system
US9542835B2 (en) 2008-09-09 2017-01-10 Dow Agrosciences Llc Networked pest control system
EP3207797A1 (en) * 2008-09-09 2017-08-23 Dow AgroSciences LLC Networked pest control system
US8774080B2 (en) 2008-12-12 2014-07-08 Yokogawa Electric Corporation Gateway devices and wireless control network management system using the same
CN104025523A (en) * 2011-10-31 2014-09-03 德国弗劳恩霍夫应用研究促进协会 Apparatus and method for transmitting a message to multiple receivers
US9166908B2 (en) 2011-12-20 2015-10-20 Cisco Technology, Inc. Assisted intelligent routing for minimalistic connected object networks
WO2013096618A1 (en) * 2011-12-20 2013-06-27 Cisco Technology, Inc. Assisted intelligent routing for minimalistic connected object networks
DE102012207858A1 (en) 2012-05-11 2013-11-14 Continental Automotive Gmbh Method for activating deactivated control devices of a vehicle and vehicle network and nodes of the vehicle network
WO2013167618A1 (en) 2012-05-11 2013-11-14 Continental Automotive Gmbh Method for activating de-activated control units of a vehicle and vehicle network and node of said vehicle network
WO2016050993A1 (en) * 2014-09-30 2016-04-07 Casiopea Esm2M, S.L. Method and system for the mass management of wireless devices by means of a self-organising protocol
US9854653B1 (en) 2017-01-31 2017-12-26 Crestron Electronics Inc. Scalable building control system, method, and apparatus

Also Published As

Publication number Publication date
WO2003061175A3 (en) 2003-10-16
JP4230917B2 (en) 2009-02-25
AU2003209207A1 (en) 2003-07-30
JP2005515695A (en) 2005-05-26
EP1474935A2 (en) 2004-11-10
US20030151513A1 (en) 2003-08-14
EP1474935A4 (en) 2010-09-29

Similar Documents

Publication Publication Date Title
US20030151513A1 (en) Self-organizing hierarchical wireless network for surveillance and control
US8194571B2 (en) Protocol for reliable, self-organizing, low-power wireless network for security and building automation systems
US7995467B2 (en) Apparatus and method for adapting to failures in gateway devices in mesh networks
US7983685B2 (en) Method and apparatus for management of a global wireless sensor network
CN101529403B (en) Power management system for a field device on a wireless network
EP2140615B1 (en) Increasing reliability and reducing latency in a wireless network
US20040121792A1 (en) Multi-protocol network and method of switching protocols
JP2005515695A5 (en)
JP6948320B2 (en) Mesh network connectivity
Nieberg et al. Collaborative algorithms for communication in wireless sensor networks
EP2529562B1 (en) I/o driven node commissioning in a sleeping mesh network
EP2165468A1 (en) Method for managing the transfer of information packets across a wireless network and routing nodes implementing it
Chang Wireless sensor networks and applications
US20230044362A1 (en) Decentralized home sensor network
KR100953056B1 (en) Apparatus for selecting working time slot in mesh network
CN112423364A (en) Wireless mobile ad hoc communication method and system
Mishra et al. Wireless sensor networks design issues
Ferrari et al. On improving reliability of star-topology Wireless Sensor Network
Sharma et al. Balanced energy consumption model using clustering and energy consumption trade-off in wireless sensor networks
Willig Drahtlose Sensornetze: Konzept, Herausforderungen und Methoden [Wireless sensor networks: concept, challenges and methods]
Shiral et al. Adaptive Duty-Cycle-Aware using multihopping in WSN

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AU CN IN JP US

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT SE SI SK TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1989/DELNP/2004

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2003561140

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2003707354

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2003707354

Country of ref document: EP