WO2014102797A1 - Distributed business intelligence system and method of operation thereof - Google Patents


Info

Publication number
WO2014102797A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
image
video
display
actuator
Prior art date
Application number
PCT/IL2013/051080
Other languages
French (fr)
Inventor
Shalom Nakdimon
Original Assignee
Wiseye Video System Ltd.
Priority date
Filing date
Publication date
Application filed by Wiseye Video System Ltd. filed Critical Wiseye Video System Ltd.
Publication of WO2014102797A1 publication Critical patent/WO2014102797A1/en


Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C11/00 Arrangements, systems or apparatus for checking, e.g. the occurrence of a condition, not provided for elsewhere
    • G07C2011/04 Arrangements, systems or apparatus for checking, e.g. the occurrence of a condition, not provided for elsewhere, related to queuing systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A distributed Business Intelligence (BI) system in a store, and a method of operating it, are disclosed. The system includes multiple BI devices interconnected over one or more networks. Each BI device includes a sensor for capturing data, and a BI insight is generated based on analysis of the captured data, such as counting people in a queue at the Point-of-Sale (PoS). The BI insight triggers a BI command sent to a BI device having an actuator, which generates an action in response to the BI command. The sensor may be a microphone or a video camera, and the system may include voice or image processing (such as video analytics) for generating the BI insight. Redundancy may be provided by using multiple sensors or actuators, or by using multiple data paths. The network may be wired or wireless, and may be a BAN, PAN, LAN, or WAN.

Description

DISTRIBUTED BUSINESS INTELLIGENCE SYSTEM
AND METHOD OF OPERATION THEREOF
FIELD
The present invention relates to the field of distributed computing devices, and in particular distributed business systems, such as distributed Business Intelligence (BI) systems.
BACKGROUND
The Internet is a global system of interconnected computer networks that use the standardized Internet Protocol Suite (TCP/IP), including the Transmission Control Protocol (TCP) and the Internet Protocol (IP), to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic and optical networking technologies. The Internet carries a vast range of information resources and services, such as the interlinked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail. The Internet backbone refers to the principal data routes between large, strategically interconnected networks and core routers in the Internet. These data routes are hosted by commercial, government, academic and other high-capacity network centers, the Internet exchange points and network access points that interchange Internet traffic between the countries, continents and across the oceans of the world. Internet service providers (often Tier 1 networks) participating in the Internet backbone exchange traffic under privately negotiated interconnection agreements, primarily governed by the principle of settlement-free peering.
The Internet Protocol (IP) is the principal communications protocol used for relaying datagrams (packets) across a network using the Internet Protocol Suite. Responsible for routing packets across network boundaries, it is the primary protocol that establishes the Internet. IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering datagrams from the source host to the destination host based on their addresses. For this purpose, IP defines addressing methods and structures for datagram encapsulation. Internet Protocol Version 4 (IPv4) is the dominant protocol of the Internet. IPv4 is described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 791 and RFC 1349, and its successor, Internet Protocol Version 6 (IPv6), is currently active and in growing deployment worldwide. IPv4 uses 32-bit addresses (providing about 4.3×10⁹ addresses), while IPv6 uses 128-bit addresses (providing 340 undecillion, or 3.4×10³⁸, addresses), as described in RFC 2460.
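The address-space figures quoted above follow directly from the address widths; a quick arithmetic check (illustrative only, not part of the specification text):

```python
# IPv4 uses 32-bit addresses and IPv6 uses 128-bit addresses, so the counts
# quoted in the text (about 4.3 * 10**9 and 3.4 * 10**38) follow directly.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")         # 4,294,967,296
print(f"IPv6: about {ipv6_addresses:.1e} addresses")
```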
The Internet Protocol is responsible for addressing hosts and routing datagrams (packets) from a source host to the destination host across one or more IP networks. For this purpose, the Internet Protocol defines an addressing system that has two functions: addresses identify hosts and provide a logical location service. Each packet is tagged with a header that contains the meta-data needed for delivery. This process of tagging is also called encapsulation. IP is a connectionless protocol for use in a packet-switched Link Layer network, and does not need circuit setup prior to transmission. Guaranteed delivery, proper sequencing, avoidance of duplicate delivery, and data integrity are addressed by an upper transport-layer protocol (e.g., TCP - Transmission Control Protocol and UDP - User Datagram Protocol).
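The encapsulation described above can be sketched by packing and then unpacking a minimal IPv4 header. The field layout follows RFC 791; the addresses are hypothetical documentation addresses, and the checksum is left at zero for simplicity:

```python
import struct
import socket

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Pack a minimal 20-byte IPv4 header (no options, zero checksum)."""
    version_ihl = (4 << 4) | 5           # version 4, header length 5 * 4 = 20 bytes
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,    # version/IHL, DSCP/ECN, total length
        0, 0,                            # identification, flags/fragment offset
        64, 6, 0,                        # TTL, protocol (6 = TCP), checksum (0)
        socket.inet_aton(src), socket.inet_aton(dst),
    )

def parse_addresses(header: bytes) -> tuple:
    """Recover the source and destination addresses from the header."""
    src, dst = struct.unpack("!4s4s", header[12:20])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst)

hdr = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=100)
print(parse_addresses(hdr))   # ('192.0.2.1', '198.51.100.7')
```

The two address fields are all an intermediate router needs to deliver the datagram, which is exactly the "logical location service" role the text describes.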
A Wireless Mesh Network (WMN) and Wireless Distribution Systems (WDS) are known in the art to be a communication network made up of clients, mesh routers and gateways organized in a mesh topology and connected using radio. Such wireless networks may be based on DSR as the routing protocol. WMNs are standardized in IEEE 802.11s and described in a slide-show by W. Steven Conner, Intel Corp. et al. entitled: "IEEE 802.11s Tutorial", presented at the IEEE 802 Plenary, Dallas, on Nov. 13, 2006, in a slide-show by Eugen Borcoci of University Politehnica Bucharest, entitled: "Wireless Mesh Networks Technologies: Architectures, Protocols, Resource Management and Applications", presented at the INFOWARE Conference on August 22-29, 2009 in Cannes, France, and in an IEEE Communications Magazine paper by Joseph D. Camp and Edward W. Knightly of Electrical and Computer Engineering, Rice University, Houston, TX, USA, entitled: "The IEEE 802.11s Extended Service Set Mesh Networking Standard", which are incorporated in their entirety for all purposes as if fully set forth herein. The arrangement described herein can be equally applied to such wireless networks, wherein two clients exchange information using different paths by using mesh routers as intermediate and relay servers. Commonly in wireless networks, the routing is based on MAC addresses. Hence, the above discussion relating to IP addresses applies in such networks to using the MAC addresses for identifying the client originating the message, the mesh routers (or gateways) serving as the relay servers, and the client serving as the ultimate destination computer.
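The different-paths exchange described above can be sketched as a search over a mesh graph whose node identifiers stand in for MAC addresses (the topology and names below are hypothetical). A breadth-first search finds a primary route through the mesh routers, and banning its relays then yields a node-disjoint backup route:

```python
from collections import deque

# Hypothetical mesh: node ids stand in for MAC addresses; edges are radio links.
mesh = {
    "client-A": ["router-1", "router-2"],
    "router-1": ["client-A", "router-3"],
    "router-2": ["client-A", "router-4"],
    "router-3": ["router-1", "client-B"],
    "router-4": ["router-2", "client-B"],
    "client-B": ["router-3", "router-4"],
}

def shortest_path(graph, src, dst, banned=frozenset()):
    """BFS over the mesh, skipping any 'banned' intermediate nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in banned:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

primary = shortest_path(mesh, "client-A", "client-B")
# Ban the primary path's relays to find a node-disjoint backup route.
backup = shortest_path(mesh, "client-A", "client-B", banned=set(primary[1:-1]))
print(primary)  # ['client-A', 'router-1', 'router-3', 'client-B']
print(backup)   # ['client-A', 'router-2', 'router-4', 'client-B']
```

Sending the same message over both routes is one way to realize the data-path redundancy mentioned in the abstract.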
The Internet architecture employs a client-server model, among other arrangements. The terms 'server' or 'server computer' relate herein to a device or computer (or a plurality of computers) connected to the Internet and used for providing facilities or services to other computers or other devices (referred to in this context as 'clients') connected to the Internet. A server is commonly a host that has an IP address and executes a 'server program', and typically operates as a socket listener. Many servers have dedicated functionality, such as web server, Domain Name System (DNS) server (described in RFC 1034 and RFC 1035), Dynamic Host Configuration Protocol (DHCP) server (described in RFC 2131 and RFC 3315), mail server, File Transfer Protocol (FTP) server and database server. Similarly, the term 'client' herein refers to a program, or to a device or a computer (or a series of computers) executing this program, which accesses a server over the Internet for a service or a resource. Clients commonly initiate connections that a server may accept. As a non-limiting example, web browsers are clients that connect to web servers for retrieving web pages, and email clients connect to mail storage servers for retrieving mail.
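The listener/initiator split described above can be sketched with a minimal TCP echo exchange using Python's socket module; the port is ephemeral and the payload is an arbitrary placeholder, not a real protocol:

```python
import socket
import threading

def serve_once(listener: socket.socket) -> None:
    """Server role: accept one client connection and echo its request back."""
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

# Server side: bind to an ephemeral localhost port and operate as a socket listener.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

# Client side: initiate the connection and request a resource.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET /page")
    reply = client.recv(1024)
listener.close()
print(reply.decode())  # echo: GET /page
```

The client initiates; the server only accepts, which is the asymmetry the text describes.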
Software as a Service (SaaS) is a Software Application (SA) supplied by a service provider, namely a SaaS vendor. The service is supplied and consumed over the Internet, thus eliminating the requirement to install and run the application locally at a customer's site, as well as simplifying maintenance and support. It is particularly advantageous in large-scale business applications. Licensing is a common form of billing for the service, and it is paid periodically. SaaS is becoming ever more common as a form of SA delivery over the Internet and is being facilitated by a technology infrastructure called "Cloud Computing". In this form of SA delivery, where the SA is controlled by a service provider, a customer may experience stability and data security issues. In many cases the customer is a business organization that is using the SaaS for business purposes such as business software; hence, stability and data security are primary requirements.
The term "Cloud computing" as used herein is defined as a technology infrastructure facilitating the supplement, consumption and delivery of IT services. The IT services are Internet based and may involve elastic provisioning of dynamically scalable and virtualized resources. The term "Software as a Service (SaaS)" as used herein in this application is defined as a model of software deployment whereby a provider licenses an SA to customers for use as a service on demand. The term "customer" as used herein in this application is defined as a business entity that is served by an SA provided on the SaaS platform. A customer may be a person or an organization, and may be represented by a user who is responsible for the administration of the application in aspects of permissions configuration, user-related configuration, and data security policy.
The term "SaaS Platform" as used herein in this application is defined as a computer program that acts as a host to SAs that reside on it. Essentially, a SaaS platform can be considered a type of specialized SA server. The platform manages the underlying computer hardware and software resources, and uses these resources to provide hosted SAs with the multi-tenancy and on-demand capabilities commonly found in SaaS applications. Generally, the hosted SAs are compatible with the SaaS platform and support a single group of users. The platform holds the responsibility for distributing the SA as a service to multiple groups of users over the Internet. The SaaS Platform can be considered a layer of abstraction above the traditional application server, creating a computing platform that parallels the value offered by the traditional operating system, only in a web-centric fashion. The SaaS platform responds to software developers' requirements to reduce the time and difficulty involved in developing highly available SAs and on-demand, enterprise-grade business SAs.
ZigBee is a specification for a suite of high-level communication protocols using small, low-power digital radios based on an IEEE 802 standard for personal area networks. Applications include wireless light switches, electrical meters with in-home displays, and other consumer and industrial equipment that require short-range wireless transfer of data at relatively low rates. The technology defined by the ZigBee specification is intended to be simpler and less expensive than other WPANs, such as Bluetooth. ZigBee is targeted at radio-frequency (RF) applications that require a low data rate, long battery life, and secure networking. ZigBee has a defined rate of 250 kbps, suited for periodic or intermittent data or a single signal transmission from a sensor or input device.
ZigBee builds upon the physical layer and medium access control defined in IEEE standard 802.15.4 (2003 version) for low-rate WPANs. The specification goes on to complete the standard by adding four main components: network layer, application layer, ZigBee Device Objects (ZDOs) and manufacturer-defined application objects, which allow for customization and favor total integration. Besides adding two high-level network layers to the underlying structure, the most significant improvement is the introduction of ZDOs. These are responsible for a number of tasks, which include keeping track of device roles, management of requests to join a network, device discovery and security. Because ZigBee nodes can go from sleep to active mode in 30 ms or less, the latency can be low and devices can be responsive, particularly compared to Bluetooth wake-up delays, which are typically around three seconds. ZigBee nodes can sleep most of the time, thus average power consumption can be lower, resulting in longer battery life.
There are three different types of ZigBee devices. The ZigBee Coordinator (ZC) is the most capable device; the coordinator forms the root of the network tree and might bridge to other networks. There is exactly one ZigBee Coordinator in each network, since it is the device that started the network originally. It is able to store information about the network, including acting as the Trust Center and repository for security keys. A ZigBee Router (ZR) may run an application function as well as act as an intermediate router, passing on data from other devices. A ZigBee End Device (ZED) contains just enough functionality to talk to its parent node (either the coordinator or a router). This relationship allows the node to be asleep a significant amount of the time, thereby giving long battery life. A ZED requires the least amount of memory, and therefore can be less expensive to manufacture than a ZR or ZC.
The protocols build on recent algorithmic research (Ad-hoc On-demand Distance Vector, neuRFon) to automatically construct a low-speed ad-hoc network of nodes. In most large network instances, the network will be a cluster of clusters. It can also form a mesh or a single cluster. The current ZigBee protocols support beacon and non-beacon enabled networks. In non-beacon-enabled networks, an unslotted CSMA/CA channel access mechanism is used. In this type of network, ZigBee Routers typically have their receivers continuously active, requiring a more robust power supply. However, this allows for heterogeneous networks in which some devices receive continuously, while others only transmit when an external stimulus is detected.
In beacon-enabled networks, the special network nodes called ZigBee Routers transmit periodic beacons to confirm their presence to other network nodes. Nodes may sleep between the beacons, thus lowering their duty cycle and extending their battery life. Beacon intervals depend on the data rate; they may range from 15.36 milliseconds to 251.65824 seconds at 250 Kbit/s, from 24 milliseconds to 393.216 seconds at 40 Kbit/s, and from 48 milliseconds to 786.432 seconds at 20 Kbit/s. In general, the ZigBee protocols minimize the time the radio is on, so as to reduce power use. In beaconing networks, nodes only need to be active while a beacon is being transmitted. In non-beacon-enabled networks, power consumption is decidedly asymmetrical: some devices are always active, while others spend most of their time sleeping.
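The interval ranges quoted above are consistent with a base beacon interval doubled once per unit of beacon order, up to order 14; that exponent cap is an assumption drawn from the IEEE 802.15.4 superframe structure rather than stated in the text. A quick check:

```python
# Beacon interval sketch: the minimum interval quoted per data rate serves as
# the base, scaled by 2**BO for beacon order BO in 0..14 (802.15.4 assumption).
def beacon_interval_s(base_s: float, beacon_order: int) -> float:
    return base_s * 2 ** beacon_order

# Minimum (BO = 0) and maximum (BO = 14) intervals per data rate from the text.
for rate_kbps, base_s in [(250, 0.01536), (40, 0.024), (20, 0.048)]:
    print(rate_kbps, "Kbit/s:", beacon_interval_s(base_s, 0), "to",
          beacon_interval_s(base_s, 14), "seconds")
```

Scaling each minimum by 2**14 = 16384 reproduces the three maxima quoted in the text exactly, which supports the reading that all three ranges come from the same beacon-order mechanism.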
Except for the Smart Energy Profile 2.0, current ZigBee devices conform to the IEEE 802.15.4-2003 Low-Rate Wireless Personal Area Network (LR-WPAN) standard. The standard specifies the lower protocol layers— the PHYsical layer (PHY), and the Media Access Control (MAC) portion of the Data Link Layer (DLL). The basic channel access mode is "Carrier Sense, Multiple Access / Collision Avoidance" (CSMA/CA). That is, the nodes talk in the same way that people converse; they briefly check to see that no one is talking before they start. There are three notable exceptions to the use of CSMA. Beacons are sent on a fixed timing schedule, and do not use CSMA. Message acknowledgments also do not use CSMA. Finally, devices in Beacon Oriented networks that have low latency real-time requirements may also use Guaranteed Time Slots (GTS), which by definition do not use CSMA.
Z-Wave is a wireless communications protocol by the Z-Wave Alliance (http://www.z-wave.com) designed for home automation, specifically for remote control applications in residential and light commercial environments. The technology uses a low-power RF radio embedded or retrofitted into home electronics devices and systems, such as lighting, access control, home entertainment systems and household appliances. Z-Wave communicates using a low-power wireless technology designed specifically for remote control applications. Z-Wave operates in the sub-gigahertz frequency range, around 900 MHz. This band competes with some cordless telephones and other consumer electronics devices, but avoids interference with WiFi and other systems that operate on the crowded 2.4 GHz band. Z-Wave is designed to be easily embedded in consumer electronics products, including battery-operated devices such as remote controls, smoke alarms and security sensors.
Z-Wave is a mesh networking technology where each node or device on the network is capable of sending and receiving control commands through walls or floors, and uses intermediate nodes to route around household obstacles or radio dead spots that might occur in the home. Z-Wave devices can work individually or in groups, and can be programmed into scenes or events that trigger multiple devices, either automatically or via remote control. The Z-Wave radio specifications include a bandwidth of 9,600 bit/s or 40 Kbit/s, fully interoperable, GFSK modulation, and a range of approximately 100 feet (or 30 meters) assuming "open air" conditions, with reduced range indoors depending on building materials, etc. The Z-Wave radio uses the 900 MHz ISM band: 908.42 MHz (United States); 868.42 MHz (Europe); 919.82 MHz (Hong Kong); 921.42 MHz (Australia/New Zealand).
Z-Wave uses a source-routed mesh network topology and has one or more master controllers that control routing and security. The devices can communicate with one another by using intermediate nodes to actively route around and circumvent household obstacles or radio dead spots that might occur. A message from node A to node C can be successfully delivered even if the two nodes are not within range, provided that a third node B can communicate with nodes A and C. If the preferred route is unavailable, the message originator will attempt other routes until a path is found to the "C" node. Therefore, a Z-Wave network can span much farther than the radio range of a single unit; however, with several of these hops a delay may be introduced between the control command and the desired result. In order for Z-Wave units to be able to route unsolicited messages, they cannot be in sleep mode. Therefore, most battery-operated devices are not designed as repeater units. A Z-Wave network can consist of up to 232 devices, with the option of bridging networks if more devices are required.
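The fallback behavior described above (try the preferred route, then alternates through relays) can be sketched as follows; the node names and link states are hypothetical stand-ins for the A/B/C example in the text:

```python
# Source-routed delivery sketch: the originator holds candidate routes and
# tries each in order until one whose every hop is a working link succeeds.
working_links = {("A", "B"), ("B", "C")}   # A cannot reach C directly

def route_works(route) -> bool:
    """A route succeeds only if every consecutive hop is a working link."""
    return all((a, b) in working_links for a, b in zip(route, route[1:]))

def deliver(routes):
    """Return the first candidate route that succeeds, or None."""
    for route in routes:
        if route_works(route):
            return route
    return None

# The preferred direct route A -> C fails, so the originator falls back to
# relaying via B, just as in the text's example.
print(deliver([("A", "C"), ("A", "B", "C")]))  # ('A', 'B', 'C')
```

Because the full route travels with the message, only the originator needs to know the topology; the relays simply forward to the next listed hop.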
A popular approach to networking in an office or enterprise environment is communication via a radio frequency (RF) distribution system that transports RF signals throughout a building to and from data devices. Commonly referred to as a Wireless Local Area Network (WLAN), such communication makes use of the Industrial, Scientific and Medical (ISM) frequency spectrum. In the US, three of the bands within the ISM spectrum are the A band, 902-928 MHz; the B band, 2.4-2.484 GHz (a.k.a. 2.4 GHz); and the C band, 5.725-5.875 GHz (a.k.a. 5 GHz). Overlapping and / or similar bands are used in different regions such as Europe and Japan. In order to allow interoperability between equipment manufactured by different vendors, a few WLAN standards have evolved as part of the IEEE 802.11 standard group, branded as WiFi (www.wi-fi.org). IEEE 802.11b describes communication using the 2.4 GHz frequency band and supporting a communication rate of 11 Mb/s, IEEE 802.11a uses the 5 GHz frequency band to carry 54 Mb/s, and IEEE 802.11g uses the 2.4 GHz band to support 54 Mb/s.
A node / client with a WLAN interface is commonly referred to as a STA (Wireless Station / Wireless client). The STA functionality may be embedded as part of the data unit, or alternatively be a dedicated unit, referred to as a bridge, coupled to the data unit. While STAs may communicate without any additional hardware (ad-hoc mode), such a network usually involves a Wireless Access Point (a.k.a. WAP or AP) as a mediation device. The WAP implements the Basic Service Set (BSS) and / or ad-hoc mode based on an Independent BSS (IBSS). STA, client, bridge and WAP will be collectively referred to herein as a WLAN unit.
Bandwidth allocation for IEEE 802.11g wireless in the U.S. allows multiple communication sessions to take place simultaneously, where eleven overlapping channels are defined, spaced 5 MHz apart, spanning from 2412 MHz as the center frequency for channel number 1, via channel 2 centered at 2417 MHz, through 2457 MHz as the center frequency for channel number 10, up to channel 11 centered at 2462 MHz. Each channel bandwidth is 22 MHz, symmetrically (±11 MHz) located around the center frequency. In the transmission path, first the baseband signal (IF) is generated based on the data to be transmitted, using a QAM (Quadrature Amplitude Modulation) based OFDM (Orthogonal Frequency Division Multiplexing) modulation technique, resulting in a 22 MHz (single channel wide) frequency band signal. The signal is then up-converted to the 2.4 GHz (RF) band, placed in the center frequency of the required channel, and transmitted to the air via the antenna. Similarly, the receiving path comprises a received channel in the RF spectrum, down-converted to the baseband (IF), wherein the data is then extracted.
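The channel plan above reduces to center = 2407 + 5 × channel MHz with 22 MHz-wide channels; a small sketch showing why adjacent channels overlap while the familiar 1/6/11 set does not (the overlap test is a simplification that compares center spacing to one channel bandwidth):

```python
def center_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz band channel: 2412 MHz for channel 1,
    with successive channels spaced 5 MHz apart."""
    return 2407 + 5 * channel

def overlaps(ch_a: int, ch_b: int, width_mhz: int = 22) -> bool:
    """Two 22 MHz-wide channels overlap when their centers are closer
    than one channel bandwidth."""
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

print(center_mhz(1), center_mhz(10), center_mhz(11))  # 2412 2457 2462
print(overlaps(1, 2))   # True: adjacent channels overlap
print(overlaps(1, 6))   # False: channels 1, 6 and 11 are mutually clear
```

This is why simultaneous sessions in one area are normally assigned channels at least five numbers apart.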
FIG. 1A schematically shows a block diagram that illustrates a system 100 including a computer system 110 and the associated Internet 113 connection. Such a configuration is typically used for computers (hosts) connected to the Internet 113 and executing server or client (or a combination) software. The system 110 may be used as a portable electronic device such as a notebook / laptop computer, a media player (e.g., MP3 based or video player), a desktop computer, a cellular phone, a Personal Digital Assistant (PDA), an image processing device (e.g., a digital camera or video recorder), and / or any other handheld or fixed location computing device, or a combination of any of these devices. Note that while FIG. 1A illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components, as such details are not germane. It will also be appreciated that network computers, handheld computers, cell phones and other data processing systems which have fewer components or perhaps more components may also be used. The computer system of FIG. 1A may, for example, be an Apple Macintosh computer or PowerBook, or an IBM compatible PC. Computer system 110 includes a bus 120, an interconnect, or other communication mechanism for communicating information, and a processor 117, commonly in the form of an integrated circuit, coupled with bus 120 for processing information and for executing the computer executable instructions. Computer system 110 also includes a main memory 122, such as a Random Access Memory (RAM) or other dynamic storage device, coupled to bus 120 for storing information and instructions to be executed by processor 117. Main memory 122 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 117.
Computer system 110 further includes a Read Only Memory (ROM) 121 (or other non-volatile memory) or other static storage device coupled to bus 120 for storing static information and instructions for processor 117. A storage device 123, such as a magnetic disk or optical disk, a hard disk drive (HDD) for reading from and writing to a hard disk, a magnetic disk drive for reading from and writing to a magnetic disk, and/or an optical disk drive (such as DVD) for reading from and writing to a removable optical disk, is coupled to bus 120 for storing information and instructions. The hard disk drive, magnetic disk drive, and optical disk drive may be connected to the system bus by a hard disk drive interface, a magnetic disk drive interface, and an optical disk drive interface, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the general purpose computing devices. Typically, computer system 110 includes an Operating System (OS) stored in non-volatile storage for managing the computer resources, providing the applications and programs with access to the computer resources and interfaces. An operating system commonly processes system data and user input, and responds by allocating and managing tasks and internal system resources, such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating networking and managing files. Non-limiting examples of operating systems are Microsoft Windows, Mac OS X, and Linux.
Computer system 130 may be coupled via bus 140 to a display 134, such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a flat screen monitor, a touch screen monitor or similar means for displaying text and graphical data to a user. The display may be connected via a video adapter for supporting the display. The display allows a user to view, enter, and/or edit information that is relevant to the operation of the system. An input device 135, including alphanumeric and other keys, is coupled to bus 140 for communicating information and command selections to processor 137. Another type of user input device is cursor control 136, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 137 and for controlling cursor movement on display 134. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
The computer system 130 may be used for implementing the methods and techniques described herein. According to one embodiment, those methods and techniques are performed by computer system 130 in response to processor 137 executing one or more sequences of one or more instructions contained in main memory 142. Such instructions may be read into main memory 142 from another computer-readable medium, such as storage device 143. Execution of the sequences of instructions contained in main memory 142 causes processor 137 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the arrangement. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term "computer-readable medium" (or "machine-readable medium") as used herein is an extensible term that refers to any medium or any memory that participates in providing instructions to a processor (such as processor 137) for execution, or any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). Such a medium may store computer-executable instructions to be executed by a processing element and/or control logic, and data which is manipulated by a processing element and/or control logic, and may take many forms, including but not limited to, non-volatile medium, volatile medium, and transmission medium. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 140. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications, or other forms of propagating signals (e.g., carrier waves, infrared signals, digital signals, etc.). Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch-cards, paper-tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to processor 137 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 130 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector can receive the data carried in the infrared signal and appropriate circuitry can place the data on bus 140. Bus 140 carries the data to main memory 142, from which processor 137 retrieves and executes the instructions. The instructions received by main memory 142 may optionally be stored on storage device 143 either before or after execution by processor 137.
Computer system 130 commonly includes a communication interface 139 coupled to bus 140. Communication interface 139 provides a two-way data communication coupling to a network link 138 that is connected to a local network 131. For example, communication interface 139 may be an Integrated Services Digital Network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another non-limiting example, communication interface 139 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. For example, an Ethernet-based connection based on the IEEE 802.3 standard may be used, such as 10/100BaseT, 1000BaseT (gigabit Ethernet), 10 gigabit Ethernet (10GE, 10GbE, or 10 GigE per the IEEE Std 802.3ae-2002 standard), 40 Gigabit Ethernet (40GbE), or 100 Gigabit Ethernet (100GbE as per Ethernet standard IEEE P802.3ba), as described in Cisco Systems, Inc. Publication number 1-587005-001-3 (6/99), "Internetworking Technologies Handbook", Chapter 7: "Ethernet Technologies", pages 7-1 to 7-38, which is incorporated in its entirety for all purposes as if fully set forth herein. In such a case, the communication interface 139 typically includes a LAN transceiver or a modem, such as the Standard Microsystems Corporation (SMSC) LAN91C111 10/100 Ethernet transceiver, described in the SMSC "LAN91C111 10/100 Non-PCI Ethernet Single Chip MAC + PHY" Data-Sheet, Rev. 15 (02-20-04), which is incorporated in its entirety for all purposes as if fully set forth herein.
In one non-limiting example, the communication is based on a LAN communication, such as Ethernet, and may be partly or in full in accordance with the IEEE 802.3 standard. For example, Gigabit Ethernet (GbE or 1 GigE) may be used, describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second (1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard. There are five physical layer standards for gigabit Ethernet using optical fiber (1000BASE-X), twisted pair cable (1000BASE-T), or balanced copper cable (1000BASE-CX). The IEEE 802.3z standard includes 1000BASE-SX for transmission over multi-mode fiber, 1000BASE-LX for transmission over single-mode fiber, and the nearly obsolete 1000BASE-CX for transmission over balanced copper cabling. These standards use 8b/10b encoding, which inflates the line rate by 25%, from 1000 Mbit/s to 1250 Mbit/s, to ensure a DC-balanced signal. The symbols are then sent using NRZ. The IEEE 802.3ab standard, which defines the widely used 1000BASE-T interface type, uses a different encoding scheme in order to keep the symbol rate as low as possible, allowing transmission over twisted pair. Similarly, 10 gigabit Ethernet (10GE, 10GbE, or 10 GigE) may be used, which is a version of Ethernet with a nominal data rate of 10 Gbit/s (billion bits per second), ten times faster than gigabit Ethernet. The 10 gigabit Ethernet standard defines only full-duplex point-to-point links, which are generally connected by network switches. The 10 gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards. A networking device may support different PHY types through pluggable PHY modules, such as those based on SFP+.
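The 25% line-rate overhead noted above follows directly from the 8b/10b mapping of every 8 payload bits onto a 10-bit code group, as the following short calculation illustrates (the function name is illustrative only, not part of any standard API):

```python
# 8b/10b encoding maps every 8 payload bits onto a 10-bit code group,
# so the on-the-wire symbol rate is the payload rate multiplied by 10/8.

def line_rate(payload_bps: int, data_bits: int = 8, symbol_bits: int = 10) -> float:
    """Return the line (symbol) rate for a block line code such as 8b/10b."""
    return payload_bps * symbol_bits / data_bits

gbe_payload = 1_000_000_000            # Gigabit Ethernet payload: 1000 Mbit/s
print(line_rate(gbe_payload))          # 1250 Mbit/s on the wire, i.e., 25% overhead
```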
Any communication or connection herein, such as the connection of peripherals in general, and memories in particular, to a processor, may use a bus. A communication link (such as Ethernet, or any other LAN, PAN or WAN communication link) may also be regarded as a bus herein. A bus may be an internal bus, an external bus, or both. A bus may be a parallel or a bit-serial bus. A bus may be based on a single or on multiple serial links or lanes. The bus medium may be based on electrical conductors, such as wires or cables, or may be based on a fiber-optic cable. The bus topology may use point-to-point, multi-drop (electrical parallel), or daisy-chain, and may be based on hubs or switches. A point-to-point bus may be full-duplex or half-duplex. Further, a bus may use proprietary specifications, or may be based on, similar to, substantially or fully compliant to an industry standard (or any variant thereof), and may be hot-pluggable. A bus may be defined to carry only digital data signals, or may also be defined to carry a power signal (commonly DC voltages), either in separated and dedicated cables and connectors, or may carry the power and digital data together over the same cable. A bus may support master / slave configuration. A bus may carry a separated and dedicated timing signal or may use a self-clocking line-code.
A 'memory' herein may be a random-accessed or a sequentially-accessed memory, may be location-based, and can be written multiple times. The memory may be volatile and based on a semiconductor storage medium, such as RAM, SRAM, DRAM, TTRAM, or Z-RAM. The memory may be non-volatile and based on a semiconductor storage medium, such as ROM, PROM, EPROM, or EEPROM, and may be Flash-based, such as an SSD drive or a USB 'Thumb' drive. The memory may be based on a non-volatile magnetic storage medium, such as an HDD. The memory may be based on an optical storage medium that is recordable and removable, and may include an optical disk drive. The storage medium may be: CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE, CD-ROM, BD-ROM, or DVD-ROM. The memory form factor may be an IC, a PCB on which one or more ICs are mounted, or a box-shaped enclosure.
The communication may be based on a PAN, a LAN, or a WAN communication link, may use private or public networks, and may be packet-based or circuit-switched. The first bus or the second bus (or both) may each be based on Ethernet and may be substantially compliant with the IEEE 802.3 standard, and be based on one out of: 100BaseT/TX, 1000BaseT/TX, 10 gigabit Ethernet substantially (or in full) according to the IEEE Std 802.3ae-2002 standard, 40 Gigabit Ethernet, and 100 Gigabit Ethernet substantially according to the IEEE P802.3ba standard. The first bus or the second bus (or both) may each be based on a multi-drop, a daisy-chain topology, or a point-to-point connection, use half-duplex or full-duplex, and may employ a master / slave scheme. The first bus or the second bus (or both) may each be a wired, point-to-point, and bit-serial bus, where a timing, clocking, or strobing signal is carried over dedicated wires, or using a self-clocking scheme. Each of the buses (or both) may use a fiber-optic cable as the bus medium, and the adapter may comprise a fiber-optic connector for connecting to the fiber-optic cable. The networks or the data paths described herein may be of similar, identical or different geographical scale or coverage types and data rates, such as NFCs, PANs, LANs, MANs, or WANs, or any combination thereof. The networks or the data paths may use similar, identical or different types of modulation, such as Amplitude Modulation (AM), Frequency Modulation (FM), or Phase Modulation (PM), or any combination thereof. The networks or the data paths may use similar, identical or different types of duplexing, such as half- or full-duplex, or any combination thereof. The networks or the data paths may be based on similar, identical or different types of switching, such as circuit-switched or packet-switched, or any combination thereof.
The networks or the data paths may have similar, identical or different ownership or operation, such as private or public networks, or any combination thereof.
Business Intelligence (BI) is a set of theories, methodologies, architectures, and technologies that transform raw data into meaningful and useful information for business purposes. BI may handle large amounts of unstructured data to help identify and develop new opportunities. Making use of new opportunities and implementing an effective strategy may provide a competitive market advantage and long-term stability. Generally, Business Intelligence includes a number of components such as: Multidimensional aggregation and allocation, denormalization, tagging and standardization, reporting with analytical alert, interface with unstructured data source, group consolidation, budgeting and rolling forecast, statistical inference and probabilistic simulation, key performance indicators optimization, version control and process management, and open item management. BI technologies provide historical, current and predictive views of business operations. Common functions of business intelligence technologies are reporting, online analytical processing, analytics, data mining, process mining, complex event processing, business performance management, benchmarking, text mining, predictive analytics and prescriptive analytics. BI typically uses technologies, processes, and applications to analyze mostly internal, structured data and business processes, while competitive intelligence gathers, analyzes and disseminates information with a topical focus on company competitors. Further, business intelligence can include the subset of competitive intelligence.
Business intelligence can be applied to various business purposes, in order to drive business value. In the measurement aspect, the BI may include a program that creates a hierarchy of performance metrics and benchmarking that informs business leaders about progress towards business goals (business process management). In the analytics aspect, the BI may include a program that builds quantitative processes for a business to arrive at optimal decisions and to perform business knowledge discovery. Such analytics frequently involve data mining, process mining, statistical analysis, predictive analytics, predictive modeling, business process modeling, complex event processing, and prescriptive analytics. The BI may include reporting functionality that builds infrastructure for strategic reporting to serve the strategic management of a business, as opposed to operational reporting. Such functionality commonly involves data visualization, executive information systems, and OLAP. A BI system may include a collaboration platform that gets different areas (both inside and outside the business) to work together through data sharing and electronic data interchange. Some BI systems include a knowledge management program that makes the company data-driven through strategies and practices to identify, create, represent, distribute, and enable adoption of insights and experiences that are true business knowledge. Such knowledge management typically supports learning management and regulatory compliance.
BI commonly uses Business Analytics (BA) techniques. BA refers to the skills, technologies, applications and practices for continuous iterative exploration and investigation of past business performance to gain insight and drive business planning. Business analytics focuses on developing new insights and understanding of business performance based on data and statistical methods. In contrast, business intelligence traditionally focuses on using a consistent set of metrics to both measure past performance and guide business planning, which is also based on data and statistical methods. Business analytics makes extensive use of data, statistical and quantitative analysis, explanatory and predictive modeling, and fact-based management to drive decision making. Analytics may be used as input for human decisions or may drive fully automated decisions.
BA commonly includes descriptive analytics, for gaining insight from historical data with reporting, scorecards, clustering, etc.; predictive analytics, based on predictive modeling using statistical and machine learning techniques; prescriptive analytics, for recommending decisions using optimization, simulation, etc.; and decisive analytics, for supporting human decisions with visual analytics that the user models to reflect reasoning.
A queue management system is used to control queues. Commonly in a retail environment (such as supermarkets or banks), queues of people form in various situations and locations in a queue area. There is a variety of measurement technologies that predict and measure queue lengths and waiting times and provide management information to help improve service levels and resource deployment. Automatic queue measurement systems are designed to help managers through enhanced customer service, and by improving efficiency and reducing costs. In one example, people counting sensors are used at entrances and above checkout lanes / queue areas to accurately detect the number and behavior of people in the queue. Built-in predictive algorithms can provide advance notice on how many checkouts or service points will be needed to meet demand. Dashboards, available on a computer monitor or mobile PDA device, are often used to provide a range of information, such as dynamic queue length, waiting time data, and checkout performance on the shop floor. In the event that performance falls towards a minimum service level, the management teams can be automatically alerted beforehand, allowing them time to proactively manage the situation. Key measurements of a queue management system may include the number of people entering the store, queue length, average wait time, operator or teller idle time, and total wait time.
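The key measurements above can be combined into simple derived metrics. The following sketch (function names, rates, and the utilization target are illustrative assumptions, not part of any particular product described herein) estimates average wait time from queue length via Little's law, and the number of checkouts needed to meet demand:

```python
import math

def average_wait_seconds(queue_length: float, service_rate_per_sec: float) -> float:
    """Estimate waiting time from queue length and per-lane service rate
    (Little's law: L = lambda * W, rearranged as W = L / lambda)."""
    return queue_length / service_rate_per_sec

def checkouts_needed(arrival_rate_per_sec: float, service_rate_per_sec: float,
                     max_utilization: float = 0.8) -> int:
    """Minimum number of open checkouts keeping per-lane utilization below a
    target, so that queues stay bounded."""
    return math.ceil(arrival_rate_per_sec / (service_rate_per_sec * max_utilization))

print(average_wait_seconds(5, 1 / 60))   # 5 people, 1 customer/min/lane -> ~300 s
print(checkouts_needed(3 / 60, 1 / 60))  # 3 arrivals/min, 1 served/min/lane -> 4
```

A dashboard of the kind described above could compare such estimates against a minimum service level and alert management before the threshold is crossed.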
A Business Intelligence (BI) system may include one or more computers able to collect information from one or more sources and to analyze the collected data and generate business insights. For example, a server of a store BI system may collect data from one or more cash registers or Point of Sale (PoS) terminals, from video cameras across the store, from a storage room or warehouse storing inventory items, or from other units. The BI server may organize and store the data in a central database and may query the database in order to generate BI insights. For example, a user interface may allow the store manager to obtain statistical data related to the age or gender of customers, to correlate returning customers with a particular salesperson who served them, or the like.
Some BI systems may suffer from inaccurate analysis methods. For example, two parents and their two children may visit a shoe store in order to purchase shoes for one of the two children. The BI system may count four visitors and may incorrectly deduce four customers (or two adult customers), whereas this family of four should have been counted as a single customer. Similarly, when this family of four persons stands in line at the cash register of the store, behind one more customer, the BI system may incorrectly determine that there is a current queue of five persons in line for paying, whereas there are, in fact, only two customers in line. In another example, a crowd of six persons near the cash register may be incorrectly interpreted as a line of six customers waiting for service, whereas four out of those six persons may be that family of four who already paid for the merchandise and are merely packing their purchased items in bags. Inaccurate BI insights may lead to inaccurate human or automated decisions.
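The miscounting described above is essentially a grouping problem: raw person detections must be clustered into customer parties before counting. One naive heuristic, shown purely as an illustrative sketch (and not as the method of the present disclosure), groups detections whose entry timestamps fall within a short time window:

```python
def group_into_parties(entry_times_sec, window_sec=5.0):
    """Cluster entry timestamps into parties: a new party starts whenever
    the gap from the previous person exceeds window_sec."""
    parties = []
    for t in sorted(entry_times_sec):
        if parties and t - parties[-1][-1] <= window_sec:
            parties[-1].append(t)      # close in time: same party as previous person
        else:
            parties.append([t])        # gap too large: a new party begins
    return parties

# A family of four entering together, then a lone customer a minute later,
# yields two customer parties rather than five customers:
print(len(group_into_parties([0.0, 1.2, 2.5, 3.1, 63.0])))  # 2
```

A real system would of course fuse additional cues (shared trajectory, proximity at the register, payment events) rather than entry time alone.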
Conventional BI systems utilize a centralized architecture, in which multiple devices capture raw data and transmit it to a central server. The central server collects the raw data, analyzes it, and generates a BI insight or BI command, which is then transmitted from the central server to the relevant BI device(s). This architecture may be costly and inefficient, may suffer from latency problems, and may unnecessarily require transmission and processing of large volumes of information, thereby consuming bandwidth, storage space, and processing efforts.
Methods and systems for characterizing customers' behavior are described, for example, in U.S. Patent Application Publication 2006/0111961 to McQuivey entitled: "Passive Consumer Survey System" disclosing a passive tracking system that uses RFID tags carried by the participants, in U.S. Patent Application Publication 2007/0219866 to Wolf et al. entitled: "Passive Shopper Identification Systems Utilized to Optimize Advertising" disclosing utilizing a mobile communication device to track consumer shopping patterns and purchases in association with a loyalty tracking device, in U.S. Patent 8,195,499 to Angell et al. entitled: "Identifying Customer Behavioral Types from a Continuous Video Stream for Use in Optimizing Loss Leader Merchandizing" disclosing a method, apparatus, and computer program for optimizing loss leader merchandizing, in U.S. Patent Application Publication 2007/0296817 to Ebrahimi et al. entitled: "Smart Video Surveillance System Ensuring Privacy" disclosing an IP connected video surveillance system designed to protect the privacy of people and goods under surveillance, in U.S. Patent Application Publication 2008/0004748 to Butler et al. entitled: "System and Methods Monitoring Devices, Systems, Users and User Activity at Remote Locations" disclosing systems and methods for monitoring remotely located devices, systems, users and user activities, and in U.S. Patent Application Publication 2006/0010027 to Redman entitled: "Method, System and Program Product for Measuring Customer Preferences and Needs with Traffic Pattern Analysis" disclosing the determining of movement data of a customer or customers in a store to analyze customer decisions and optimize product presentation and customer service in response to the analysis, which are all incorporated in their entirety for all purposes as if fully set forth herein.
Methods and systems for BI are described, for example, in WIPO Patent Application Publication WO 2002/076005A3 to Nwabueze entitled: "Methods for Dynamically Accessing, Processing, and Presenting Data Acquired from Disparate Data Sources" disclosing a method for acquiring and transforming data for business analysis, in U.S. Patent Application Publication 2003/0033179 to Katz et al. entitled: "Method for Generating Customized Alerts to the Procurement, Sourcing, Strategic Sourcing and/or Sale of One or More Items by an Enterprise" disclosing a method for generating customized alerts using a Value Chain Intelligence (VCI) system that enables suppliers and procurement professionals to leverage enterprise and marketplace data in order to potentially improve decision making in a business enterprise, in U.S. Patent 6,965,886 to Govrin et al. entitled: "System and Method for Analyzing and Utilizing Data, by Executing Complex Analytical Models in Real Time" disclosing the collecting, filtering, analyzing, distributing and effectively utilizing highly relevant events in real time from huge quantities of data, in U.S. Patent 8,175,991 to Narayanaswamy et al. entitled: "Business Optimization Engine that Extracts Process Life Cycle Information in Real Time by Inserting Stubs into Business Applications" disclosing optimizing enterprise applications driven by business processes, and in U.S. Patent Application Publication 2005/0055289 to Mehldahl entitled: "Multi-Dimensional Business Information Accounting Software Engine" disclosing software that accepts input of user defined business specific textual or numeric information and conventional financial accounting data and converts them to indexed star schema multi-dimensional computer data as a journal entry, which are all incorporated in their entirety for all purposes as if fully set forth herein.
Methods and systems for tracking are described, for example, in WIPO Patent Application Publication WO 2012/024516 A2 to Jamtgaard et al. entitled: "Target Localization Utilizing Wireless and Camera Sensor Fusion" disclosing the estimation of a target's location by calculating the correlation of Wi-Fi and video location measurements, in WIPO Patent Application Publication WO 2004/034347 A1 to Nemes entitled: "Security System and Process for Monitoring and Controlling the Movement of People and Goods" disclosing a system for surveillance and security control by EAS and RFID and utilizing a camera for linking information, in U.S. Patent Application Publication 2007/0182818 to Buehler entitled: "Object Tracking and Alerts" disclosing an integrated surveillance system combining video surveillance and data from other sensor-based security networks to identify activities that may require attention, in U.S. Patent 7,049,965 to Kelliher et al. entitled: "Surveillance Systems and Methods" disclosing surveillance systems and methods having both a radio frequency component and a video image, and in U.S. Patent 8,570,373 to Variyath et al. entitled: "Tracking an Object Utilizing Location Information Associated with a Wireless Device" disclosing a method of tracking an object carrying a wireless location device that comprises recording and storing images from a plurality of cameras corresponding to respective coverage areas having predetermined locations, which are all incorporated in their entirety for all purposes as if fully set forth herein.
Reference is made to FIG. 1B, which is a schematic block diagram illustration of a prior art BI system 100 having a centralized topology. System 100 may include a central BI server 101, a central BI database 102, and multiple BI devices 111-113. Each one of BI devices 111-113 may be, for example, a device able to sense, capture or measure BI information. In a demonstrative embodiment, for example, BI device 111 may be or may include a video camera able to capture images and/or videos; BI device 112 may be or may include a microphone able to capture audio; and BI device 113 may be or may include an inventory tracking module.
Typically microphones are based on converting audible or inaudible (or both) incident sound to an electrical signal by measuring the vibration of a diaphragm or a ribbon. The microphone may be a condenser microphone, an electret microphone, a dynamic microphone, a ribbon microphone, a carbon microphone, or a piezoelectric microphone.
A sensor may be an image sensor for providing digital camera functionality, allowing an image (either as still images or as a video) to be captured, stored, manipulated and displayed. The image capturing hardware integrated with the sensor unit may contain a photographic lens (through a lens opening) focusing the required image onto a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens, for capturing the image and producing electronic image information representing the image. The image sensor may be based on Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS). The image may be converted into a digital format by an image sensor AFE (Analog Front End) and an image processor, commonly including an analog to digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. The unit may contain a video compressor, coupled between the analog to digital (A/D) converter and the transmitter for compressing the digital data video before transmission to the communication medium. The compressor may be used for lossy or non-lossy compression of the image information, for reducing the memory size and reducing the data rate required for the transmission over the communication medium. The compression may be based on a standard compression algorithm such as JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264, or ITU-T CCIR 601.
The image processor may produce a digital data video signal carrying digital data video according to a digital video format, and a transmitter may be coupled between the port and the image processor for transmitting the digital data video signal to the communication medium. The digital video format may be based on one out of: TIFF (Tagged Image File Format), RAW format, AVI (Audio Video Interleaved), DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards. Each one of BI devices 111-113 may capture raw data, and may transmit the raw data to central BI server 101, over wired and/or wireless links. Central server 101 may store the raw data in the central database 102. Central server 101 may include a data processor 103 able to process the raw data and generate a BI insight (e.g., "the line at cash register number 3 is longer than a pre-defined threshold value"). Based on the generated BI insight, a BI command generator 104 of central BI server 101 may generate a BI command (e.g., "open cash register number 4 now"), instructing one or more of BI devices 111-113 to modify its operation, to take an action, or to stop a current action. The BI command may be transmitted from central BI server 101 to the relevant BI device(s) 111-113.
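The insight-to-command flow of system 100 can be sketched as a simple threshold rule; the function name, the data shapes, and the threshold below are illustrative assumptions only, mirroring the cash-register example in the text:

```python
QUEUE_THRESHOLD = 4   # assumed pre-defined threshold for queue length

def generate_command(register_id: int, queue_count: int):
    """Turn a BI insight ("the line at a register is longer than a threshold")
    into a BI command instructing another device to open the next register."""
    if queue_count > QUEUE_THRESHOLD:
        return {"action": "open_register", "register": register_id + 1}
    return None   # queue within limits: no command is issued

print(generate_command(3, 5))   # {'action': 'open_register', 'register': 4}
print(generate_command(3, 2))   # None
```

In the centralized topology, this logic runs at data processor 103 and command generator 104; the distributed topology discussed next moves such logic into the devices themselves.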
The centralized topology of prior art system 100, particularly in an organization having branches in multiple cities or countries, may require an unnecessary transfer of large volumes of data from BI devices to remote BI servers, as well as unnecessary long-term storage of such data. Rather, short-term storage and local data processing may be utilized in a distributed topology to increase efficiency, reduce storage usage, reduce and distribute processing efforts, reduce latency, and reduce bandwidth usage.
The term 'retail' typically refers to the sale of goods and services from individuals or businesses to the end-user. Retailers are part of an integrated system called the supply chain. A retailer purchases goods or products in large quantities from manufacturers directly or through a wholesaler, and then sells smaller quantities to the consumer for a profit. Retailing can be done in fixed locations like stores or markets, door-to-door, or by delivery. Retailing includes subordinated services, such as delivery. The term "retailer" is also applied where a service provider services the needs of a large number of individuals, such as the public. Shops may be on residential streets, on streets with few or no houses, or in a shopping mall. Online retailing, a type of electronic commerce used for business-to-consumer (B2C) transactions, and mail order are forms of non-shop retailing. Shopping generally refers to the act of buying products. Sometimes this is done to obtain necessities such as food and clothing; sometimes it is done as a recreational activity. Recreational shopping often involves window shopping (just looking, not buying) and browsing, and does not always result in a purchase.
Retail is usually classified by type of products, such as food products, which typically require cold storage facilities; hard goods or durable goods ("hardline retailers"), such as appliances, electronics, furniture, sporting goods, etc., where the goods do not quickly wear out and provide utility over time; and soft goods or consumables, such as clothing, apparel, and other fabrics, which are consumed after one use or have a limited usage period (typically under three years).
There are various types of retailers categorized by marketing strategy. Department stores are typically very large stores offering a huge assortment of "soft" and "hard" goods, and often bear a resemblance to a collection of specialty stores; a retailer of such a store carries a variety of categories, has a broad assortment at average prices, and offers considerable customer service. Other types include Discount stores, which tend to offer a wide array of products and services but compete mainly on price, offering an extensive assortment of merchandise at affordable and cut-rate prices; Warehouse stores, which offer low-cost, often high-quantity goods piled on pallets or steel shelves, including warehouse clubs that charge a membership fee; Variety stores, which offer extremely low-cost goods with a limited selection; demographic-based retailers that target one particular segment (e.g., high-end retailers focusing on wealthy individuals); and Specialty stores, which give attention to a particular category and provide a high level of service to the customers, such as a pet store that specializes in selling dog food. Other types are Boutiques or concept stores, which are similar to specialty stores but are very small in size, only ever stock one brand, and are run by the brand that controls them; Convenience stores, which provide a limited amount of merchandise at above-average prices with a speedy checkout and are ideal for emergency and immediate purchases, as they often work with extended hours and stock on a daily basis; and Supermarkets, which offer a self-service store consisting mainly of grocery and limited non-food items. Malls commonly include a range of retail shops at a single outlet, and provide products, food and entertainment under one roof.
Point-of-sale (also called PoS or checkout) is the place where a retail transaction is completed, at which a customer makes a payment to the merchant in exchange for goods or services. At the point of sale the retailer would calculate the amount owed by the customer and provide options for the customer to make payment. The merchant will also normally issue a receipt for the transaction. The PoS in various retail industries uses customized hardware and software as per their requirements. Retailers may utilize weighing scales, scanners, electronic and manual cash registers, EFTPOS terminals, touch screens and any other wide variety of hardware and software available for use with PoS. For example, a grocery or candy store uses a scale at the point of sale, while bars and restaurants use software to customize the item or service sold when a customer has a special meal or drink request. The modern point of sale is often referred to as the point of service because it is not just a point of sale but also a point of return or customer order. Additionally it includes advanced features to cater to different functionality, such as inventory management, CRM, financials, warehousing, etc., all built into the PoS software. The retailing industry is one of the predominant users of PoS terminals.
A retail point of sale system typically includes a cash register (which in recent times comprises a computer, monitor, cash drawer, receipt printer, customer display and a barcode scanner), and the majority of retail PoS systems also include a debit/credit card reader. It can also include a conveyor belt, weight scale, integrated credit card processing system, a signature capture device and a customer pin pad device. PoS monitors use touch-screen technology for ease of use, and a computer is built into the monitor chassis for what is referred to as an all-in-one unit, which liberates counter space for the retailer. The PoS system software can typically handle myriad customer-based functions such as sales, returns, exchanges, layaways, gift cards, gift registries, customer loyalty programs, promotions, discounts and much more. PoS software can also allow for functions such as pre-planned promotional sales, manufacturer coupon validation, foreign currency handling and multiple payment types.
In consideration of the foregoing, it would be an advancement in the art to provide an improved networking or distributed functionality method and system that is simple, secure, cost-effective, reliable, low-latency, easy to use, has a minimum part count and minimum hardware, and/or uses existing and available components, protocols, programs and applications for providing better and additional functionalities, and provides a better user experience, in particular in a BI environment.
SUMMARY
A device is disclosed for generating a command in response to counting items using a sensor responsive to a phenomenon, for use with a communication network. The device may comprise a sensor for producing a sensor data in response to the phenomenon; a software and a processor for executing the software, the processor coupled to the sensor to receive the sensor data therefrom, and to produce a command in response to the sensor data; a transceiver coupled to the processor and operative for transmitting digital data to, and receiving digital data from, the network; and a single enclosure housing the sensor, the processor, and the transceiver. The device may be operative to produce the command in response to the count of items recognized in the sensor data, and may be addressable in the network. The items may be people and the device may be operative to count people in a queue. Alternatively or in addition, the items may be objects and the device may be operative to count objects on a shelf.
A Business Intelligence (BI) system is disclosed for commanding an actuator operation in response to first and second sensor outputs respectively associated with first and second phenomena, for use with a network. The system may comprise a first device comprising, or connectable to, the first sensor that responds to the first phenomenon, the first device is operative to transmit a first command corresponding to the first phenomenon over the network; a second device that may comprise, or may be connectable to, the second sensor that responds to the second phenomenon, the second device is operative to transmit a second command corresponding to the second phenomenon over the network; and a third device that may comprise, or may be connectable to, the actuator that affects a third phenomenon, the third device is operative to receive the first and second commands from the network and to activate the actuator in response to the first and second sensor data. The devices are addressable in the network, and each of the first and second devices may further comprise software and a processor for executing the software, the processor may be coupled to the respective sensor to receive the sensor data therefrom, and to produce a respective command in response to the sensor data. The first and second sensors may be of the same type or of distinct types, and may be operative to sense the same phenomenon. The first device, the second device, or both, may further be operative for counting objects on a shelf or people in a queue at a Point-of-Sale (PoS) of a store.
The first device, the second device, or the third device may be integrated in part or entirely in a fixed location, mobile, or hand-held PoS terminal. The PoS terminal may be a battery-operated portable electronic device that may be a notebook, a laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), an image processing device, a digital camera, a video recorder, or a handheld computing device. The PoS terminal may include a cash drawer, a receipt printer, a customer display, a barcode scanner, a debit/credit card reader, a conveyor belt, a weight scale, an integrated credit card processing system, a signature capture device, or a customer pin pad device. The integration may involve sharing a component, housing in the same enclosure, sharing the same processor, mounting onto the same surface, or sharing the same connector such as a power connector. The sensor may be a piezoelectric sensor that includes single crystal material or a piezoelectric ceramic and uses a transverse, longitudinal, or shear effect mode of the piezoelectric effect. Further, the sensor may comprise multiple sensors arranged as a directional sensor array operative to estimate the number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of the phenomenon impinging the sensor array.
The sensor may be a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, and wherein the thermoelectric sensor consists of, or comprises, a Positive Temperature Coefficient (PTC) thermistor, a Negative Temperature Coefficient (NTC) thermistor, a thermocouple, a quartz crystal, or a Resistance Temperature Detector (RTD). Further, the sensor may consist of, or comprise, a nanosensor, a crystal, or a semiconductor, and the sensor may be ultrasonic based. Further, the sensor may be an eddy-current sensor, a proximity sensor, a bulk or surface acoustic sensor, or an atmospheric or environmental sensor.
The sensor may be a radiation sensor that responds to radioactivity, nuclear radiation, alpha particles, beta particles, or gamma rays, and may be based on gas ionization. Further, the sensor may be a photoelectric sensor that responds to a visible or an invisible light, where the invisible light is infrared, ultraviolet, X-rays, or gamma rays, and the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell. The photoelectric sensor may be based on a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element.
The sensor may be a photosensitive image sensor array comprising multiple photoelectric sensors, for capturing an image and producing electronic image information representing the image, and the device may further comprise one or more optical lenses for focusing the received light and guiding the image, and the image sensor may be disposed approximately at an image focal point plane of the one or more optical lenses for properly capturing the image. An image processor may be coupled to the image sensor for providing a digital video data signal according to a digital video format, the digital video signal carrying digital video data based on the captured images, and the digital video format may be based on one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards. An intraframe or interframe compression based video compressor may be coupled to the image sensor for lossy or lossless compression of the digital video data, and the compression may be based on a standard compression algorithm which is one or more out of JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264, and ITU-T CCIR 601.
The sensor may be an electrochemical sensor that responds to an object's chemical structure, properties, composition, or reactions. The electrochemical sensor may be a pH meter or a gas sensor responding to a presence of radon, hydrogen, oxygen, or Carbon-Monoxide (CO), or the electrochemical sensor may be based on optical detection or on ionization and may be a smoke, a flame, or a fire detector, or may be responsive to combustible, flammable, or toxic gas. The sensor may be a physiological sensor that responds to parameters associated with a live body, and may be external to the sensed body, implanted inside the sensed body, attached to the sensed body, or worn on the sensed body. The physiological sensor may respond to body electrical signals and may be an Electroencephalography (EEG) or an Electrocardiography (ECG) sensor, or may respond to oxygen saturation, gas saturation, or a blood pressure in the sensed body.
The sensor may be an electroacoustic sensor that responds to an audible or inaudible sound, and may be an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing the incident-sound-induced motion of a diaphragm or a ribbon, and the microphone may consist of, or comprise, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
The sensor may be an image sensor for capturing a still or video image, and the device may further comprise an image processor for processing the captured image to count the items in the captured image. The image sensor may be a digital video sensor for capturing digital video content, and may further be operative for enhancing the video content using image stabilization, unsharp masking, or super-resolution. The image sensor may be a digital video sensor for capturing digital video content, and the image processor may further be operative for Video Content Analysis (VCA). The VCA may include Video Motion Detection (VMD), video tracking, egomotion estimation, identification, behavior analysis, situation awareness, dynamic masking, object detection, face recognition, automatic number plate recognition, tamper detection, or pattern recognition.
The actuator may be a light source that emits visible or non-visible light for illumination or indication, the non-visible light may be infrared, ultraviolet, X-rays, or gamma rays, and the light source may be an electric light source for converting electrical energy into light. The electric light source may consist of, or comprise, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
The actuator may be a sounder for converting electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves. The sounder may be a loudspeaker, such as an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or a planar magnetic loudspeaker, or a bending wave loudspeaker. The sounder may be operative to emit a single or multiple tones, in a continuous or intermittent operation. The sounder may be electromechanical or ceramic based, and may be an electric bell, a buzzer (or beeper), a chime, a whistle, or a ringer. The sounder may be a loudspeaker, and may be operative to store and play one or more digital audio content files, including a pre-recorded audio that is stored entirely or in part in the device or the system. The device or system may comprise a synthesizer for producing the digital audio content, which may comprise music content, the sound of an acoustical musical instrument such as a piano, a tuba, a harp, a violin, a flute, or a guitar, or may be a male or female human voice saying a syllable, a word, a phrase, a sentence, a short story, or a long story. A speech synthesizer may be used for producing a human speech, which may be Text-To-Speech (TTS) based, a concatenative type using unit selection, diphone synthesis, or domain-specific synthesis, or a formant type that is articulatory synthesis or hidden Markov model (HMM) based.
The actuator may be an electric thermoelectric actuator that is a heater or a cooler, may be operative for affecting the temperature of a solid, a liquid, or a gas object, and may be coupled to the object by conduction, convection, forced convection, thermal radiation, or by the transfer of energy by phase changes. The cooler may be based on a heat pump driving a refrigeration cycle using a compressor-based electric motor. The thermoelectric actuator may be an electric heater that is a resistance heater, a dielectric heater, or an induction heater, may be solid-state based, or may be an active heat pump system based on the Peltier effect.
The actuator may be a display for visually presenting information; the display may be a monochrome, grayscale, or color display and may consist of an array of light emitters or light reflectors. The display may be a projector based on an Eidophor, Liquid Crystal on Silicon (LCoS or LCOS), LCD, MEMS, or Digital Light Processing (DLP™) technology, or the projector may be a virtual retinal display. The display may be a 2D or 3D video display supporting Standard-Definition (SD) or High-Definition (HD) standards, and may be capable of scrolling, static, bold, or flashing presentation of the information. The display may be an analog display having an analog input interface supporting NTSC, PAL, or SECAM formats, and the analog input interface may include an RGB, VGA (Video Graphics Array), SVGA (Super Video Graphics Array), SCART, or S-video interface. The display may be a digital display having a digital input interface that includes an IEEE 1394, FireWire™, USB, SDI (Serial Digital Interface), HDMI (High-Definition Multimedia Interface), DVI (Digital Visual Interface), UDI (Unified Display Interface), DisplayPort, Digital Component Video, or DVB (Digital Video Broadcast) interface. Further, the display may be a Cathode-Ray Tube (CRT), a Field Emission Display (FED), an Electroluminescent Display (ELD), a Vacuum Fluorescent Display (VFD), an Organic Light-Emitting Diode (OLED) display, a passive-matrix OLED (PMOLED) display, an active-matrix OLED (AMOLED) display, a Liquid Crystal Display (LCD), a Thin Film Transistor (TFT) display, an LED-backlit LCD display, or an Electronic Paper Display (EPD) that is based on Gyricon, Electro-Wetting Display (EWD), or Electrofluidic display technology. Further, the display may be a laser video display that is based on a Vertical-External-Cavity Surface-Emitting-Laser (VECSEL) or a Vertical-Cavity Surface-Emitting Laser (VCSEL).
Alternatively or in addition, the display may be a segment display based on a seven- segment display, a fourteen-segment display, a sixteen-segment display, or a dot matrix display, and may be operative to only display digits, alphanumeric characters, words, characters, arrows, symbols, ASCII, non-ASCII characters, or any combination thereof.
The actuator may be a motion actuator that causes linear or rotary motion, and the system may further comprise a conversion mechanism for converting to rotary or linear motion based on a screw, a wheel and axle, or a cam. The conversion mechanism may be based on a screw, where the system may further include a leadscrew, a screw jack, a ball screw, or a roller screw, or may be based on a wheel and axle, where the system may further include a hoist, a winch, a rack and pinion, a chain drive, a belt drive, a rigid chain, or a rigid belt; alternatively, the motion actuator may further comprise a lever, a ramp, a screw, a cam, a crankshaft, a gear, a pulley, a constant-velocity joint, or a ratchet, for affecting the motion.
The motion actuator may be a pneumatic, hydraulic, or electrical actuator. An electrical actuator may be based on an electrical motor, which may be a brushed, a brushless, or an uncommutated DC motor, and may be a stepper motor that is a Permanent Magnet (PM) motor, a Variable Reluctance (VR) motor, or a hybrid synchronous stepper. The electrical motor may be an AC motor that is an induction motor, a synchronous motor, or an eddy current motor, and may be a single-phase AC induction motor, a two-phase AC servo motor, or a three-phase AC synchronous motor, and the AC motor may be a split-phase motor, a capacitor-start motor, or a Permanent-Split Capacitor (PSC) motor. The electrical motor may be an electrostatic motor, a piezoelectric actuator, or a MEMS-based motor. The motion actuator may be a linear hydraulic actuator, a linear pneumatic actuator, a Linear Induction electric Motor (LIM), or a Linear Synchronous electric Motor (LSM), and may be based on a piezoelectric motor, a Surface Acoustic Wave (SAW) motor, a Squiggle motor, an ultrasonic motor, a micro- or nanometer comb-drive capacitive actuator, a Dielectric or Ionic based Electroactive Polymer (EAP) actuator, a solenoid, a thermal bimorph, or a piezoelectric unimorph actuator.
The actuator may be a compressor or a pump and may be operative to move, force, or compress liquid, gas or slurry, and the pump may be a direct lift, an impulse, a displacement, a valveless, a velocity, a centrifugal, a vacuum, or a gravity pump. The pump may be a positive displacement pump that is a rotary lobe, a progressive cavity, a rotary gear, a piston, a diaphragm, a screw, a gear, a hydraulic, or a vane pump, may be an impulse pump that is a hydraulic ram, a pulser, or an airlift pump, or may be a rotodynamic pump that is a velocity pump, or is a centrifugal pump that is a radial flow, an axial flow, or a mixed flow pump.
The network may be a Body Area Network (BAN), the transceiver may be a BAN transceiver, and the device may further comprise a BAN port coupled to the BAN transceiver. The BAN may be a Wireless BAN (WBAN), the BAN port may be an antenna, the BAN transceiver may be a WBAN modem, and the BAN may be according to, or based on, IEEE 802.15.6 standard.
The network may be a Personal Area Network (PAN), the transceiver may be a PAN transceiver, and the device may further comprise a PAN port coupled to the PAN transceiver. The PAN may be a Wireless PAN (WPAN), the PAN port may be an antenna, and the PAN transceiver may be a WPAN modem, and the WPAN may be according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards, or may be a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards.
The network may be a Local Area Network (LAN); the transceiver may be a LAN transceiver, and the device may further comprise a LAN port coupled to the LAN transceiver. The LAN may be a wired LAN using a wired LAN medium; the LAN port may be a LAN connector; the LAN transceiver may be a LAN modem; the LAN may be Ethernet based; and the wired LAN may be according to, or based on, the IEEE 802.3-2008 standard. Further, the wired LAN medium may be based on twisted-pair copper cables; the LAN interface may be 10Base-T, 100Base-T, 100Base-TX, 100Base-T2, 100Base-T4, 1000Base-T, 1000Base-TX, 10GBase-CX4, or 10GBase-T; and the LAN connector may be RJ-45 type. Alternatively, the wired LAN medium may be based on an optical fiber, where the LAN interface is 10Base-FX, 100Base-SX, 100Base-BX, 100Base-LX10, 1000Base-CX, 1000Base-SX, 1000Base-LX, 1000Base-LX10, 1000Base-ZX, 1000Base-BX10, 10GBase-SR, 10GBase-LR, 10GBase-LRM, 10GBase-ER, 10GBase-ZR, or 10GBase-LX4, and the LAN connector may be a fiber-optic connector. Similarly, the LAN may be a Wireless LAN (WLAN); the LAN port may be a WLAN antenna; the LAN transceiver may be a WLAN modem, and the WLAN may be according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac.
The network may be a packet-based or a circuit-switched-based Wide Area Network (WAN), the transceiver may be a WAN transceiver, and the device may further comprise a WAN port coupled to the WAN transceiver. The WAN may be a wired WAN that uses a wired WAN medium, the WAN port may be a WAN connector, and the WAN transceiver may be a WAN modem, and the wired WAN medium may comprise a wiring primarily installed for carrying a service signal to a building. The wired WAN medium may comprise one or more telephone wire pairs primarily designed for carrying an analog telephone signal, and the network may be using Digital Subscriber Line / Loop (DSL). The network may be based on Asymmetric Digital Subscriber Line (ADSL), ADSL2, or ADSL2+, according to, or based on, ANSI T1.413, ITU-T Recommendation G.992.1, ITU-T Recommendation G.992.2, ITU-T Recommendation G.992.3, ITU-T Recommendation G.992.4, or ITU-T Recommendation G.992.5, or the network may be based on Very-high-bit-rate Digital Subscriber Line (VDSL), according to, or based on, ITU-T Recommendation G.993.1 or ITU-T Recommendation G.993.2.
The WAN may be a wireless broadband network over a licensed or unlicensed radio frequency band, the WAN port may be an antenna, and the WAN transceiver may be a wireless modem, and the unlicensed radio frequency band may be an Industrial, Scientific and Medical (ISM) radio band. Alternatively or in addition, the wireless network may be a satellite network, the antenna may be a satellite antenna, and the wireless modem may be a satellite modem. Further, the wireless network may be a WiMAX network, the antenna may be a WiMAX antenna, and the wireless modem may be a WiMAX modem, and the WiMAX network may further be according to, or based on, IEEE 802.16-2009. Furthermore, the wireless network may be a cellular telephone network, the antenna may be a cellular antenna, and the wireless modem may be a cellular modem, and the cellular telephone network may be a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 1xRTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, and the cellular telephone network may be a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, or MBWA, or may be based on IEEE 802.20-2008. The network may be a wireless network using a wireless communication over a licensed or an unlicensed radio frequency band, such as an Industrial, Scientific and Medical (ISM) radio band.
Any communication or connection herein, such as the connection of peripherals in general, and memories in particular, to a processor, may use a bus. A communication link (such as Ethernet, or any other LAN, PAN, or WAN communication link) may also be regarded as a bus herein. A bus may be an internal bus, an external bus, or both. A bus may be a parallel or a bit-serial bus. A bus may be based on a single or on multiple serial links or lanes. The bus medium may be electrical-conductor based, such as wires or cables, or may be based on a fiber-optic cable. The bus topology may use point-to-point, multi-drop (electrical parallel), or daisy-chain arrangements, and may be based on hubs or switches. A point-to-point bus may be full-duplex or half-duplex. Further, a bus may use proprietary specifications, or may be based on, similar to, or substantially or fully compliant with an industry standard (or any variant thereof), and may be hot-pluggable. A bus may be defined to carry only digital data signals, or may also be defined to carry a power signal (commonly DC voltages), either in separate and dedicated cables and connectors, or carrying the power and digital data together over the same cable. A bus may support a master / slave configuration. A bus may carry a separate and dedicated timing signal or may use a self-clocking line code.
The communication may be based on a PAN, a LAN, or a WAN communication link, may use private or public networks, and may be packet-based or circuit-switched. The first bus or the second bus (or both) may each be based on Ethernet and may be substantially compliant with the IEEE 802.3 standard, and may be based on one out of: 100BaseT/TX, 1000BaseT/TX, 10 Gigabit Ethernet substantially (or in full) according to the IEEE Std 802.3ae-2002 standard, 40 Gigabit Ethernet, and 100 Gigabit Ethernet substantially according to the IEEE P802.3ba standard. The first bus or the second bus (or both) may each be based on a multi-drop, a daisy-chain topology, or a point-to-point connection, may use half-duplex or full-duplex, and may employ a master / slave scheme. The first bus or the second bus (or both) may each be a wire-based, point-to-point, bit-serial bus, where a timing, clocking, or strobing signal is carried over dedicated wires, or using a self-clocking scheme. Each of the buses (or both) may use a fiber-optic cable as the bus medium, and the adapter may comprise a fiber-optic connector for connecting to the fiber-optic cable.
The networks or the data paths described herein may be of similar, identical, or different geographical scale or coverage types and data rates, such as NFCs, PANs, LANs, MANs, or WANs, or any combination thereof. The networks or the data paths may use similar, identical, or different types of modulation, such as Amplitude Modulation (AM), Frequency Modulation (FM), or Phase Modulation (PM), or any combination thereof. The networks or the data paths may use similar, identical, or different types of duplexing, such as half- or full-duplex, or any combination thereof. The networks or the data paths may be based on similar, identical, or different types of switching, such as circuit-switched or packet-switched, or any combination thereof. The networks or the data paths may have similar, identical, or different ownership or operation, such as private or public networks, or any combination thereof.
The above summary is not an exhaustive list of all aspects of the present invention. Indeed, the inventor contemplates that his invention includes all systems and methods that can be practiced from all suitable combinations and derivatives of the various aspects summarized above, as well as those disclosed in the detailed description below and particularly pointed out in the claims filed with the application. Such combinations have particular advantages not specifically recited in the above summary.
BRIEF DESCRIPTION OF THE DRAWINGS
For simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity of presentation. Furthermore, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. The invention is herein described, by way of non-limiting examples only, with reference to the accompanying drawings, wherein like designations denote like elements. Understanding that these drawings only provide information concerning typical embodiments of the invention and are not therefore to be considered limiting in scope:
FIG. 1A illustrates a schematic electrical diagram of an Internet-connected computer system;
FIG. 1B is a schematic block diagram illustration of a prior art BI system having a centralized topology;
FIG. 2 is a schematic block diagram illustration of a distributed BI system;
FIG. 3 is a schematic block diagram illustration of a BI device capable of locally processing BI data;
FIG. 4 is a schematic block diagram illustration of a distributed BI system implemented in a store or a supermarket;
FIG. 5 is a schematic block diagram illustration of a distributed BI system implemented in a chain of multiple stores;
FIG. 6 is a flow chart of a method of configuring a distributed BI system;
FIG. 7A is a schematic illustration of a User Interface (UI) for configuring a BI device in a distributed BI system;
FIG. 7B is a schematic illustration of another User Interface (UI) for configuring a master BI device in a distributed BI system; and
FIG. 8 is a flow chart of analyzing and handling a line of customers at a Point-of-Sale (PoS) terminal.
DETAILED DESCRIPTION
The principles and operation of an apparatus according to the present invention may be understood with reference to the figures and the accompanying description wherein similar components appearing in different figures are denoted by identical reference numerals. The drawings and descriptions are conceptual only. In actual practice, a single component can implement one or more functions; alternatively or in addition, each function can be implemented by a plurality of components and devices. In the figures and descriptions, identical reference numerals indicate those components that are common to different embodiments or configurations. Identical numerical references (even in the case of using different suffix, such as 5, 5a, 5b and 5c) refer to functions or actual devices that are either identical, substantially similar, or having similar functionality. It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the apparatus, system, and method of the present invention, as represented in the figures herein, is not intended to limit the scope of the invention, as claimed, but is merely representative of embodiments of the invention. It is to be understood that the singular forms "a," "an," and "the" herein include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces. 
By the term "substantially" it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
People counting is a tool commonly used as part of the BA in the retail environment, where a device is used to measure the number and direction of people traversing a certain passage or entrance per unit time. The resolution of the measurement is entirely dependent on the sophistication of the technology employed. The device is often used at the entrance of a building so that the total number of visitors can be recorded. Many different technologies are used in people counter devices, such as infrared beams, computer vision, thermal imaging, and pressure-sensitive mats. There are various reasons for counting people in a retail environment. In retail stores, counting is done as a form of intelligence-gathering. The use of people counting systems in the retail environment is necessary to calculate the conversion rate, i.e., the percentage of a store's visitors that makes purchases. This is the key performance indicator of a store's performance and is superior to traditional methods, which only take into account sales data. Together, traffic counts and conversion rates show how a store arrived at its sales. Accurate visitor counting is also useful in the process of optimizing staff shifts; staff requirements are often directly related to the density of visitor traffic, and services such as cleaning and maintenance are typically done when traffic is at its lowest. More advanced people counting technology can also be used for queue management and customer tracking. Shopping mall marketing professionals rely on visitor statistics to measure their marketing efforts. Often, shopping mall owners measure marketing effectiveness with sales, and also use visitor statistics to scientifically measure marketing effectiveness.
Marketing metrics such as CPM (Cost Per Thousand) and SSF (Shoppers per Square Foot) are performance indicators that shopping mall owners monitor to determine rent according to the total number of visitors to the mall or according to the number of visitors to each individual store in the mall.
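As a non-limiting illustration, the conversion rate and traffic-normalized metrics described above reduce to simple arithmetic. The following Python sketch uses hypothetical function names and example figures that are not part of the disclosure:

```python
# Illustrative retail metric calculations (all figures are hypothetical).

def conversion_rate(visitors: int, transactions: int) -> float:
    """Percentage of a store's visitors that made a purchase."""
    return 100.0 * transactions / visitors

def shoppers_per_square_foot(visitors: int, floor_area_sqft: float) -> float:
    """SSF metric: visitor traffic normalized by store floor area."""
    return visitors / floor_area_sqft

# Example: 1,200 counted visitors, 300 purchases, a 4,000 sq ft store.
rate = conversion_rate(1200, 300)           # 25.0 (%)
ssf = shoppers_per_square_foot(1200, 4000)  # 0.3
print(f"conversion rate: {rate:.1f}%  SSF: {ssf:.2f}")
```

Because the conversion rate divides transactions by counted visitors, any systematic counting error propagates directly into this key performance indicator, which is why counting accuracy is emphasized below.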
The basic people counting device is a Tally Counter, a hand-held device (a.k.a. clicker-counter) having a counter based on one press per person. Another simple people counter type is based on infrared beams, where a horizontal infrared beam is formed across an entrance and is typically linked to a small display unit at the side of the doorway, or can also be linked to a PC or send data via wireless links and GPRS. Such a beam counts a 'tick' when the beam is broken; therefore it is normal to divide the 'ticks' by two to get visitor numbers. Dual beam units are also available from some suppliers and can provide low cost directional flow 'in' and 'out' data. Accuracy depends highly on the width of the entrance monitored and the volume of traffic.
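The single- and dual-beam counting logic described above can be sketched in a few lines. This is a non-limiting illustration; the event representation and function names are hypothetical, not taken from the disclosure:

```python
# Sketch of infrared-beam people counting.

def visitors_from_ticks(ticks: int) -> int:
    """A single horizontal beam is broken once on entry and once on
    exit, so the raw tick count is conventionally divided by two."""
    return ticks // 2

def directional_counts(events):
    """Dual-beam unit: the order in which beams A and B are broken
    indicates direction ('A' then 'B' = in, 'B' then 'A' = out)."""
    ins = outs = 0
    for first, second in events:  # each event is a pair of beam breaks
        if (first, second) == ("A", "B"):
            ins += 1
        elif (first, second) == ("B", "A"):
            outs += 1
    return ins, outs

print(visitors_from_ticks(34))                                   # 17
print(directional_counts([("A", "B"), ("B", "A"), ("A", "B")]))  # (2, 1)
```

The division by two assumes every visitor both enters and exits through the monitored passage; two people crossing side by side break the beam once, which is one source of the width-dependent inaccuracy noted above.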
A people counting technique may be based on computer vision systems. Computer vision systems typically use either a Closed-Circuit Television (CCTV) camera or an IP camera to feed a signal into a computer or embedded device. Some computer vision systems have been embedded directly into standard IP network cameras. This allows for distributed, cost-efficient, and highly scalable systems where all image processing is done on the camera using the standard built-in CPU. This also dramatically reduces bandwidth requirements, as only the counting data have to be sent over the Ethernet. Accuracy varies between systems and installations, as background information needs to be digitally removed from the scene in order to recognize, track, and count people. This means that CCTV-based counters can be vulnerable to light level changes and shadows, which can lead to inaccurate counting. Lately, robust and adaptive algorithms have been developed that can compensate for this behavior, and excellent counting accuracy can today be obtained for both outdoor and indoor counting using computer vision.
Thermal imaging systems for people counting use array sensors which detect heat sources, rather than using cameras as in computer vision systems. These systems are typically implemented using embedded technology and are mounted overhead for high accuracy. Since thermal imaging systems detect the heat emitted by people, they can be susceptible to external weather conditions that reduce the amount of heat emitted from a person walking in from an outdoor environment.
Another people counting technology uses 3D imaging. Various technologies exist for acquiring a 3D image of the scene, such as two video cameras reproducing human 3D vision (also known as stereo-vision), or Time-of-Flight (TOF) sensors that use pulsed or modulated emitted IR light to acquire the 3D image of the scene. Similarly, structured light sensors may be used to acquire the 3D image of the scene using a projected light pattern.
A BI system may typically generate two types of BI insights. A first type of BI insights is a statistical insight based on long-term data, for example, product sale during an entire month or year. For this type of BI insights, the BI server may operate efficiently as more processing may be required and real-time results are not important. A second type of BI insights is an insight based on short-term data or temporal data, or data which is time-sensitive, or BI insights which are relevant or useful only for a short time-frame or only for immediate attention, for example, detection of a long queue of customers at a cash register or a PoS terminal. This type of data and insight may require reduced processing efforts but immediate and faster response. The distributed BI system may adequately handle and generate both types of BI insights.
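The split between the two insight types described above can be illustrated by a minimal routing rule: time-sensitive insights are handled locally for fast response, while statistical long-term insights go to a central server for heavier batch processing. The insight type names and the routing rule below are assumptions for illustration only:

```python
# Illustrative routing of BI insights by time-sensitivity.

# Insight types assumed time-sensitive (hypothetical names).
TIME_SENSITIVE = {"long_queue", "shelf_empty"}

def route_insight(insight: dict) -> str:
    """Return where an insight should be generated and handled."""
    if insight["type"] in TIME_SENSITIVE:
        return "local-device"  # immediate reaction, little processing
    return "bi-server"         # heavy statistics, no real-time need

print(route_insight({"type": "long_queue"}))     # local-device
print(route_insight({"type": "monthly_sales"}))  # bi-server
```

A distributed BI system thus serves both paths: the same sensing devices that feed the central server can react locally when latency matters.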
Accordingly, a BI system may be a distributed Decision Support System (DSS) or a distributed Business Intelligence (BI) system. Multiple BI devices may be able to communicate directly or indirectly among themselves, using a communication protocol, over wired or wireless links, as a Peer to Peer (P2P) network or other suitable network topology allowing multiple parallel communication sessions. Each BI device may capture or sense or measure data, may send data to other BI devices, may receive data from other BI devices, may locally process BI data, and may locally generate BI insights. Based on the BI insights, a first BI device may command a second BI device to modify its operation, and the second BI device may report back to the first BI device, indicating performance or non-performance of a BI command, or the second BI device may provide to the first BI device the requested information.
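The command-and-report exchange between peer BI devices described above can be sketched as follows. This is a non-limiting illustration: the class, the message fields, and the in-memory "inbox" standing in for a network session are all hypothetical:

```python
# Sketch of a first BI device commanding a second, which reports back.

import json

class BIDevice:
    def __init__(self, address: str):
        self.address = address  # devices are addressable in the network
        self.inbox = []

    def send(self, other: "BIDevice", message: dict) -> None:
        # Stands in for a wired/wireless P2P session (e.g., over TCP/IP).
        other.inbox.append(json.dumps(message))

    def handle(self, sender: "BIDevice") -> None:
        for raw in self.inbox:
            msg = json.loads(raw)
            if msg.get("command") == "modify_operation":
                # ... modify the local actuator operation here ...
                self.send(sender, {"report": "performed",
                                   "from": self.address})
        self.inbox.clear()

camera = BIDevice("10.0.0.11")
speaker = BIDevice("10.0.0.12")
camera.send(speaker, {"command": "modify_operation", "reason": "long_queue"})
speaker.handle(camera)
print(camera.inbox)  # contains the speaker's performance report
```

The report path closes the loop: the commanding device learns whether the command was performed, or receives the requested information instead.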
Reference is made to FIG. 2, which is a schematic block diagram illustration of a distributed BI system 200. The system 200 may include multiple BI devices, for example, BI devices 211-214. Optionally, system 200 may further include a BI server 201 and a central database 202, which may optionally be implemented as a Network Attached Database (NADB). The BI devices 211-214, the BI server 201 and the database 202 may communicate with each other over one or more networks.
Each one of BI devices 211-214 may be, for example, a device able to sense, capture or measure BI information using a sensor. Alternatively or in addition, each one of BI devices 211- 214 may be, for example, a device able to activate or act using an actuator. In a non-limiting example, BI device 211 may include a video camera 221 able to capture images and/or videos; BI device 212 may include an audio loudspeaker 222; BI device 213 may include a workforce allocator 223; and BI device 214 may be or may include a Point of Sale (PoS) terminal 224. Each one of BI devices 211-214 may be able to communicate, directly or indirectly, with the other three of BI devices 211-214, over wired and/or wireless communication links and by utilizing a suitable communication protocol (e.g., TCP/IP).
A speaker may be a sounder which converts electrical energy to sound waves transmitted through the air, an elastic solid material, or a liquid, usually by means of a vibrating or moving ribbon or diaphragm. The sound may be audible or inaudible (or both), and may be omnidirectional, unidirectional, bidirectional, or provide other directionality or polar patterns. A sounder may be an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker.
An example of a BI device structure is shown in FIG. 2 with regard to BI device 211, illustrated in expanded form to show its elements. Thus, BI device 211 comprises, in addition to video camera 221 as the BI element, a local short-term database 231, a data processor 232, a command-sending module 233 and a communication module 234.
In a demonstrative example, the video camera 221 may capture one or more images or a video of a store area surrounding the PoS terminal 224. The captured images or video may be stored in local short-term database 231, and may be processed locally by the data processor 232, which may generate a BI insight (e.g., "the line of customers at cash register 4 exceeds a threshold value"). The BI insight may optionally include a BI command (e.g., "open an additional PoS terminal"). The BI command may be sent by command-sending module 233 of BI device 211 to other BI devices 212-214, which may act upon the BI insight. For example, workforce allocator 223 may determine that the BI command requires that an additional PoS terminal be open, and may allocate a suitable employee to operate the additional PoS terminal; and audio loudspeaker 222 may announce an audio message to summon that employee to the additional PoS terminal. All these operations may be performed by BI devices 211-214, without the need to transmit or receive data to or from a remote or local BI server, and without the need to wait for a remote or local BI server to receive and process the data. Communication module 234 of BI device 211 may make the BI insight available to other BI devices, to allow such other BI devices to utilize the BI insight in order to generate additional BI insights.
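The serverless peer-to-peer flow in the example above can be sketched in a few lines. The following Python illustration is purely hypothetical (class and device names are invented): a camera device generates an insight from local processing and broadcasts the resulting command directly to its peers, which each act on it.

```python
# Illustrative sketch (all names hypothetical) of the peer-to-peer flow:
# BI device 211 generates an insight locally and broadcasts a command to
# its peers, which act on it without involving a central BI server.
class BIDevice:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.log = []   # commands this device has acted upon

    def broadcast(self, command):
        for peer in self.peers:
            peer.handle(command)

    def handle(self, command):
        self.log.append(command)  # each peer acts on the command locally

camera = BIDevice("camera-211")
speaker = BIDevice("loudspeaker-212")
allocator = BIDevice("workforce-allocator-213")
camera.peers = [allocator, speaker]

queue_length, threshold = 7, 5   # result of local image processing
if queue_length > threshold:     # local BI insight
    camera.broadcast("open an additional PoS terminal")

assert allocator.log == ["open an additional PoS terminal"]
assert speaker.log == ["open an additional PoS terminal"]
```

Note that no message passes through a server object at all; the decision, the command, and the actions stay among the peers.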
The image from camera 221 may be processed using image processing techniques. The image processing may further include face detection (also known as face localization), which includes an algorithm for identifying a group of pixels within a digitally-acquired image that relates to the existence, locations and sizes of human faces. Common face-detection algorithms focus on the detection of frontal human faces, while other algorithms attempt to solve the more general and difficult problem of multi-view face detection; that is, the detection of faces that are either rotated along the axis from the face of the observer (in-plane rotation), or rotated along the vertical or left-right axis (out-of-plane rotation), or both. Various face-detection techniques and devices (e.g., cameras) having face detection features are disclosed in U.S. Patent No. 5,870,138 to Smith et al., entitled: "Facial Image Processing", in U.S. Patent No. 5,987,154 to Gibbon et al., entitled: "Method and Means for Detecting People in Image Sequences", in U.S. Patent No. 6,128,397 to Baluja et al., entitled: "Method for Finding All Frontal Faces in Arbitrarily Complex Visual Scenes", in U.S. Patent No. 6,188,777 to Darrell et al., entitled: "Method and Apparatus for Personnel Detection and Tracking", in U.S. Patent No. 6,282,317 to Luo et al., entitled: "Method for Automatic Determination of Main Subjects in Photographic Images", in U.S. Patent No. 6,301,370 to Steffens et al., entitled: "Face Recognition from Video Images", in U.S. Patent No. 6,332,033 to Qian, entitled: "System for Detecting Skin-Tone Regions within an Image", in U.S. Patent No. 6,404,900 to Qian et al., entitled: "Method for Robust Human Face Tracking in Presence of Multiple Persons", in U.S. Patent No. 6,407,777 to DeLuca, entitled: "Red-Eye Filter Method and Apparatus", in U.S. Patent No. 7,508,961 to Chen et al., entitled: "Method and System for Face Detection in Digital Images", in U.S. Patent No. 7,317,815 to Steinberg et al., entitled: "Digital Image Processing Composition Using Face Detection Information", in U.S. Patent No. 7,315,630 to Steinberg et al., entitled: "Perfecting a Digital Image Rendering Parameters within Rendering Devices using Face Detection", in U.S. Patent No. 7,110,575 to Chen et al., entitled: "Method for Locating Faces in Digital Color Images", in U.S. Patent No. 6,526,161 to Yan, entitled: "System and Method for Biometrics-Based Facial Feature Extraction", in U.S. Patent No. 6,516,154 to Parulski et al., entitled: "Image Revising Camera and Method", in U.S. Patent No. 6,504,942 to Hong et al., entitled: "Method and Apparatus for Detecting a Face-Like Region and Observer Tracking Display", in U.S. Patent No. 6,501,857 to Gotsman et al., entitled: "Method and System for Detecting and Classifying Objects in an Image", and in U.S. Patent No. 6,473,199 to Gilman et al., entitled: "Correcting Exposure and Tone Scale of Digital Images Captured by an Image Capture Device", which are all incorporated in their entirety for all purposes as if fully set forth herein. Another camera with human face detection means is disclosed in U.S. Patent No. 6,940,545 to Ray et al., entitled: "Face Detecting Camera and Method", which is incorporated in its entirety for all purposes as if fully set forth herein. The image processing may use algorithms and techniques described in the book entitled: "The Image Processing Handbook", Sixth Edition, by John C. Russ, from CRC Press, ISBN: 978-1-4398-4063-4, as well as algorithms and techniques described in U.S.
patents RE 33,682, RE 31,370, 4,047,187, 4,317,991, 4,367,027, 4,638,364, 5,291,234, 5,386,103, 5,488,429, 5,638,136, 5,642,431, 5,710,833, 5,724,456, 5,781,650, 5,812,193, 5,818,975, 5,835,616, 5,870,138, 5,978,519, 5,991,456, 6,097,470, 6,101,271, 6,148,092, 6,151,073, 6,192,149, 6,249,315, 6,263,113, 6,268,939, 6,393,148, 6,421,468, 6,438,264, 6,456,732, 6,459,436, 6,504,951, 7,466,866 and 7,508,961, which are all incorporated in their entirety for all purposes as if fully set forth herein.
The image processing may further include an algorithm for motion detection, comparing the current image with a reference image and counting the number of different pixels, where the image sensor or the digital camera is assumed to be in a fixed location and thus assumed to capture the same scene. Since images will naturally differ due to factors such as varying lighting, camera flicker, and CCD dark currents, pre-processing is useful to reduce the number of false positive alarms. More complex algorithms are necessary to detect motion when the camera itself is moving, or when the motion of a specific object must be detected in a field containing other movement which can be ignored.
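The frame-differencing approach described above can be sketched minimally as follows. This is a hypothetical illustration, not the patent's implementation; the pre-processing against lighting variation and sensor noise is reduced here to a per-pixel tolerance, and frames are modeled as flat lists of grayscale values.

```python
# Minimal sketch of fixed-camera motion detection by frame differencing:
# count pixels whose difference from the reference frame exceeds a noise
# tolerance, and report motion when enough pixels have changed.
def motion_detected(reference, current, pixel_tolerance=10, min_changed=5):
    changed = sum(
        1 for ref, cur in zip(reference, current)
        if abs(ref - cur) > pixel_tolerance
    )
    return changed >= min_changed

ref   = [100] * 20               # flat reference frame (grayscale values)
noisy = [103] * 20               # small sensor noise only -> no motion
moved = [100] * 12 + [200] * 8   # 8 pixels changed strongly -> motion

assert not motion_detected(ref, noisy)
assert motion_detected(ref, moved)
```

The two thresholds play the roles named in the text: `pixel_tolerance` absorbs lighting flicker and dark-current noise, while `min_changed` decides when the count of differing pixels constitutes motion.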
The image processing may further include video enhancement such as video denoising, image stabilization, unsharp masking, and super-resolution. Further, the image processing may include Video Content Analysis (VCA), where the video content is analyzed to detect and determine temporal events based on multiple images; VCA is commonly used for entertainment, healthcare, retail, automotive, transport, home automation, safety and security. VCA functionalities include Video Motion Detection (VMD), video tracking, and egomotion estimation, as well as identification, behavior analysis and other forms of situation awareness. A dynamic masking functionality involves blocking a part of the video signal based on the signal itself, for example because of privacy concerns. An egomotion estimation functionality involves determining the location of a camera, or estimating the camera motion relative to a rigid scene, by analyzing its output signal. Motion detection is used to determine the presence of relevant motion in the observed scene, while object detection is used to determine the presence of a type of object or entity, for example a person or a car, as well as fire and smoke detection. Similarly, face recognition and Automatic Number Plate Recognition may be used to recognize, and therefore possibly identify, persons or cars. Tamper detection is used to determine whether the camera or the output signal has been tampered with, and video tracking is used to determine the location of persons or objects in the video signal, possibly with regard to an external reference grid. A pattern is defined as any form in an image having discernible characteristics that provide a distinctive identity when contrasted with other forms.
Pattern recognition may also be used for ascertaining differences, as well as similarities, between patterns under observation and partitioning the patterns into appropriate categories based on these perceived differences and similarities; it may include any procedure for correctly identifying a discrete pattern, such as an alphanumeric character, as a member of a predefined pattern category. Further, the video or image processing may use, or be based on, other suitable algorithms and techniques.
Optionally, BI devices 211-214 may send BI insights also to BI server 201, which may store them in the central database 202 for subsequent, offline processing, for example, in order to generate further insights with regard to the efficiency of system 200 or other characteristics of the operation of system 200.
Reference is made to FIG. 3, which is a schematic block diagram illustration of a BI device 300 capable of locally processing BI data. BI device 300 may include, for example, a sensor 301, a receiver 302, a transmitter 303, a short-term database 304, a decision maker 305, an event identifier 306, a requestor module 307, a data provider 308, an acknowledgement module 309, an operations modifier 310, an actuator 312, and a data converter 311. The BI device 300 may be implemented using suitable hardware components and/or software modules, for example, a processor, an Integrated Circuit (IC), a memory unit, or the like.
Sensor 301 may be able to sense, capture, acquire or measure one or more types of data, for example, images, video, audio, audio/video, odor(s), movement, data related to the location of a cellular phone, or the like. The sensor 301 may transfer such data to a short-term database 304, which may store the sensed data.
Optionally, the receiver 302 may be able to receive raw data, processed data and/or BI insights from an external source, for example, from another BI device. Such data may then be stored locally in short-term database 304.
A decision maker 305 may analyze the data stored in the short-term database 304 and may generate BI decisions. In a demonstrative example, the sensor 301 may capture images of a region surrounding a PoS terminal; the images may be stored locally in the short-term database 304; and the decision maker 305 may analyze the images and, using a suitable video analytics algorithm, may determine that the number of customers waiting in line is greater than a predefined or user-configurable threshold value. The decision maker 305 may thus generate a BI decision, for example, "open an additional PoS terminal and summon a store employee to operate it". The decision may be transferred from the decision maker 305 to the transmitter 303, which may send the decision (or a command or request representing the decision) to other BI devices, for example, to an additional PoS terminal to be opened, to a resource allocator which may allocate an employee to the additional PoS terminal, and to an audio loudspeaker able to announce a message to summon the employee to the additional PoS terminal. Additionally or alternatively, an event identifier 306 may analyze the data stored in the short-term database 304 and may identify an event, a condition, a problem which may require a solution, or an opportunity which may be fulfilled. In a demonstrative example, the sensor 301 may capture a video of a bottle of liquid spilling in aisle number four; the event identifier 306 (or a video analytics algorithm embedded within the video camera, or associated with the video camera) may analyze the video to recognize the problem, for example, by utilizing a video analytics algorithm as known in the art, or by utilizing a video analytics algorithm which identifies a change across a video stream or across multiple images of the same region (e.g., a change persistent over a pre-defined time period, or a change in an amount greater than a threshold value), the change being reported to the database 304.
In response, the event identifier 306 may generate a BI command to summon a suitable employee to clean up that area. The BI insight or command may be transmitted to other BI devices by the transmitter 303, for example, to a workforce allocator able to select a suitable and/or available employee for the cleaning task, and to an audio loudspeaker which may summon that employee for a cleaning task at aisle number four.
It is noted that the event identifier 306 may not necessarily analyze the captured video itself. For example, a video analytics algorithm may operate within the video camera(s), analyzing the captured video and creating a new line or a new record in the database when a threshold is exceeded. Then, the event identifier 306 may determine whether the event is an event which requires attention or handling. Multiple lines or records may be generated for the same event, or for different events, within a short period; for example, if a customer queue was identified and handled, then the event identifier 306 may refrain from identifying or acting on the same customer queue which is again identified 40 seconds later (e.g., while the store personnel are already attending to an announcement).
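The suppression behavior described above amounts to de-duplicating events within a cooldown window. The following Python sketch is hypothetical (class name, method names, and the 60-second window are assumptions for illustration; only the 40-second re-detection interval comes from the text):

```python
# Hypothetical sketch of event de-duplication: an event already acted upon
# is suppressed if re-detected within a cooldown window.
class EventIdentifier:
    def __init__(self, cooldown_s=60):
        self.cooldown_s = cooldown_s
        self.last_handled = {}  # event key -> time it was last handled

    def should_act(self, event_key, now_s):
        last = self.last_handled.get(event_key)
        if last is not None and now_s - last < self.cooldown_s:
            return False  # same event re-detected while still being handled
        self.last_handled[event_key] = now_s
        return True

ident = EventIdentifier(cooldown_s=60)
assert ident.should_act("queue-at-register-4", now_s=0)       # first detection
assert not ident.should_act("queue-at-register-4", now_s=40)  # 40 s later: suppressed
assert ident.should_act("queue-at-register-4", now_s=120)     # cooldown passed
```

Keying events by a stable identifier (here, the register number) is what lets the identifier distinguish "the same queue again" from a genuinely new event.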
In another demonstrative example, the event identifier 306 may identify an opportunity. For example, the sensor 301 may capture audio uttered by two customers in aisle number two, and the event identifier 306 may analyze the captured audio and may recognize that the audio is spoken in a particular foreign language (e.g., by utilizing a voice recognition or speech recognition algorithm as is known in the art, by utilizing a suitable speech-to-text converter, a Natural Language Processing algorithm, or the like), thereby leading the event identifier 306 to generate a BI command that summons to that aisle in the store an employee who is known to be fluent in that foreign language, in order to assist those customers. In a similar example, the event identifier 306 may analyze the captured audio and may recognize that a first customer asks a second customer where a particular product may be found; and in response, the event identifier 306 may generate a BI command that summons a suitable employee to approach that store area in order to assist the first customer. Optionally, the system may utilize a Radio Frequency ID (RFID) tagging sub-system (e.g., store employees may wear or carry RFID tags, and RFID sensors or readers or transceivers may be scattered around the store) in order to determine whether the persons talking in a store area are customers or employees; only after determining that the persons are indeed customers and not employees (or that there are no employees in the group of persons talking) may the system proceed to identify their needs and to act upon such needs.
Optionally, a requestor module 307 may request or obtain additional data from external sources or from other BI devices, and the data may be used within the BI device 300 in order to support decisions or operations of the decision maker 305 and/or the event identifier 306. In a demonstrative example, the decision maker 305 may determine that it may be beneficial to open an additional PoS terminal in order to shorten a line of customers at a currently-open PoS terminal. The requestor module 307 may dispatch a query to one or more other PoS terminals, to inquire whether any PoS terminal is unmanned or inactive and thus may be opened. Similarly, the requestor module 307 may send a query to a workforce allocator, to inquire whether there is a suitable employee (e.g., an employee who is able and authorized to operate a PoS terminal) who is currently available. The requestor module 307 may thus obtain, from external sources and/or from other BI devices, an identification of a particular PoS terminal which may be activated, and an identification of a particular employee who may be summoned to that PoS terminal; and this data may be incorporated into the particular decision generated by the decision maker 305.
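The query-and-combine behavior of the requestor module can be sketched as follows. This is a hypothetical illustration (the function and the dictionary shapes are invented): the deciding device asks peer PoS terminals which, if any, is inactive, and asks the workforce allocator for an available, qualified employee, then folds both answers into one decision.

```python
# Illustrative sketch of the requestor-module queries: find an inactive PoS
# terminal and an available employee, or report that nothing can be opened.
def select_terminal_and_employee(pos_terminals, employees):
    """Return a (terminal, employee) pair to open, or None if unavailable.

    pos_terminals maps terminal id -> currently-active flag;
    employees maps employee name -> currently-busy flag.
    """
    idle_terminal = next(
        (t for t, active in pos_terminals.items() if not active), None)
    free_employee = next(
        (e for e, busy in employees.items() if not busy), None)
    if idle_terminal and free_employee:
        return idle_terminal, free_employee
    return None  # no terminal or no employee -> decision cannot proceed

terminals = {"PoS-1": True, "PoS-2": True, "PoS-5": False}  # active flags
staff = {"Dana": True, "Avi": False}                        # busy flags
assert select_terminal_and_employee(terminals, staff) == ("PoS-5", "Avi")
```

Returning `None` when either query fails mirrors the text: the identification of a concrete terminal and employee is incorporated into the decision only when both are obtainable.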
A data provider 308 may be a module able to respond to an information request received from a requestor module of another BI device (e.g., similar to the requestor module 307). For example, the BI device 300 may be implemented as a PoS terminal, and may receive from another BI device a request to indicate whether or not that PoS terminal is currently active (e.g., whether a cashier is logged-in). The data provider 308 may send back, to the requesting BI device, a signal indicating whether or not that PoS terminal is currently active.
An acknowledgement module 309 may generate an acknowledgement (ACK) signal indicating that an incoming BI command will be performed by the BI device 300. Alternatively, the acknowledgement module 309 may generate a negative acknowledgement (NACK) signal indicating that an incoming BI command will not be performed by the BI device 300, optionally indicating also a cause for the non-performance (e.g., an error code selected from a pre-defined list of error codes).
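A minimal sketch of this ACK/NACK exchange, with an error code drawn from a pre-defined list, might look as follows. The codes and the function are assumptions for illustration only:

```python
# Hypothetical sketch of the ACK/NACK responses: a NACK carries an error
# code selected from a pre-defined list explaining the non-performance.
ERROR_CODES = {1: "device busy", 2: "command not supported"}

def acknowledge(command_supported, busy=False):
    if busy:
        return ("NACK", 1)            # cause: device busy
    if not command_supported:
        return ("NACK", 2)            # cause: command not supported
    return ("ACK", None)              # command will be performed

assert acknowledge(command_supported=True) == ("ACK", None)
assert acknowledge(command_supported=False) == ("NACK", 2)
assert acknowledge(command_supported=True, busy=True) == ("NACK", 1)
```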
An operations modifier 310 may include a module able to modify the operation of one or more modules or components of the BI device 300, in response to a BI insight generated internally or autonomously within the BI device 300 (e.g., by the decision maker 305 or by the event identifier 306), or in response to a BI command received from another BI device. In a demonstrative example, if the sensor 301 captures images which inconclusively indicate a spill in aisle number two of the store, the decision maker 305 may require additional images or an improved view in order to reach a conclusive decision; and the operations modifier 310 may command the sensor 301 to pan, zoom or tilt in order to obtain additional images, which may be utilized by the decision maker 305 to reach a conclusive BI decision.
A data converter/reformatter 311 may be able to reformat data or BI insights into a format or data-structure that is common to multiple BI devices, thereby allowing inter-operability of multiple BI devices, and allowing efficient information exchange among multiple BI devices. This may ensure that BI commands, BI insights, or BI queries or information requests may be acted upon or responded to in an efficient and rapid manner.
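One plausible realization of such a common format is a shared record schema, sketched below in Python with JSON as the interchange encoding. The field names (`device`, `type`, `payload`) are assumptions for illustration, not a format defined by the specification:

```python
# Illustrative sketch of reformatting device-specific data into one common
# record structure, so any peer BI device can parse any other's output.
import json

def to_common_format(device_id, record_type, payload):
    return json.dumps({
        "device": device_id,
        "type": record_type,      # e.g., "insight", "command", "query"
        "payload": payload,
    }, sort_keys=True)

record = to_common_format("camera-211", "command",
                          {"action": "open PoS", "terminal": 5})
parsed = json.loads(record)       # any peer can decode the shared schema
assert parsed["device"] == "camera-211"
assert parsed["payload"]["terminal"] == 5
```

A fixed, self-describing encoding like this is what makes commands and queries act-upon-able quickly: receivers never need device-specific parsers.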
The BI device 300 may operate as a "master" device, capable of performing all or some of the operations described above, and further capable of commanding other BI devices to modify their operation, to take action(s), or to stop taking action(s). Alternatively, the device 300 may operate as a "slave" device, which may primarily sense or collect data, optionally perform local processing of data, and then transfer data or BI insights to another BI device (e.g., to a "master" BI device) for decision-making or action-taking. In some demonstrative embodiments, in a group of interconnected BI devices (e.g., in a store), one BI device may operate as a master BI device, while the other BI devices may operate as slave BI devices. Optionally, a BI device implemented as a slave device may have reduced features relative to a master device, for example, by omitting from a slave device decision-making units or data processing modules.
A BI device 300 may be integrated, partially or in whole, with another device, for example, a video camera, an audio capture device, a cellular geo-location module, a printer, a cordless phone, a corded phone, an elevator, a passenger elevator, a freight elevator, an escalator, a digital weight scale, a security camera, a cable box or set-top box, a streamer, a smart television, a smell detector or odor detector, a washing machine, a drier, an oven, a microwave oven, a refrigerator, a cash register, a PoS terminal, an amplifier, a speaker or a loudspeaker, a home or office automation system, and/or other appliances or devices able to capture data and/or process data. By utilizing the same principles, such appliances or devices may be used as components in a distributed BI system. Further, the BI device 300 may be integrated, in whole or in part, in an electrically powered home, commercial, or industrial appliance. The appliance may be a major or a small appliance, and its main function may be food storage or preparation, cleaning (such as clothes cleaning), or temperature control (environmental, food or water) such as heating or cooling. Examples of appliances are water heaters, HVAC systems, air conditioners, heaters, washing machines, clothes dryers, vacuum cleaners, microwave ovens, electric mixers, stoves, ovens, refrigerators, freezers, food processors, dishwashers, food blenders, beverage makers such as coffeemakers and iced-tea makers, answering machines, telephone sets, home cinema systems, HiFi systems, CD and DVD players, induction cookers, electric furnaces, trash compactors, and dehumidifiers.
The BI device 300 may consist of, or be integrated with, a battery-operated portable electronic device such as a notebook/laptop computer, a media player (e.g., MP3-based or video player), a cellular phone, a Personal Digital Assistant (PDA), an image processing device (e.g., a digital camera or a video recorder), and/or any other handheld computing device, or a combination of any of these devices. Alternatively or in addition, the BI device 300 may be integrated, in whole or in part, in furniture or clothes.
In one aspect, one of the sensors is an image sensor, for capturing an image (still or video). The controller responds to the characteristics or events extracted by image processing of the captured image or video. For example, the image processing may be face detection, face recognition, gesture recognition, compression or de-compression, or motion sensing. The image processing functionality may be in the BI device 300, in the BI server 201, or any combination thereof. In another aspect, one of the sensors may be a microphone for capturing a human voice. The controller responds to the characteristics or events extracted by voice processing of the captured audio. The voice processing functionality may include compression or de-compression, and may be in the BI device 300, in the BI server 201, or any combination thereof.
Any element capable of measuring or responding to a physical phenomenon may be used as a sensor. An appropriate sensor may be adapted for a specific physical phenomenon, such as a sensor responsive to temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current. A sensor may be an analog sensor having an analog signal output such as analog voltage or current, or may have continuously variable impedance. Alternatively or in addition, a sensor may have a digital signal output. A sensor may serve as a detector, notifying only of the presence of a phenomenon, such as by a switch, and may use a fixed or settable threshold level. A sensor may measure time-dependent or space-dependent parameters of a phenomenon. A sensor may measure time-dependencies of a phenomenon, such as the rate of change, time-integrated or time-average value, duty-cycle, frequency, or time period between events. A sensor may be a passive sensor, or an active sensor requiring an external source of excitation. The sensor may be semiconductor-based, and may be based on MEMS technology.
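Two of the sensor behaviors listed above, threshold detection and rate-of-change measurement, can be sketched in a few lines. This is a hypothetical Python illustration (function names and sample values are invented):

```python
# Minimal sketch of two sensor behaviors: a threshold detector (presence
# only) and a rate-of-change measurement over consecutive sampled readings.
def exceeds_threshold(reading, threshold):
    return reading > threshold            # detector: presence of a phenomenon

def rate_of_change(samples, dt_s):
    """Approximate time derivative from the last two samples."""
    return (samples[-1] - samples[-2]) / dt_s

temps = [20.0, 20.5, 22.5]                # e.g., temperature samples, 1 s apart
assert exceeds_threshold(temps[-1], 22.0)
assert rate_of_change(temps, dt_s=1.0) == 2.0
```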
A sensor may measure the amount of a property or of a physical quantity, or the magnitude relating to a physical phenomenon, body or substance. Alternatively or in addition, a sensor may be used to measure the time derivative thereof, such as the rate of change of the amount, the quantity or the magnitude. In the case of a space-related quantity or magnitude, a sensor may measure the linear density, surface density, or volume density, relating to the amount of the property per unit length, area, or volume, respectively. Alternatively or in addition, a sensor may measure the flux (or flow) of a property through a cross-section or surface boundary, the flux density, or the current. In the case of a scalar field, a sensor may measure the quantity gradient. A sensor may measure the amount of property per unit mass or per mole of substance. A single sensor may be used to measure two or more phenomena.
The sensor may be a thermoelectric sensor, for measuring, sensing or detecting the temperature (or the temperature gradient) of an object, which may be solid, liquid or gas. Such a sensor may be a thermistor (either PTC or NTC), a thermocouple, a quartz thermometer, or an RTD. The sensor may be based on a Geiger counter for detecting and measuring radioactivity or any other nuclear radiation. Light, photons, or other optical phenomena may be measured or detected by a photosensor or photodetector, used for measuring the intensity of visible or invisible light (such as infrared, ultraviolet, X-ray or gamma rays). A photosensor may be based on the photoelectric or the photovoltaic effect, such as a photodiode, a phototransistor, a solar cell or a photomultiplier tube. A photosensor may be a photoresistor based on photoconductivity, or a CCD where a charge is affected by the light. The sensor may be an electrochemical sensor used to measure, sense or detect a matter's structure, properties, composition, and reactions, such as a pH meter, gas detector, or gas sensor. Using semiconductor, oxidation, catalytic, infrared or other sensing or detection mechanisms, a gas detector may be used to detect the presence of a gas (or gases) such as hydrogen, oxygen or CO. The sensor may be a smoke detector for detecting smoke or fire, typically by an optical detection (photoelectric) or by a physical process (ionization).
The sensor may be a physiological sensor for measuring, sensing or detecting parameters of a live body, such as an animal or human body. Such a sensor may involve measuring of body electrical signals, such as an EEG or ECG sensor, a gas saturation sensor such as an oxygen saturation sensor, or a mechanical or physical parameter sensor such as a blood pressure meter. A sensor (or sensors) may be external to the sensed body, implanted inside the body, or may be wearable. The sensor may be an electroacoustic sensor for measuring, sensing or detecting sound, such as a microphone. Microphones are typically based on converting audible or inaudible (or both) incident sound to an electrical signal by measuring the vibration of a diaphragm or a ribbon. The microphone may be a condenser microphone, an electret microphone, a dynamic microphone, a ribbon microphone, a carbon microphone, or a piezoelectric microphone.
A sensor may be an image sensor for providing digital camera functionality, allowing an image (either as still images or as a video) to be captured, stored, manipulated and displayed. The image capturing hardware integrated with the sensor unit may contain a photographic lens (through a lens opening) focusing the required image onto a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens, for capturing the image and producing electronic image information representing the image. The image sensor may be based on Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS). The image may be converted into a digital format by an image sensor AFE (Analog Front End) and an image processor, commonly including an analog to digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. The unit may contain a video compressor, coupled between the analog to digital (A/D) converter and the transmitter for compressing the digital video data before transmission to the communication medium. The compressor may be used for lossy or non-lossy compression of the image information, for reducing the memory size and reducing the data rate required for the transmission over the communication medium. The compression may be based on a standard compression algorithm such as JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264, or ITU-T CCIR 601.
The unit may further provide a digital data video signal carrying digital video according to a digital video format, with a transmitter coupled between the port and the image processor for transmitting the digital data video signal to the communication medium. The digital video format may be based on one of: TIFF (Tagged Image File Format), RAW format, AVI (Audio Video Interleaved), DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards.
A sensor may be an electrical sensor used to measure electrical quantities or electrical properties. The electrical sensor may be conductively connected to the measured element. Alternatively or in addition, the electrical sensor may use non-conductive or non-contact coupling to the measured element, such as measuring a phenomenon associated with the measured quantity or property. The electrical sensor may be a current sensor or an ammeter (a.k.a. amperemeter) for measuring DC or AC (or any other waveform) electric current passing through a conductor or wire. The current sensor may be connected such that part or all of the measured electric current passes through the ammeter, such as a galvanometer or a hot-wire ammeter. An ammeter may be a current clamp or current probe, and may use the Hall effect or a current transformer concept for non-contact or non-conductive current measurement. The electrical sensor may be a voltmeter for measuring the DC or AC (or any other waveform) voltage, or any potential difference between two points. The voltmeter may be based on the current passing through a resistor according to Ohm's law, may be based on a potentiometer, or may be based on a bridge circuit.
A sensor may be a wattmeter measuring the magnitude of the active AC or DC power (or the supply rate of electrical energy). The wattmeter may be a bolometer, used for measuring the power of incident electromagnetic radiation via the heating of a material with a temperature-dependent electrical resistance. A sensor may be an electricity AC (single or multi-phase) or DC type meter (or electrical energy meter), that measures the amount of electrical energy consumed by a load. The electricity meter may be based on a wattmeter which accumulates or averages the readings, may be based on induction, or may be based on multiplying measured voltage and current.
An electrical sensor may be an ohmmeter for measuring the electrical resistance (or conductance), and may be a megohmmeter or a micro-ohmmeter. The ohmmeter may use Ohm's law to derive the resistance from voltage and current measurements, or may use a bridge such as a Wheatstone bridge. A sensor may be a capacitance meter for measuring capacitance. A sensor may be an inductance meter for measuring inductance. A sensor may be an impedance meter for measuring an impedance of a device or a circuit. A sensor may be an LCR meter, used to measure inductance (L), capacitance (C), and resistance (R). A meter may source a DC or an AC voltage, and use the ratio of the measured voltage and current (and their phase difference) through the tested device according to Ohm's law to calculate the resistance, the capacitance, the inductance, or the impedance (R=V/I). Alternatively or in addition, a meter may use a bridge circuit (such as a Wheatstone bridge), where variable calibrated elements are adjusted to detect a null. The measurement may use DC, a single frequency, or a range of frequencies.
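The V/I ratio used by such meters can be sketched in a few lines. The function name and the sample values below are illustrative only, not part of any claimed embodiment:

```python
import cmath

def impedance_from_measurement(v_rms, i_rms, phase_deg):
    """Complex impedance Z = V/I from RMS voltage, RMS current, and the
    measured phase difference between them (Ohm's law for AC circuits)."""
    magnitude = v_rms / i_rms                 # |Z| = V/I
    phase_rad = cmath.pi * phase_deg / 180.0  # degrees -> radians
    return cmath.rect(magnitude, phase_rad)   # Z = |Z| * e^(j*phase)

# A purely resistive load shows no phase shift: Z is real (5 ohms here)
z_resistive = impedance_from_measurement(10.0, 2.0, 0.0)
# A purely inductive load shows a +90 degree shift: Z is imaginary
z_inductive = impedance_from_measurement(10.0, 2.0, 90.0)
```

A DC measurement corresponds to the zero-phase case, where the result reduces to a plain resistance.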
The sensor may be a Time-Domain Reflectometer (TDR) used to characterize and locate faults in transmission lines, such as conductive or metallic lines, based on checking the reflection of a transmitted short rise-time pulse. Similarly, an optical TDR may be used to test optical fiber cables.
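The round-trip timing principle of a TDR reduces to a simple calculation. The default velocity factor below is a typical figure for coaxial cable and is an assumption for illustration only:

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def tdr_fault_distance_m(round_trip_time_s, velocity_factor=0.66):
    """Distance to a cable fault from the delay between the transmitted
    pulse and its reflection: the pulse travels out and back, so the
    one-way distance is half of (propagation speed * round-trip time)."""
    signal_speed = velocity_factor * C_VACUUM  # propagation speed in the cable
    return signal_speed * round_trip_time_s / 2.0
```

The same relation applies to an optical TDR, with the fiber's group velocity in place of the cable velocity factor.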
A sensor may be a scalar or a vector magnetometer for measuring H or B magnetic fields. The magnetometer may be based on a Hall effect sensor, magneto-diode, magneto-transistor, AMR magnetometer, GMR magnetometer, magnetic tunnel junction magnetometer, magneto-optical sensor, Lorentz force based MEMS sensor, Electron Tunneling based MEMS sensor, MEMS compass, Nuclear precession magnetic field sensor (a.k.a. Nuclear Magnetic Resonance - NMR), optically pumped magnetic field sensor, fluxgate magnetometer, search coil magnetic field sensor, or Superconducting Quantum Interference Device (SQUID) magnetometer.
A sensor may be a strain gauge, used to measure the strain, or any other deformation, of an object. The sensor may be based on deforming a metallic foil, on a semiconductor strain gauge (such as piezoresistors), on measuring the strain along an optical fiber, on a capacitive strain gauge, or on a vibrating or resonating tensioned wire. A sensor may be a tactile sensor, being sensitive to force or pressure, or being sensitive to a touch by an object, typically a human touch. A tactile sensor may be based on a conductive rubber, a lead zirconate titanate (PZT) material, a polyvinylidene fluoride (PVDF) material, a metallic capacitive element, or any combination thereof. A tactile sensor may be a tactile switch, which may be based on the human body conductance, using measurement of conductance or capacitance.
A sensor may be a piezoelectric sensor, where the piezoelectric effect is used to measure pressure, acceleration, strain or force, and may use transverse, longitudinal, or shear effect mode. A thin membrane may be used to transfer and measure pressure, while mass may be used for acceleration measurement. A piezoelectric sensor element material may be a piezoelectric ceramics (such as PZT ceramic) or a single crystal material. A single crystal material may be gallium phosphate, quartz, tourmaline, or Lead Magnesium Niobate-Lead Titanate (PMN-PT).
A sensor may be a motion sensor, and may include one or more accelerometers, which measure the absolute acceleration or the acceleration relative to freefall. The accelerometer may be a piezoelectric, piezoresistive, capacitive, MEMS, or electromechanical-switch accelerometer, measuring the magnitude and the direction of the device acceleration in a single-axis, 2-axis, or 3-axis (omnidirectional) configuration. Alternatively or in addition, the motion sensor may be based on an electrical tilt and vibration switch or any other electromechanical switch.
A sensor may be a force sensor, a load cell, or a force gauge (a.k.a. force gage), used to measure a force magnitude and/or direction, and may be based on a spring extension, a strain gauge deformation, a piezoelectric effect, or a vibrating wire. A sensor may be a driving or passive dynamometer, used to measure torque or any moment of force.
A sensor may be a pressure sensor (a.k.a. pressure transducer or pressure transmitter / sender) for measuring a pressure of gases or liquids, and for indirectly measuring other parameters such as fluid/gas flow, speed, water-level, and altitude. A pressure sensor may be a pressure switch. A pressure sensor may be an absolute pressure sensor, a gauge pressure sensor, a vacuum pressure sensor, a differential pressure sensor, or a sealed pressure sensor. The changes in pressure relative to altitude may be used for an altimeter, and the Venturi effect may be used to measure flow by a pressure sensor. Similarly, the depth of a submerged body or the fluid level of contents in a tank may be measured by a pressure sensor.
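The pressure-to-altitude relation used by such an altimeter is commonly approximated with the standard barometric formula; the constants below assume a standard atmosphere and are valid roughly within the troposphere:

```python
def pressure_to_altitude_m(pressure_pa, sea_level_pa=101_325.0):
    """Approximate altitude (meters) from static pressure (pascals) using
    the standard-atmosphere barometric formula, valid up to about 11 km."""
    return 44_330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))
```

At standard sea-level pressure the formula returns zero altitude, and lower measured pressure maps to higher altitude.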
A pressure sensor may be of a force collector type, where a force collector (such as a diaphragm, piston, Bourdon tube, or bellows) is used to measure strain (or deflection) due to applied force (pressure) over an area. Such a sensor may be based on the piezoelectric effect (a piezoresistive strain gauge), or may be of a capacitive or of an electromagnetic type. A pressure sensor may be based on a potentiometer, may be based on the changes in resonant frequency or in the thermal conductivity of a gas, or may use the changes in the flow of charged gas particles (ions).
A sensor may be a position sensor for measuring linear or angular position (or motion). A position sensor may be an absolute position sensor, or may be a displacement (relative or incremental) sensor, measuring a relative position, and may be an electromechanical sensor. A position sensor may be mechanically attached to the measured object, or alternatively may use a non-contact measurement.
A position sensor may be an angular position sensor, for measurements involving an angular position (or the rotation or motion) of a shaft, an axle, or a disk. The output of an absolute angular position sensor indicates the current position (angle) of the shaft, while an incremental or displacement sensor provides information about the change, the angular speed, or the motion of the shaft. An angular position sensor may be of an optical type, using reflective or interruption schemes, or may be of a magnetic type, such as based on variable reluctance (VR), an Eddy-Current Killed Oscillator (ECKO), Wiegand sensing, or Hall-effect sensing, or may be based on a rotary potentiometer. An angular position sensor may be transformer based, such as an RVDT, a resolver, or a synchro. An angular position sensor may be based on an absolute or incremental rotary encoder, and may be a mechanical or an optical rotary encoder, using binary or Gray encoding schemes.
A sensor may be an angular rate sensor, used to measure the angular rate, or the rotation speed, of a shaft, an axle, or a disc, and may be electromechanical (such as a centrifugal switch), MEMS based, laser based (such as a Ring Laser Gyroscope - RLG), or gyroscope (such as a fiber-optic gyro) based. Some gyroscopes use the measurement of the Coriolis acceleration to determine the angular rate. An angular rate sensor may be a tachometer, which may be based on measuring the centrifugal force, or based on optical, electric, or magnetic sensing of a slotted disk.
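A slotted-disk tachometer reading reduces to counting pulses over a time window; a minimal sketch, with illustrative names only:

```python
def rpm_from_pulses(pulse_count, slots_per_revolution, window_s):
    """Rotation speed in RPM from the number of slot pulses detected
    (optically, electrically, or magnetically) during a measurement window."""
    revolutions = pulse_count / slots_per_revolution
    return revolutions / window_s * 60.0  # revolutions/second -> revolutions/minute
```

For example, 100 pulses from a 10-slot disk in one second corresponds to 10 revolutions per second, i.e. 600 RPM.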
A position sensor may be a linear position sensor, for measuring a linear displacement or position, typically in a straight line, and may use a transformer principle such as an LVDT, or may be based on a resistive element such as a linear potentiometer. A linear position sensor may be an incremental or absolute linear encoder, and may employ optical, magnetic, capacitive, inductive, or eddy-current principles.
A sensor may be a mechanical or electrical motion detector (or an occupancy sensor), for discrete (on/off) or magnitude-based motion detection. A motion detector may be based on sound (acoustic sensors), opacity (optical and infrared sensors and video image processors), geomagnetism (magnetic sensors, magnetometers), reflection of transmitted energy (infrared laser radar, ultrasonic sensors, and microwave radar sensors), electromagnetic induction (inductive-loop detectors), or vibration (triboelectric, seismic, and inertia-switch sensors). Acoustic sensors may use an electric effect, inductive coupling, capacitive coupling, the triboelectric effect, the piezoelectric effect, fiber optic transmission, or radar intrusion sensing. An occupancy sensor is typically a motion detector integrated with a hardware- or software-based timing device.
A motion sensor may be a mechanically actuated switch or trigger, or may use passive or active electronic sensors, such as passive infrared sensors, ultrasonic sensors, a microwave sensor, or a tomographic detector. Alternatively or in addition, motion can be electronically identified using passive infrared (PIR) or laser optical detection or acoustical detection, or may use a combination of the technologies disclosed herein.
A sensor may be a humidity sensor, such as a hygrometer or a humidistat, and may respond to an absolute, relative, or specific humidity. The measurement may be based on optically detecting condensation, or may be based on changing the capacitance, resistance, or thermal conductivity of materials subjected to the measured humidity.
A sensor may be a clinometer for measuring angle (such as pitch or roll) of an object, typically with respect to a plane such as the earth ground plane. A clinometer may be based on an accelerometer, a pendulum, or on a gas bubble in liquid, or may be a tilt switch such as a mercury tilt switch for detecting inclination or declination with respect to a determined tilt angle.
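An accelerometer-based clinometer derives tilt from the direction of gravity in the sensor frame. The sketch below assumes a static (non-accelerating) 3-axis reading, so gravity is the only sensed acceleration:

```python
import math

def pitch_roll_deg(ax, ay, az):
    """Pitch and roll angles (degrees) from a static 3-axis accelerometer
    sample, where the only sensed acceleration is gravity."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat (gravity along +Z): both angles are zero
flat = pitch_roll_deg(0.0, 0.0, 1.0)
```

A tilt switch corresponds to comparing one of these angles against a determined threshold angle.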
A sensor may be a gas or liquid flow sensor, for measuring the volumetric or mass flow rate via a defined area or surface. A liquid flow sensor typically involves measuring the flow in a pipe or in an open conduit. A flow measurement may be based on a mechanical flow meter, such as a turbine flow meter, a Woltmann meter, a single jet meter, or a paddle wheel meter. Pressure-based meters may measure a pressure or a pressure differential based on Bernoulli's principle, such as a Venturi meter. The sensor may be an optical flow meter or may be based on the Doppler effect.
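The Venturi-meter measurement follows directly from Bernoulli's principle; the sketch below assumes an ideal, loss-free meter (no discharge coefficient):

```python
import math

def venturi_flow_m3s(delta_p_pa, density_kg_m3, pipe_area_m2, throat_area_m2):
    """Volumetric flow rate of an ideal Venturi meter from the pressure drop
    between the full pipe section (area A1) and the throat (area A2):
    Q = A2 * sqrt(2*dp / (rho * (1 - (A2/A1)**2)))."""
    area_ratio = throat_area_m2 / pipe_area_m2
    return throat_area_m2 * math.sqrt(
        2.0 * delta_p_pa / (density_kg_m3 * (1.0 - area_ratio ** 2))
    )
```

Zero pressure drop yields zero flow, and the flow grows with the square root of the measured differential pressure.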
A flow sensor may be an air flow sensor, for measuring the air or gas flow, such as through a surface (e.g., through a tube) or a volume, by actually measuring the air volume passing, or by measuring the actual speed of the air flow. In some cases, a pressure, typically a differential pressure, may be measured as an indicator for the air flow measurements. An anemometer is an air flow sensor primarily for measuring wind speed, and may be a cup anemometer, a windmill anemometer, or a hot-wire anemometer such as a CCA (Constant-Current Anemometer), a CVA (Constant-Voltage Anemometer), or a CTA (Constant-Temperature Anemometer). Sonic anemometers use ultrasonic sound waves to measure wind velocity. Air flow may be measured by a pressure anemometer of the plate or tube class.
A sensor may be a gyroscope, for measuring orientation in space, such as the conventional mechanical type, a MEMS gyroscope, a piezoelectric gyroscope, a FOG (Fiber-Optic Gyroscope), or a VSG (Vibrating Structure Gyroscope) type. A sensor may be a nanosensor, a solid-state, or an ultrasonic based sensor. A sensor may be an eddy-current sensor, where the measurement may be based on producing and / or measuring eddy-currents. The sensor may be a proximity sensor, such as a metal detector. A sensor may be a bulk or surface acoustic sensor, or may be an atmospheric sensor.
In one example, multiple sensors may be used, arranged as a sensor array (such as a linear sensor array), for improving the sensitivity, accuracy, resolution, and other parameters of the sensed phenomenon. The sensor array may be directional, and may better measure the parameters of the signal impinging on the array, such as the number, magnitudes, frequencies, Direction-Of-Arrival (DOA), distances, and speeds of the signals. The processing of the entire sensor array outputs, such as to obtain a single measurement or a single parameter, may be performed by a dedicated processor, which may be part of the sensor array assembly, may be performed in the processor of the BI device 300 or in the BI server 201, or any combination thereof. The same component may serve both as a sensor and as an actuator, such as during different times, and may be associated with the same or different phenomena. A sensor operation may be based on an external or integral mechanism for generating a stimulus or an excitation in order to influence or create a phenomenon. The mechanism may be controlled as an actuator or as part of the sensor.
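Combining the outputs of a sensor array into a single measurement can be as simple as averaging, which for independent noise improves the effective signal-to-noise ratio by roughly the square root of the number of sensors; a minimal sketch:

```python
def fuse_array_readings(readings):
    """Average the readings of a sensor array into one measurement;
    for independent noise the standard error shrinks as 1/sqrt(N)."""
    if not readings:
        raise ValueError("empty sensor array")
    return sum(readings) / len(readings)
```

More elaborate array processing (e.g., DOA estimation) weights the elements by geometry and phase, but the fusion step onto one reported value follows the same pattern.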
Any element designed for or capable of directly or indirectly affecting, changing, producing, or creating a physical phenomenon under an electric signal control may be used as an actuator. An appropriate actuator may be adapted for a specific physical phenomenon, such as an actuator responsive to temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, and electrical current. Typically a sensor may be used to measure a phenomenon affected by an actuator.
An actuator may be an analog actuator having an analog signal input such as an analog voltage or current, or may have a continuously variable impedance. Alternatively or in addition, an actuator may have a digital signal input. An actuator may affect time-dependent or space-dependent parameters of a phenomenon. An actuator may affect the time dependencies of a phenomenon, such as the rate of change, the time-integrated or time-averaged value, the duty-cycle, the frequency, or the time period between events. The actuator may be semiconductor-based, and may be based on MEMS technology.
An actuator may affect the amount of a property or of a physical quantity, or the magnitude relating to a physical phenomenon, body, or substance. Alternatively or in addition, an actuator may be used to affect the time derivative thereof, such as the rate of change of the amount, the quantity, or the magnitude. In the case of a space-related quantity or magnitude, an actuator may affect the linear density, surface density, or volume density, relating to the amount of property per volume. Alternatively or in addition, an actuator may affect the flux (or flow) of a property through a cross-section or surface boundary, the flux density, or the current. In the case of a scalar field, an actuator may affect the quantity gradient. An actuator may affect the amount of property per unit mass or per mole of substance. A single actuator may be used to affect two or more phenomena.
An actuator may be a light source used to emit light by converting electrical energy into light, where the luminous intensity may be fixed or may be controlled, commonly for illumination or indication purposes. An actuator may be used to activate or control the light emitted by a light source, being based on converting electrical energy or other energy to light. The light emitted may be a visible light, or an invisible light such as infrared, ultraviolet, X-ray, or gamma rays. A shade, reflector, enclosing globe, housing, lens, and other accessories may be used, typically as part of a light fixture, in order to control the illumination intensity, shape, or direction. Electrical sources of illumination commonly use a gas, a plasma (such as in arc and fluorescent lamps), an electrical filament, or Solid-State Lighting (SSL), where semiconductors are used. An SSL source may be a Light-Emitting Diode (LED), an Organic LED (OLED), a Polymer LED (PLED), or a laser diode.
A light source may consist of, or comprise, a lamp which may be an arc lamp, a fluorescent lamp, a gas-discharge lamp (such as a fluorescent lamp), or an incandescent light (such as a halogen lamp). An arc lamp is the general term for a class of lamps that produce light by an electric arc (voltaic arc). Such a lamp consists of two electrodes, first made from carbon but typically made today of tungsten, which are separated by a noble gas.
A motion actuator may be a rotary actuator that produces a rotary motion or torque, commonly applied to a shaft or axle. The motion produced by a rotary motion actuator may be either continuous rotation, such as in common electric motors, or movement to a fixed angular position, as for servos and stepper motors. A motion actuator may be a linear actuator that creates motion in a straight line. A linear actuator may be based on an intrinsically rotary actuator, by converting the rotary motion created by a rotary actuator using a screw, a wheel and axle, or a cam. A screw actuator may be a leadscrew, a screw jack, a ball screw, or a roller screw. A wheel-and-axle actuator operates on the principle of the wheel and axle, and may be a hoist, winch, rack-and-pinion, chain drive, belt drive, rigid chain, or rigid belt actuator. Similarly, a rotary actuator may be based on an intrinsically linear actuator, by converting a linear motion to a rotary motion, using the above or other mechanisms. Motion actuators may include a wide variety of mechanical elements and/or prime movers to change the nature of the motion provided by the actuating/transducing elements, such as levers, ramps, screws, cams, crankshafts, gears, pulleys, constant-velocity joints, or ratchets. A motion actuator may be part of a servomotor system.
A motion actuator may be a pneumatic actuator that converts compressed air into rotary or linear motion, and may comprise a piston, a cylinder, valves, or ports. Pneumatic actuators are commonly controlled by an input pressure to a control valve, and may be based on moving a piston in a cylinder. A motion actuator may be a hydraulic actuator using the pressure of the liquid in a hydraulic cylinder to provide force or motion. A hydraulic actuator may be a hydraulic pump, such as a vane pump, a gear pump, or a piston pump. A motion actuator may be an electric actuator where electrical energy is converted into motion, such as an electric motor. A motion actuator may be a vacuum actuator producing a motion based on vacuum pressure.
An electric motor may be a DC motor, which may be of a brushed, brushless, or uncommutated type. An electric motor may be a stepper motor, and may be a Permanent Magnet (PM) motor, a Variable Reluctance (VR) motor, or a hybrid synchronous stepper. An electric motor may be an AC motor, which may be an induction motor, a synchronous motor, or an eddy current motor. An AC motor may be a two-phase AC servo motor, a three-phase AC synchronous motor, or a single-phase AC induction motor, such as a split-phase motor, a capacitor-start motor, or a Permanent-Split Capacitor (PSC) motor. Alternatively or in addition, an electric motor may be an electrostatic motor, and may be MEMS based.
A rotary actuator may be a fluid power actuator, and a linear actuator may be a linear hydraulic actuator or a pneumatic actuator. A linear actuator may be a piezoelectric actuator, based on the piezoelectric effect, may be a wax motor, or may be a linear electrical motor, which may be a DC brush, a DC brushless, a stepper, or an induction motor type. A linear actuator may be a telescoping linear actuator. A linear actuator may be a linear electric motor, such as a linear induction motor (LIM), or a Linear Synchronous Motor (LSM).
A motion actuator may be a linear or rotary piezoelectric motor based on acoustic or ultrasonic vibrations. A piezoelectric motor may use piezoelectric ceramics such as Inchworm or PiezoWalk motors, may use Surface Acoustic Waves (SAW) to generate the linear or the rotary motion, or may be a Squiggle motor. Alternatively or in addition, an electric motor may be an ultrasonic motor. A linear actuator may be a micro- or nanometer comb-drive capacitive actuator. Alternatively or in addition, a motion actuator may be a Dielectric or Ionic based Electroactive Polymers (EAPs) actuator. A motion actuator may also be a solenoid, thermal bimorph, or a piezoelectric unimorph actuator.
An actuator may be a pump, typically used to move (or compress) fluids such as liquids, gases, or slurries, commonly by pressure or suction action, where the activating mechanism is often reciprocating or rotary. A pump may be a direct lift, impulse, displacement, valveless, velocity, centrifugal, vacuum, or gravity pump. A pump may be a positive displacement pump, such as a rotary-type positive displacement pump (e.g., an internal gear, screw, shuttle block, flexible vane or sliding vane, circumferential piston, helical twisted roots, or liquid ring vacuum pump), a reciprocating-type positive displacement pump (e.g., a piston or diaphragm pump), or a linear-type positive displacement pump (e.g., a rope pump or a chain pump), as well as a rotary lobe pump, a progressive cavity pump, a rotary gear pump, a piston pump, a diaphragm pump, a screw pump, a gear pump, a hydraulic pump, or a vane pump. A rotary positive displacement pump may be a gear pump, a screw pump, or a rotary vane pump. Reciprocating positive displacement pumps may be of the plunger pump, diaphragm pump, diaphragm valve, or radial piston pump type.
A pump may be an impulse pump, such as a hydraulic ram pump, a pulser pump, or an airlift pump. A pump may be a rotodynamic pump such as a velocity pump or a centrifugal pump. A centrifugal pump may be of a radial flow type, an axial flow type, or a mixed flow type.
An actuator may be an electrochemical or chemical actuator, used to produce, change, or otherwise affect a matter structure, properties, composition, process, or reactions, such as oxidation/reduction or an electrolysis process.
An actuator may be a sounder which converts electrical energy to sound waves transmitted through the air, an elastic solid material, or a liquid, usually by means of a vibrating or moving ribbon or diaphragm. The sound may be audible or inaudible (or both), and may be omnidirectional, unidirectional, bidirectional, or provide other directionality or polar patterns. A sounder may be an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker.
A sounder may be of an electromechanical type, such as an electric bell, a buzzer (or beeper), a chime, a whistle, or a ringer, and may be either an electromechanical or a ceramic-based piezoelectric sounder. The sounder may emit a single or multiple tones, and can be in continuous or intermittent operation.
The system may use the sounder to play digital audio content, either stored in, or received by, the sounder, the actuator unit, the router, the control server, or any combination thereof. The stored audio content may be either pre-recorded or produced by a synthesizer. A few digital audio files may be stored, to be selected by the control logic. Alternatively or in addition, the source of the digital audio may be a microphone serving as a sensor. In another example, the system uses the sounder for simulating the voice of a human being or for generating music. The music produced can emulate the sounds of a conventional acoustical music instrument, such as a piano, tuba, harp, violin, flute, guitar, and so forth. A talking human voice may be played by the sounder, either pre-recorded or produced by a human voice synthesizer; the sound may be a syllable, a word, a phrase, a sentence, a short story, or a long story, and can be based on speech synthesis or pre-recording, using a male or a female voice. A human speech may be produced using a hardware or software (or both) speech synthesizer, which may be Text-To-Speech (TTS) based. The speech synthesizer may be of a concatenative type, using unit selection, diphone synthesis, or domain-specific synthesis. Alternatively or in addition, the speech synthesizer may be of a formant type, and may be based on articulatory synthesis or on hidden Markov models (HMM).
An actuator may be used to generate an electric or magnetic field, and may be an electromagnetic coil or an electromagnet.
An actuator may be a display for presentation of visual data or information, commonly on a screen, and may consist of an array (e.g., a matrix) of light emitters or light reflectors, and may present text, graphics, image, or video. A display may be of a monochrome, gray-scale, or color type, and may be a video display. The display may be a projector (commonly by using multiple reflectors), or alternatively (or in addition) may have the screen integrated. A projector may be based on an Eidophor, Liquid Crystal on Silicon (LCoS or LCOS), or LCD, or may use Digital Light Processing (DLP™) technology, and may be MEMS based or be a virtual retinal display. A video display may support Standard-Definition (SD) or High-Definition (HD) standards, and may support 3D. The display may present the information as scrolling, static, bold, or flashing. The display may be an analog display, such as having NTSC, PAL, or SECAM formats, or an analog RGB, VGA (Video Graphics Array), SVGA (Super Video Graphics Array), SCART, or S-video interface, or may be a digital display, such as having an IEEE 1394 interface (a.k.a. FireWire™). Other digital interfaces that can be used are USB, SDI (Serial Digital Interface), HDMI (High-Definition Multimedia Interface), DVI (Digital Visual Interface), UDI (Unified Display Interface), DisplayPort, Digital Component Video, or DVB (Digital Video Broadcast) interfaces. Various user controls may include an on/off switch, a reset button, and others. Other exemplary controls involve display-associated settings such as contrast, brightness, and zoom.
A display may be a Cathode-Ray Tube (CRT) display or a Liquid Crystal Display (LCD). The LCD may be passive (such as CSTN or DSTN based) or active matrix, and may be a Thin Film Transistor (TFT) or LED-backlit LCD display. A display may be a Field Emission Display (FED), an Electroluminescent Display (ELD), a Vacuum Fluorescent Display (VFD), or may be an Organic Light-Emitting Diode (OLED) display, based on passive-matrix (PMOLED) or active-matrix (AMOLED) OLEDs.
A display may be based on an Electronic Paper Display (EPD), and be based on Gyricon technology, Electro-Wetting Display (EWD), or Electrofluidic display technology. A display may be a laser video display or a laser video projector, and may be based on a Vertical-External-Cavity Surface-Emitting-Laser (VECSEL) or a Vertical-Cavity Surface-Emitting Laser (VCSEL).
A display may be a segment display, such as a numerical or an alphanumerical display that can show digits, alphanumeric characters, words, arrows, symbols, or ASCII and non-ASCII characters. Examples are the seven-segment display (digits only), the fourteen-segment display, the sixteen-segment display, and the dot-matrix display.
An actuator may be a thermoelectric actuator such as a cooler or a heater for changing the temperature of a solid, liquid, or gas object, and may use conduction, convection, thermal radiation, or the transfer of energy by phase changes. A heater may be a radiator using radiative heating, a convector using convection, or a forced-convection heater. A thermoelectric actuator may be a heating or cooling heat pump, and may be an electrically powered, compression-based cooler using an electric motor to drive a refrigeration cycle. A thermoelectric actuator may be an electric heater, converting electrical energy into heat using resistance heating, or may be a dielectric heater. A thermoelectric actuator may be a solid-state active heat pump device based on the Peltier effect. A thermoelectric actuator may be an air cooler, using a compressor-based refrigeration cycle of a heat pump. An electric heater may be an induction heater.
An actuator unit may include a signal generator serving as an actuator for providing an electrical signal (such as a voltage or current), or coupled between the processor and the actuator for controlling the actuator. A signal generator may be an analog or a digital signal generator, may be based on software (or firmware), or may be a separate circuit or component. A signal generator may generate repeating or non-repeating electronic signals, and may include a digital-to-analog converter (DAC) to produce an analog output. Common waveforms are the sine wave, saw-tooth, step (pulse), square, and triangular waveforms. The generator may include some sort of modulation functionality such as Amplitude Modulation (AM), Frequency Modulation (FM), or Phase Modulation (PM). A signal generator may be an Arbitrary Waveform Generator (AWG) or a logic signal generator.
An actuator unit may include an electrical switch (or multiple switches) coupled between the processor and the actuator for activating the actuator. Two or more switches may be used, connected in series or in parallel. The switch may be integrated with the actuator (if separated from the actuator unit), with the actuator unit, or any combination thereof. In the above examples, a controller can affect the actuator (or load) activation by sending the actuator unit a message to activate the actuator by powering it, to deactivate the actuator operation by breaking the current flow thereto, or to shift the actuator between states. A switch is typically designed to open (breaking, interrupting), close (making), or change one or more electrical circuits under some type of external control, and may be an electromechanical device with one or more sets of electrical contacts having two or more states. The switch may be a 'normally open' or 'normally closed' type, or a changeover switch, which may be either a 'make-before-break' or a 'break-before-make' type. The switch contacts may have one or more poles and one or more throws, such as Single-Pole-Single-Throw (SPST), Single-Pole-Double-Throw (SPDT), Double-Pole-Double-Throw (DPDT), Double-Pole-Single-Throw (DPST), and Single-Pole-Changeover (SPCO). The switch may be an electrically operated switch such as an electromagnetic relay, which may be of a non-latching or a latching type. The relay may be a reed relay, or a solid-state or semiconductor-based relay, such as a Solid State Relay (SSR). A switch may be implemented using an electrical circuit, such as an open-collector or open-drain based circuit, a thyristor, a TRIAC, or an opto-isolator.
The image processing may include video enhancement such as video denoising, image stabilization, unsharp masking, and super-resolution. The image processing may include a Video Content Analysis (VCA), such as Video Motion Detection (VMD), video tracking, and egomotion estimation, as well as identification, behavior analysis and other forms of situation awareness, dynamic masking, motion detection, object detection, face recognition, automatic number plate recognition, tamper detection, video tracking, and pattern recognition.
The image processing may be used for non-verbal human control of the system, such as by hand posture or gesture recognition. The recognized hand posture or gesture may be used as input by the control logic in the controller, and thus enables humans to interface with the machine in ways sometimes described as Man-Machine Interfaces (MMI) or Human-Machine Interfaces (HMI) and to interact naturally without any mechanical devices, and thus to impact the system operation and the actuator commands and operation. An image-based recognition may use a single camera or a 3-D camera. A gesture recognition may be based on 3-D information of key elements of the body parts, or may be 2-D appearance-based. A 3-D model approach can use volumetric or skeletal models, or a combination of the two.
The sensor may provide a digital output, and the sensor output may include an electrical switch, where the electrical switch state is responsive to the measured phenomenon magnitude versus a threshold, which may be set by the actuator. The sensor may provide an analog output, and the first device may comprise an analog-to-digital converter coupled to the analog output, for converting the sensor output to digital data. The first device may comprise a signal conditioning circuit coupled to the sensor output, and the signal conditioning circuit may comprise an amplifier, a voltage or current limiter, an attenuator, a delay line or circuit, a level translator, a galvanic isolator, an impedance transformer, a linearization circuit, a calibrator, a passive filter, an active filter, an adaptive filter, an integrator, a deviator, an equalizer, a spectrum analyzer, a compressor or a de-compressor, a coder, a decoder, a modulator, a demodulator, a pattern recognizer, a smoother, a noise remover, an averaging circuit, or an RMS circuit. The sensor may be operative to sense a time-dependent characteristic of the sensed phenomenon, and may be operative to respond to a time-integrated value, an average, an RMS (Root Mean Square) value, a frequency, a period, a duty-cycle, or a time-derivative of the sensed phenomenon. The first device, the router, or the control server may be operative to calculate or provide a time-dependent characteristic, such as a time-integrated value, an average, an RMS (Root Mean Square) value, a frequency, a period, a duty-cycle, or a time-derivative, of the sensed phenomenon. The sensor may be operative to sense a space-dependent characteristic of the sensed phenomenon, such as a pattern, a linear density, a surface density, a volume density, a flux density, a current, a direction, a rate of change in a direction, or a flow, of the sensed phenomenon.
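The time-dependent characteristics listed above (average, RMS, duty-cycle) can be computed from a buffer of sensor samples; a minimal sketch, with illustrative names only:

```python
import math

def signal_stats(samples, threshold=0.0):
    """Average, RMS (Root Mean Square) value, and duty-cycle (fraction of
    samples above the threshold) of a sampled sensor signal."""
    n = len(samples)
    average = sum(samples) / n
    rms = math.sqrt(sum(s * s for s in samples) / n)
    duty_cycle = sum(1 for s in samples if s > threshold) / n
    return average, rms, duty_cycle
```

For a symmetric square wave of unit amplitude, the average is zero, the RMS value is one, and the duty-cycle against a zero threshold is one half.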
The first device, the router, or the control server may be operative to calculate or provide a space-dependent characteristic of the sensed phenomenon, such as a pattern, a linear density, a surface density, a volume density, a flux density, a current, a direction, a rate of change in a direction, or a flow, of the sensed phenomenon.
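As a non-limiting illustration, the time-dependent characteristics named above (average, RMS, time-integral, time-derivative) may be derived from a sampled sensor signal along the following lines, where the sample values and the sampling interval are purely illustrative:

```python
import math

# Sketch: deriving time-dependent characteristics of a uniformly
# sampled sensor signal, as the first device, the router, or the
# control server might compute them.

def average(samples):
    return sum(samples) / len(samples)

def rms(samples):
    """Root Mean Square value of the sampled signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def time_integral(samples, dt):
    """Rectangle-rule approximation of the time integral."""
    return sum(samples) * dt

def time_derivative(samples, dt):
    """Finite-difference approximation between consecutive samples."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

samples = [0.0, 1.0, 2.0, 1.0, 0.0]   # illustrative samples, dt = 0.1 s
print(average(samples))               # 0.8
print(time_integral(samples, 0.1))    # 0.4
```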
The actuator may affect, create, or change a phenomenon associated with an object, and the object may be gas, air, liquid, or solid. The actuator may be controlled by a digital input, and may be an electrical actuator powered by electrical energy. The actuator may be controlled by an analog input, and the second device may comprise a digital to analog converter coupled to the analog input, for converting digital data to an actuator input signal. The second device may comprise a signal conditioning circuit coupled to the actuator input, and the signal conditioning circuit may comprise an amplifier, a voltage or current limiter, an attenuator, a delay line or circuit, a level translator, a galvanic isolator, an impedance transformer, a linearization circuit, a calibrator, a passive filter, an active filter, an adaptive filter, an integrator, a deviator, an equalizer, a spectrum analyzer, a compressor or a de-compressor, a coder, a decoder, a modulator, a demodulator, a pattern recognizer, a smoother, a noise remover, an average circuit, or an RMS circuit. The actuator may be operative to affect a time-dependent characteristic such as a time-integrated, an average, an RMS (Root Mean Square) value, a frequency, a period, a duty-cycle, or a time-derivative, of the sensed phenomenon. The actuator may be operative to affect or change a space-dependent characteristic of the phenomenon, such as a pattern, a linear density, a surface density, a volume density, a flux density, a current, a direction, a rate of change in a direction, or a flow, of the sensed phenomenon. The second device, the router, or the control server may be operative to affect a space-dependent characteristic such as a pattern, a linear density, a surface density, a volume density, a flux density, a current, a direction, a rate of change in a direction, or a flow, of the phenomenon.
The sensor may be a thermoelectric sensor that senses or responds to a temperature or a temperature gradient of an object using conduction, convection, or radiation, and may consist of, or comprise, a Positive Temperature Coefficient (PTC) thermistor, a Negative Temperature Coefficient (NTC) thermistor, a thermocouple, a quartz crystal, or a Resistance Temperature Detector (RTD). A radiation-based sensor may respond to radioactivity, nuclear radiation, alpha particles, beta particles, or gamma rays, and may be based on gas ionization.
The sensor may be a photoelectric sensor that responds to a visible or an invisible light or both, such as infrared, ultraviolet, X-rays, or gamma rays. The photoelectric sensor may be based on the photoelectric or photovoltaic effect, and may consist of, or comprise, a semiconductor component such as a photodiode, a phototransistor, or a solar cell. The photoelectric sensor may be based on a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element. The sensor may be a photosensitive image sensor array comprising multiple photoelectric sensors, and may be operative for capturing an image and producing electronic image information representing the image, and may comprise one or more optical lenses for focusing the received light and mechanically oriented to guide the image, and the image sensor may be disposed approximately at an image focal point plane of the one or more optical lenses for properly capturing the image. An image processor may be coupled to the image sensor for providing a digital data video signal according to a digital video format, the digital video signal carrying digital data video based on the captured images, and the digital video format may be according to, or based on, one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards. A video compressor may be coupled to the image sensor for lossy or non-lossy compressing of the digital data video, and may be based on a standard compression algorithm such as JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264, or ITU-T CCIR 601. The sensor may be an electrochemical sensor and may respond to an object chemical structure, properties, composition, or reactions.
The electrochemical sensor may be a pH meter or may be a gas sensor responding to the presence of radon, hydrogen, oxygen, or Carbon-Monoxide (CO). The electrochemical sensor may be a smoke, a flame, or a fire detector, and may be based on optical detection or on ionization for responding to combustible, flammable, or toxic gas.
The sensor may be a physiological sensor and may respond to parameters associated with a live body, and may be external to the sensed body, implanted inside the sensed body, attached to the sensed body, or wearable on the sensed body. The physiological sensor may respond to body electrical signals such as an Electroencephalography (EEG) or an Electrocardiography (ECG) sensor, or may respond to oxygen saturation, gas saturation, or blood pressure.
The sensor may be an electroacoustic sensor and may respond to a sound, such as inaudible or audible audio. The electroacoustic sensor may be an omnidirectional, unidirectional, or bidirectional microphone, may be based on sensing the incident-sound-induced motion of a diaphragm or a ribbon, and may consist of, or comprise, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
The sensor may be an electric sensor and may respond to or measure an electrical characteristic or electrical phenomenon quantity, and may be conductively, non-conductively, or non-contact couplable to the sensed element. The electrical sensor may be responsive to Alternating Current (AC) or Direct Current (DC), and may be an ammeter and respond to an electrical current passing through a conductor or wire. The ammeter may consist of, or comprise, a galvanometer, a hot-wire ammeter, a current clamp, or a current probe. Alternatively or in addition, the electrical sensor may be a voltmeter and may respond to or measure an electrical voltage. The voltmeter may consist of, or comprise, an electrometer, a resistor, a potentiometer, or a bridge circuit. The electrical sensor may be a wattmeter such as an electricity meter that responds to electrical energy, and may measure or respond to active electrical power. The wattmeter may be based on induction, or may be based on multiplying measured voltage and current.
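As a non-limiting illustration of the latter wattmeter approach, the active power is the mean of the instantaneous product of simultaneously sampled voltage and current; the line voltage, current, and sample count below are illustrative values:

```python
import math

# Sketch of an electronic wattmeter: active power is the average of the
# instantaneous product v(t) * i(t) over a whole number of cycles.

def active_power(voltages, currents):
    """Mean of the instantaneous voltage-current product."""
    return sum(v * i for v, i in zip(voltages, currents)) / len(voltages)

# One mains cycle sampled at 20 points: 230 V RMS, 5 A RMS, in phase.
n = 20
v = [230 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]
i = [5 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(active_power(v, i)))  # 1150  (230 V * 5 A, unity power factor)
```

With a reactive load the current samples would be phase-shifted, and the same computation would yield the lower active (real) power automatically.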
The electrical sensor may be an impedance meter and may respond to the impedance of the sensed element, such as a bridge circuit or an ohmmeter, and may be based on supplying a current or a voltage and respectively measuring a voltage or a current. The impedance meter may be a capacitance or an inductance meter (or both) and may respond to the capacitance or the inductance of the sensed element, measuring at a single frequency or at multiple frequencies. The electrical sensor may be a Time-Domain Reflectometer (TDR) and may respond to the impedance changes along a conductive transmission line, such as an optical TDR that may respond to the changes along an optical transmission line.
The sensor may be a magnetic sensor and may respond to an H or B magnetic field, and may consist of, or may be based on, a Hall effect sensor, a MEMS, a magneto-diode, a magneto-transistor, an AMR magnetometer, a GMR magnetometer, a magnetic tunnel junction magnetometer, a nuclear precession magnetic field sensor, an optically pumped magnetic field sensor, a fluxgate magnetometer, a search coil magnetic field sensor, or a Superconducting Quantum Interference Device (SQUID) magnetometer. The magnetic sensor may be MEMS based, and may be a Lorentz force based MEMS sensor or an electron tunneling based MEMS sensor.
The sensor may be a tactile sensor and may respond to a human body touch, and may be based on a conductive rubber, a lead zirconate titanate (PZT) material, a polyvinylidene fluoride (PVDF) material, a metallic capacitive element, or any combination thereof.
The sensor may be a single-axis, 2-axis, or 3-axis motion sensor and may respond to the magnitude, direction, or both, of the sensor motion. The motion sensor may be a piezoelectric, a piezoresistive, a capacitive, or a MEMS accelerometer and may respond to the absolute acceleration or the acceleration relative to freefall. The motion sensor may be an electromechanical switch and may consist of, or comprise, an electrical tilt switch or a vibration switch.
The sensor may be a force sensor and may respond to the magnitude, direction, or both, of a force, and may be based on a spring extension, a strain gauge deformation, a piezoelectric effect, or a vibrating wire. The force sensor may be a dynamometer that responds to a torque or to a moment of the force.
The sensor may be a pressure sensor and may respond to a pressure of a gas or a liquid, and may consist of, or comprise, an absolute pressure sensor, a gauge pressure sensor, a vacuum pressure sensor, a differential pressure sensor, or a sealed pressure sensor. The pressure sensor may be based on a force collector, the piezoelectric effect, a capacitive sensor, an electromagnetic sensor, or a frequency resonator sensor.
The sensor may be an absolute, a relative displacement, or an incremental position sensor, and may respond to a linear or angular position, or motion, of a sensed element. The position sensor may be an optical type or a magnetic type angular position sensor, and may respond to an angular position or the rotation of a shaft, an axle, or a disk. The angular position sensor may be based on a variable-reluctance (VR), an Eddy-current killed oscillator (ECKO), Wiegand sensing, or Hall-effect sensing, and may be transformer based such as an RVDT, a resolver, or a synchro. The angular position sensor may be an electromechanical type such as an absolute or an incremental, mechanical or optical, rotary encoder. The angular position sensor may be an angular rate sensor and may respond to the angular rate, or the rotation speed, of a shaft, an axle, or a disc, and may consist of, or comprise, a gyroscope, a tachometer, a centrifugal switch, a Ring Laser Gyroscope (RLG), or a fiber-optic gyro. The position sensor may be a linear position sensor and may respond to a linear displacement or position along a line, and may consist of, or comprise, a transformer, an LVDT, a linear potentiometer, or an incremental or absolute linear encoder.
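As a non-limiting illustration of the incremental rotary encoder mentioned above, a quadrature encoder emits two channels (A and B) 90 degrees out of phase; the direction of each transition between successive (A, B) states gives the direction of rotation, and counting transitions gives the relative angular position. The sketch below is one common way to decode such states:

```python
# Sketch: decoding an incremental quadrature rotary encoder.
# Channels A and B follow a 2-bit Gray-code sequence; the step between
# successive (A, B) states gives direction, and counting steps gives
# relative position.

CW_ORDER = [(0, 0), (0, 1), (1, 1), (1, 0)]   # one clockwise cycle

def decode(states):
    """Accumulate a signed position count from (A, B) state samples."""
    position = 0
    for prev, curr in zip(states, states[1:]):
        step = (CW_ORDER.index(curr) - CW_ORDER.index(prev)) % 4
        if step == 1:
            position += 1   # one step clockwise
        elif step == 3:
            position -= 1   # one step counter-clockwise
        # step == 0: no movement; step == 2: invalid (missed transition)
    return position

cw = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]   # one full cycle clockwise
print(decode(cw))        # 4
print(decode(cw[::-1]))  # -4
```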
The sensor may be a motion detector and may respond to a motion of an element, and may be based on sound, geomagnetism, reflection of a transmitted energy, electromagnetic induction, or vibration. The motion detector may consist of, or comprise, a mechanically-actuated switch.
The sensor may be a strain gauge and may respond to the deformation of an object, and may be based on a metallic foil, a semiconductor, an optical fiber, vibrating or resonating of a tensioned wire, or a capacitance meter. The sensor may be a hygrometer and may respond to an absolute, relative, or specific humidity, and may be based on optically detecting condensation, or based on changing the capacitance, resistance, or thermal conductivity of materials subjected to the measured humidity. The sensor may be a clinometer and may respond to inclination or declination, and may be based on an accelerometer, a pendulum, a gas bubble in liquid, or a tilt switch.
The sensor may be a flow sensor and may measure the volumetric or mass flow rate via a defined area, volume, or surface. The flow sensor may be a liquid flow sensor and may measure the liquid flow in a pipe or in an open conduit. The liquid flow sensor may be a mechanical flow meter and may consist of, or comprise, a turbine flow meter, a Woltmann meter, a single jet meter, or a paddle wheel meter. The liquid flow sensor may be a pressure flow meter based on measuring an absolute pressure or a pressure differential. The flow sensor may be a gas or an air flow sensor such as an anemometer for measuring wind or air speed, and may measure the flow through a surface, a tube, or a volume, and may be based on measuring the air volume passing in a time period. The anemometer may consist of, or comprise, a cup anemometer, a windmill anemometer, a pressure anemometer, a hot-wire anemometer, or a sonic anemometer.
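As a non-limiting illustration, a cup anemometer reports a rotation rate, a calibration factor converts that rate to a speed, and multiplying the speed by the cross-sectional area of the measured surface gives a volumetric flow rate. The calibration factor and duct area below are hypothetical values chosen only for the example:

```python
# Sketch: converting a cup anemometer's rotation rate to wind speed and
# then to a volumetric flow rate (calibration and area are illustrative).

def wind_speed(rotations_per_s, calibration_m_per_rotation=0.75):
    """Wind speed in m/s from rotation rate (assumed linear calibration)."""
    return rotations_per_s * calibration_m_per_rotation

def volumetric_flow(speed_m_s, area_m2):
    """Volumetric flow rate in m^3/s through a surface of given area."""
    return speed_m_s * area_m2

speed = wind_speed(4.0)             # 3.0 m/s
print(volumetric_flow(speed, 0.5))  # 1.5 m^3/s through a 0.5 m^2 duct
```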
The sensor may be a gyroscope for measuring orientation in space, and may consist of, or comprise, a MEMS, a piezoelectric, a FOG, or a VSG gyroscope, and may be based on a conventional mechanical type, a nanosensor, a crystal, or a semiconductor.
The sensor may be an image sensor for capturing an image or video, and the system may include an image processor for recognition of a pattern, and the control logic may be operative to respond to the recognized pattern such as appearance-based analysis of hand posture or gesture recognition. The system may comprise an additional image sensor, and the control logic may be operative to respond to the additional image sensor such as to cooperatively capture a 3-D image and for identifying the gesture recognition from the 3-D image, based on volumetric or skeletal models, or a combination thereof.
The sensor may be an image sensor for capturing still or video image, and the sensor or the system may comprise an image processor having an output for processing the captured image (still or video). The image processor (hardware or software based, or a hardware/software combination) may be encased entirely or in part in the first device, the router, the control server, or any combination thereof, and the control logic may respond to the image processor output. The image sensor may be a digital video sensor for capturing digital video content, and the image processor may be operative for enhancing the video content such as by image stabilization, unsharp masking, or super-resolution, or for Video Content Analysis (VCA) such as Video Motion Detection (VMD), video tracking, egomotion estimation, identification, behavior analysis, situation awareness, dynamic masking, motion detection, object detection, face recognition, automatic number plate recognition, tamper detection, or pattern recognition. The image processor may be operative for detecting a location of an element, and may be operative for detecting and counting the number of elements in the captured image, such as human body parts (such as a human face or a human hand) in the captured image. An example of image processing for counting people is described in U.S. Patent No. 7,466,844 to Arun Ramaswamy et al., entitled: "Methods and Apparatus to Count People Appearing in an Image", which is incorporated in its entirety for all purposes as if fully set forth herein.
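As a non-limiting illustration of Video Motion Detection (VMD), frame differencing compares two consecutive grayscale frames and flags motion when enough pixels change by more than a threshold. The frames, thresholds, and grid size below are illustrative:

```python
# Sketch of Video Motion Detection (VMD) by frame differencing: pixels
# whose grayscale value changes by more than pixel_threshold between two
# frames count as "changed"; min_pixels changed pixels raise a motion flag.

def motion_detected(prev_frame, curr_frame, pixel_threshold=25, min_pixels=3):
    changed = sum(
        1
        for row_p, row_c in zip(prev_frame, curr_frame)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > pixel_threshold
    )
    return changed >= min_pixels

frame1 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame2 = [[10, 10, 10], [200, 200, 200], [200, 10, 10]]  # object enters
print(motion_detected(frame1, frame2))  # True
print(motion_detected(frame1, frame1))  # False
```

Production VCA would add background modeling and noise filtering, but the core decision (thresholded per-pixel change, aggregated over the frame) is the same.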
The actuator may be a light source that emits visible or non-visible light (infrared, ultraviolet, X-rays, or gamma rays) such as for illumination or indication. The actuator may comprise a shade, a reflector, an enclosing globe, or a lens, for manipulating the emitted light. The light source may be an electric light source for converting electrical energy into light, and may consist of, or comprise, a lamp, such as an incandescent, a fluorescent, or a gas discharge lamp. The electric light source may be based on Solid-State Lighting (SSL) such as a Light Emitting Diode (LED), which may be an Organic LED (OLED), a polymer LED (PLED), or a laser diode. The actuator may be a chemical or electrochemical actuator, and may be operative in producing, changing, or affecting a matter structure, properties, composition, process, or reactions, such as producing, changing, or affecting an oxidation/reduction or an electrolysis reaction. The actuator may be a motion actuator and may cause linear or rotary motion, or may comprise a conversion mechanism (which may be based on a screw, a wheel and axle, or a cam) for converting to rotary or linear motion. The conversion mechanism may be based on a screw, and the system may include a leadscrew, a screw jack, a ball screw or a roller screw, or may be based on a wheel and axle, and the system may include a hoist, a winch, a rack and pinion, a chain drive, a belt drive, a rigid chain, or a rigid belt. The motion actuator may comprise a lever, a ramp, a screw, a cam, a crankshaft, a gear, a pulley, a constant-velocity joint, or a ratchet, for affecting the motion produced. The motion actuator may be a pneumatic actuator, a hydraulic actuator, or an electrical actuator. The motion actuator may be an electrical motor such as a brushed, a brushless, or an uncommutated DC motor, or a Permanent Magnet (PM) motor, a Variable Reluctance (VR) motor, or a hybrid synchronous stepper DC motor.
The electrical motor may be an induction motor, a synchronous motor, or an eddy current AC motor. The AC motor may be a single-phase AC induction motor, a two-phase AC servo motor, or a three-phase AC synchronous motor, and may be a split-phase motor, a capacitor-start motor, or a Permanent-Split Capacitor (PSC) motor. The electrical motor may be an electrostatic motor, a piezoelectric actuator, or a MEMS-based motor.
The motion actuator may be a linear hydraulic actuator, a linear pneumatic actuator, or a linear electric motor such as linear induction motor (LIM) or a Linear Synchronous Motor (LSM). The motion actuator may be a piezoelectric motor, a Surface Acoustic Wave (SAW) motor, a Squiggle motor, an ultrasonic motor, or a micro- or nanometer comb-drive capacitive actuator, a Dielectric or Ionic based Electroactive Polymers (EAPs) actuator, a solenoid, a thermal bimorph, or a piezoelectric unimorph actuator.
The actuator may be operative to move, force, or compress liquid, gas, or slurry, and may be a compressor or a pump. The pump may be a direct lift, an impulse, a displacement, a valveless, a velocity, a centrifugal, a vacuum, or a gravity pump. The pump may be a positive displacement pump such as a rotary lobe, a progressive cavity, a rotary gear, a piston, a diaphragm, a screw, a gear, a hydraulic, or a vane pump. The positive displacement pump may be a rotary-type positive displacement pump such as an internal gear, a screw, a shuttle block, a flexible vane, a sliding vane, a rotary vane, a circumferential piston, a helical twisted roots, or a liquid ring vacuum pump. The positive displacement pump may be a reciprocating-type positive displacement pump such as a piston, a diaphragm, a plunger, a diaphragm valve, or a radial piston pump. The positive displacement pump may be a linear-type positive displacement pump such as a rope-and-chain pump. The pump may be an impulse pump such as a hydraulic ram, a pulser, or an airlift pump. The pump may be a rotodynamic pump, such as a velocity pump or a centrifugal pump, that may be a radial flow, an axial flow, or a mixed flow pump.
The actuator may be a sounder for converting an electrical energy to emitted audible or inaudible sound waves, emitted as an omnidirectional, unidirectional, or bidirectional pattern. The sound may be audible, and the sounder may be an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or a planar magnetic loudspeaker, or a bending wave loudspeaker. The sounder may be electromechanical or ceramic based, may be operative to emit a single or multiple tones, and may be operative for continuous or intermittent operation. The sounder may be an electric bell, a buzzer (or beeper), a chime, a whistle, or a ringer. The sounder may be a loudspeaker, and the system may be operative to play one or more digital audio content files (which may include a pre-recorded audio) stored entirely or in part in the second device, the router, or the control server. The system may comprise a synthesizer for producing the digital audio content. The sensor may be a microphone for capturing the digital audio content to be played by the sounder. The control logic or the system may be operative to select one of the digital audio content files, and may be operative for playing the selected file by the sounder. The digital audio content may be music, and may include the sound of an acoustical musical instrument such as a piano, a tuba, a harp, a violin, a flute, or a guitar. The digital audio content may be a male or female human voice saying a syllable, a word, a phrase, a sentence, a short story, or a long story. The system may comprise a speech synthesizer (such as a Text-To-Speech (TTS) based) for producing a human speech, being part of the second device, the router, the control server, or any combination thereof. The speech synthesizer may be a concatenative type, and may use unit selection, diphone synthesis, or domain-specific synthesis.
Alternatively or in addition, the speech synthesizer may be a formant type, articulatory synthesis based, or hidden Markov models (HMM) based.
The actuator may be a monochrome, grayscale, or color display for visually presenting information, and may consist of an array of light emitters or light reflectors. Alternatively or in addition, the display may be a visual retinal display or a projector based on an Eidophor, Liquid Crystal on Silicon (LCoS or LCOS), LCD, MEMS, or Digital Light Processing (DLP™) technology. The display may be a video display that may support Standard-Definition (SD) or High-Definition (HD) standards, and may be a 3D video display. The display may be capable of scrolling, static, bold, or flashing the presented information. The display may be an analog display having an analog video input interface such as NTSC, PAL, or SECAM formats, or an analog interface such as RGB, VGA (Video Graphics Array), SVGA (Super Video Graphics Array), SCART, or S-video interface. Alternatively or in addition, the display may be a digital display having a digital input interface such as IEEE 1394, FireWire™, USB, SDI (Serial Digital Interface), HDMI (High-Definition Multimedia Interface), DVI (Digital Visual Interface), UDI (Unified Display Interface), DisplayPort, Digital Component Video, or DVB (Digital Video Broadcast) interface. The display may be a Liquid Crystal Display (LCD) display, a Thin Film Transistor (TFT), or an LED-backlit LCD display, and may be based on a passive or an active matrix. The display may be a Cathode-Ray Tube (CRT), a Field Emission Display (FED), an Electronic Paper Display (EPD) display (based on Gyricon technology, Electro-Wetting Display (EWD), or Electrofluidic display technology), a laser video display (based on a Vertical-External-Cavity Surface-Emitting-Laser (VECSEL) or a Vertical-Cavity Surface-Emitting Laser (VCSEL)), an Electroluminescent Display (ELD), a Vacuum Fluorescent Display (VFD), or an Organic Light-Emitting Diode (OLED) display, such as a passive-matrix (PMOLED) or an active-matrix (AMOLED) OLED display.
The display may be a segment display (such as a seven-segment display, a fourteen-segment display, a sixteen-segment display, or a dot matrix display), and may be operative to display only digits, alphanumeric characters, words, characters, arrows, symbols, ASCII, non-ASCII characters, or any combination thereof.
The actuator may be a thermoelectric actuator (such as an electric thermoelectric actuator) and may be a heater or a cooler, and may be operative for affecting or changing the temperature of a solid, a liquid, or a gas object. The thermoelectric actuator may be coupled to the object by conduction, convection, forced convection, thermal radiation, or by the transfer of energy by phase changes. The thermoelectric actuator may include a heat pump, or may be a cooler based on an electric motor based compressor for driving a refrigeration cycle. The thermoelectric actuator may be an induction heater, may be an electric heater such as a resistance heater or a dielectric heater, or may be solid-state based such as an active heat pump device based on the Peltier effect. The actuator may be an electromagnetic coil or an electromagnet and may be operative for generating a magnetic or an electric field.
A redundancy may be used in order to improve the accuracy, reliability, or availability. The redundancy may be implemented where two or more components may be used for the same functionality. The components may be similar, substantially or fully the same, identical, different, substantially different, or distinct from each other, or any combination thereof. The redundant components may be concurrently operated, allowing for improved robustness and allowing for overcoming a single point of failure (SPOF), or alternatively one or more of the components serves as a backup. The redundancy may be a standby redundancy, which may be 'Cold Standby' or 'Hot Standby'. In the case where three redundant components are used, Triple Modular Redundancy (TMR) may be used, and Quadruple Modular Redundancy (QMR) may be used in the case of four components. A 1:N redundancy logic may be used for three or more components.
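As a non-limiting illustration, Triple Modular Redundancy (TMR) masks a single faulty component by majority voting over three redundant outputs:

```python
from collections import Counter

# Sketch of Triple Modular Redundancy (TMR): three redundant components
# produce a result, and a majority voter masks one faulty output.

def tmr_vote(a, b, c):
    """Return the majority value of three redundant outputs.

    If all three disagree, no single fault can be masked, so the voter
    reports the failure instead of guessing.
    """
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        raise ValueError("no majority: more than one component failed")
    return value

print(tmr_vote(42, 42, 17))  # 42 -- the single faulty component is outvoted
```

QMR and 1:N schemes generalize the same idea to four or more components, trading hardware cost for a higher tolerated fault count.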
A sensor redundancy involves using two or more sensors sensing the same phenomenon. One of the two may be used, or all the sensors may be used together, such as for averaging measurements for improved accuracy. Two or more data paths may be available in the system between the system elements, where only one of them may be used, or alternatively all the data paths may be used together, such as for improving the available bandwidth, throughput, and delay.
In one example two or more sensors may be used for sensing the same (or substantially the same) phenomenon. The two (or more) sensors may be part of, associated with, or connected to the same BI device. Alternatively or in addition, each sensor may be connected to, or be part of, a distinct BI device. Similarly, two or more actuators may be used for generating or affecting the same (or substantially the same) phenomenon. The two (or more) actuators may be part of, associated with, or connected to the same BI device 300. Alternatively or in addition, each actuator may be connected to, or be part of, a distinct BI device.
In one example, multiple sensors are arranged as a sensor array, where a set of several sensors, typically identical or similar, is used to gather information that cannot be gathered from a single sensor, or to improve the measurement or sensing relative to a single sensor. A sensor array commonly improves the sensitivity, accuracy, resolution, and other parameters of the sensed phenomenon, and may be arranged as a linear sensor array. The sensor array may be directional, and better measure the parameters of the signal impinging on the array. Parameters that may be identified include the number, magnitudes, frequencies, Direction-Of-Arrival (DOA), distances, and speeds of the signals. Estimation of the DOA may be improved in far-field signal applications, and may be based on a spectral-based (non-parametric) approach that maximizes the power of the beamforming output for a given input signal (such as a Bartlett beamformer, a Capon beamformer, or a MUSIC beamformer), or may be based on parametric approaches that minimize quadratic penalty functions. The processing of the entire sensor array outputs, such as to obtain a single measurement or a single parameter, may be performed by a dedicated processor, which may be part of the sensor array assembly, may be performed in the processor of the field unit, may be performed by the processor in the router, may be performed as part of the controller functionality (e.g., in the control server), or any combination thereof. Further, a sensor array may be used to sense a phenomenon pattern in a surface or in space, as well as the phenomenon motion or distribution in a location.
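As a non-limiting illustration of combining a sensor array's outputs into a single measurement, averaging reduces uncorrelated noise, while a median is more robust when one element of the array is faulty; the readings below are illustrative:

```python
from statistics import mean, median

# Sketch: fusing redundant sensor-array readings into one measurement.
# Averaging improves accuracy under uncorrelated noise; the median
# tolerates a single faulty element.

readings = [20.1, 19.9, 20.0, 20.2, 35.0]   # last sensor is faulty

print(round(mean(readings), 2))  # 23.04 -- skewed by the outlier
print(median(readings))          # 20.1  -- robust to a single outlier
```

In the described system this fusion step could run in the array's dedicated processor, the field unit, the router, or the control server, per the text above.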
Reference is made to FIG. 4, which is a schematic block diagram illustration of a distributed BI system 400 which may be implemented, for example, in a store or a supermarket, comprising various devices having BI capabilities, and where each of the devices may correspond to BI device 300. The system 400 may include devices having BI capabilities such as PoS terminal(s) 401, inventory module(s) 402, digital weight scale(s) 403, a workforce allocator module 404, an announcement sub-system 405 (e.g., a VoIP loudspeaker), a summoning unit 406 (e.g., able to send SMS messages or paging messages), microphone(s) 407, camera(s) 408, an odor detector 409, and/or other suitable BI devices which may be able to communicate among themselves using a communication protocol 420. In some embodiments, the system 400 may optionally include a BI server 491 and/or a central database 492 (e.g., optionally implemented as a NADB).
The devices shown in FIG. 4 may operate to sense data, locally process data, locally generate BI insights and BI commands, exchange BI commands and acknowledgement signals, query with other BI device(s) for data or BI insights, activate actuators, or modify their operation based on the BI insights or commands. The system 400 may be distributed across multiple scenes or sites, each scene or site corresponding to an area or a region or a location, which may be physically connected or detached or remote from each other. Once a particular event or action occurs, or once a particular condition or status is captured or sensed, system 400 may further investigate and analyze the data, may generate BI insights, and may act on such BI insight at the same scene or site and/or in other scenes or sites. In a demonstrative implementation, a scene or a site may be for example, a shop, a store, a branch, a shopping mall, a shopping center, a town or city, a chain of multiple stores or branches, a region within a store, or the like. The definition of a scene or site may be modified in order to accommodate particular system requirements and in order to match the type of actions or events which may be monitored, investigated, and acted upon.
Reference is made to FIG. 5, which is a schematic block diagram illustration of a distributed BI system 500 which may be implemented, for example, in a chain of multiple stores. As illustrated in FIG. 5, a chain of stores may include three stores 501-503, and each one of the stores (501, 502, 503) may include a distributed BI sub-system (511, 512, 513) which may be based on, or similar to, the BI system 400 of FIG. 4. For example, BI sub-system 511 may include multiple BI devices 521-522; the BI sub-system 512 may include multiple BI devices 531-533; and the BI sub-system 513 may include multiple BI devices 541-543. Optionally, each store (501, 502, 503) may include a BI server (561, 562, 563) and/or a central database (571, 572, 573) which may optionally be implemented as a NADB.
A BI device located in the store 501, may be able to exchange BI information, insights and/or commands with a BI device located in the store 502 or in the store 503, and vice versa, over one or more WANs 505 and using a suitable communication protocol. For example, the BI device 521 in store 501 may be an inventory module which may determine low inventory of a particular product. The BI device 521 may firstly check whether a local storage warehouse in the store 501 has additional units of that product; and if necessary, the BI device 521 may query an inventory module in the store 502 regarding the availability of that product in the store 502, and may request a shipment of that product from the store 502 to the store 501. Accordingly, BI queries, BI insights, BI requests, BI responses, BI commands and/or BI acknowledgment messages may be communicated across and among stores.
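As a non-limiting illustration of such a cross-store exchange, a BI query and its response can be modeled as JSON messages; the field names, SKU, and store identifiers below are hypothetical, not defined by the described protocol:

```python
import json

# Sketch of a cross-store BI inventory query as a JSON message exchange
# (field names and the in-process "network" are illustrative assumptions).

def make_query(product_id, from_store, to_store):
    return json.dumps({
        "type": "BI_QUERY",
        "product_id": product_id,
        "from": from_store,
        "to": to_store,
    })

def handle_query(message, local_inventory):
    """The receiving store answers with its availability for the product."""
    query = json.loads(message)
    units = local_inventory.get(query["product_id"], 0)
    return json.dumps({
        "type": "BI_RESPONSE",
        "product_id": query["product_id"],
        "available_units": units,
        "from": query["to"],
        "to": query["from"],
    })

store_502_inventory = {"SKU-1234": 18}
response = handle_query(make_query("SKU-1234", "store-501", "store-502"),
                        store_502_inventory)
print(json.loads(response)["available_units"])  # 18
```

A real deployment would carry such messages over the WANs 505 using the system's communication protocol; here the exchange is in-process only to keep the sketch self-contained.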
Only a master is involved in decision making and action commanding, as well as in configuring slave devices. In contrast, a slave device may only send information to a master device, but cannot configure it. As exemplified by store 501 in FIG. 5, a store 501 may include one BI device acting as master, several BI devices acting as slaves, and a BI server. Such an arrangement may as well be a part of a store, such as a department, a store aisle, or a group of cash registers. Further, the arrangement may involve a group of stores having the same manager. Alternatively or in addition, a store may use several BI master devices, each managing a different area or different applications, or managing multiple areas with the same application. Alternatively or in addition, a store may use several BI devices acting as slaves and a single BI device acting as master, but without any server. Similarly, a store may use several BI devices acting as slaves, without any master or BI server devices. In another example, a store may include a master device, but no slave devices.
A master BI device may configure a slave BI device. In addition, a slave BI device may be configured manually by a user. If such manual configuration is needed, the user would connect to the slave device and configure it using the UI explained, for example, with reference to FIG. 7A. Alternatively or in addition, a BI server may configure all the devices in the system, commonly used for maintenance tasks such as firmware upgrades. Reference is made to FIG. 6, which is a flow chart of a method of configuring a distributed BI system. The method may be implemented, for example, by one or more of the BI systems or BI devices described herein. The method may include, for example, a "Connect BI Devices to Network" step 610, involving connecting multiple BI devices to a common communication network. The communication network may include one or more LANs, one or more WANs, wireless links, wired links, routers, switches, and/or other network elements which may be able to communicate by using a bi-directional protocol, TCP/IP, and/or other suitable communication protocol(s).
Further, the method may include, for example, a "Define Master and Slave" step 620, defining one BI device to operate as a master device, and defining all other BI devices to operate as slave devices. A "Locate Slave Devices" step 630 involves locating, by the master BI device, all the slave BI devices on the network. This may include, for example, searching for slave BI devices, and registering their identities (e.g., a unique IP address or other Globally Unique Identifier (GUID) of each slave BI device) by the master BI device.
In a "Define Roles" step 640, the roles of the BI devices in the network are defined. For example, a BI device may be defined to operate as a data collector (e.g., a security camera) using a sensor, as a decision maker (e.g., having a BI decision-making engine), as a data provider (e.g., an inventory module), as a BI server (e.g., to accumulate and store BI insights for further processing), or the like. This step of defining roles may be performed, for example, by the master BI device in the network, or by other suitable devices or components. In a "Send Configuration File" step 650, includes sending a configuration file by the master BI device to each slave BI device, where the configuration file defining the role of that slave BI device as determined by the BI device.
Reference is made to FIG. 7A, which is a schematic illustration of a User Interface (UI) 700 for configuring a BI device in a distributed BI system. In a demonstrative example, the BI device being configured may be integrated with a security camera (as a sensor) in a store or supermarket, and may be connected to an Internet Protocol (IP) network. Accordingly, the BI device may be configurable through a web browser, similar to a way in which a network printer or a wireless router may be configurable through a web browser. However, UI 700 may be implemented through other means, not necessarily through a web-page or web-browser, for example, by using a dedicated software application or a setup application. As demonstrated, UI 700 may include, for example, a title field 701 showing the brand, name and model of the BI device and optionally other identifying details (e.g., device manufacturer, or a user-modifiable device nickname); and a configuration menu 710 allowing a user to set or modify parameters in one or more categories, for example, a video menu 711, an applications menu 712, or the like.
The video menu 711 may allow modification of video analytics parameters, as well as parameters defining the conversion of video analytics output to a common format usable by other BI devices on the network and/or storable in a local database. The applications menu 712 may optionally be implemented as firmware, and may allow the introduction of new features, a firmware upgrade, uploading or installation of a new version of firmware, or the like. The applications menu 712 may allow a user to access one or more BI applications which may be relevant to that BI device, for example, an "Image Insight" application 720.
The "Image Insight" application 720 may include, for example, a setup sub-menu 721 allowing the user to establish a network of BI devices (e.g., in accordance with the method of FIG. 6), and a configuration sub-menu 722 to configure, for example, how a BI device (e.g., having a camera) operates in response to particular conditions or events. A device list 727 may include a list of all the devices found in the network search before establishing the communication among them, and may allow a user to select a device therefrom, and a role selector 723 may allow selection of a role for each such selected device (e.g., master or slave). Optionally, selection of a "master" role may allow a user to configure the device, whereas selection of a "slave" role may render some parameters non-configurable by the user as they may be determined by the master device.
The "Image Insight" application 720 may further include a network search button 724, which may trigger a search to identify all devices in the network. The search results may be presented in a device table 725, defining all devices in the network and their roles. Optionally, an IP address of a device that is not associated with a role, may not be shown. However, not all BI devices may require one or more rules to be configured in order for them to operate. A "connection definition" interface 726 may allow the manual addition of devices, such as a device that was not identified in the automated device search triggered by network search button 724. For example, an IP address and a role may be manually added by a user. Optionally, the user interface may allow a user to delete a device; to edit parameters or a device; to save the configuration data to the device being configured; and/or to indicate that the setup process is completed and that the user desires to exit the application.
Reference is made to FIG. 7B, which is a schematic illustration of a User Interface (UI) 750 for configuring a master BI device in a distributed BI system. In a demonstrative example, the BI device being configured may be integrated with a security camera in a store or supermarket, and may be connected to an Internet Protocol (IP) network. Components of UI 750 may be generally similar to respective components shown in FIG. 7A, and may be arranged, for example, as a menu tree 760 which may correspond to similar menu items and sub-menu items of FIG. 7A. Additionally, the UI 750 may include a device list 751, groups 753, and a group selection interface 752.
The "Device List" 751 may show a list of BI devices, by names or by IP addresses. The "Groups" selection interface 752 may allow manual creation of association between two or more devices, and the associated devices may then be shown as a "group" in groups 753. For example, security camera number 4 (located at IP address 192.168.17.19) may be associated with PoS terminal number 6 (located at IP address 192.168.17.35), and both of these devices may be shown as a group. The UI 750 may allow the user to create groups, delete groups, and edit groups. Optionally, a BI device may be associated with one or more other BI devices, or, may be included in one or more groups; for example, a camera may be associated with two PoS terminals as the camera may cover a waiting area which may be associated with two PoS terminals.
The UI 750 may further include "Configuration Interface" elements 755 (for example, text fields, drop-down menus, buttons, or the like) allowing the user to configure various parameters associated with the operation of the device. Such parameters may include, for example: the minimum period of time required in the analyzed video in order to identify a possible queue of customers; the period of time allocated for a PoS terminal to respond to a status query before the status query is re-sent to it; the period of time allocated for a PoS terminal to acknowledge that it has been opened and is now operational; the time which may elapse between a queue identification and an announcement; an "idle time" parameter, indicating an allowed period of time in which a queue may be resolved due to the opening of a new PoS terminal, prior to re-checking whether the queue was indeed resolved; and/or other modifiable or configurable parameter(s) which may be utilized by a particular implementation of the BI system.
Reference is made to FIG. 8, which is a flow chart of analyzing and handling a line of customers at a PoS terminal. The method may be used, for example, in order to identify a long line at a particular PoS terminal, and to immediately trigger in real time a process to open an additional PoS terminal. The method may operate to determine whether a video or image indeed shows a customer line rather than, for example, a coincidental gathering of several customers talking to each other in the queue area of the PoS terminal. For example, video analytics of image(s) captured by an in-store camera may indicate a possible line of customers near a PoS terminal. A query may be sent to the PoS terminal, in order to ascertain that indeed a long line of customers waits; and upon confirmation, an announcement may be made to summon an employee to open an additional PoS terminal.
The method may include, for example, a "Search in Video" step 800 in which searching in real time in one or more images or video in order to recognize a queue of customers near a PoS terminal, followed by "Identify Possible Queue" step 801 involving identifying of a possible queue. The area-of-interest for searching and recognizing may be pre-defined by a user (e.g., store manager) during a system configuration session. For example, a video analytics algorithm may observe or analyze video captured, covering an area-of-interest; at first, an image of the area-of-interest in an "empty" state may be captured and saved, and subsequent images may be captured and may be compared to the baseline image or may be compared among them to identify color changes in pixel(s) at the area-of-interest, indicating the presence of a person or an object at the area-of-interest; and once the amount of change (e.g., number of pixels changed, or, time persistency of pixels change) reaches a threshold value, a new line or record may be added to the database.
The method may include, for example, in "Check Locally Queue Persistence" step 802 the locally checking (e.g., at the camera) whether the line persists for at least a pre-defined time period (e.g., sixty seconds). A line of customers which does not persist more than sixty seconds may be discarded and not further handled; whereas a line of customer that persists for at least sixty seconds may be further handled (in step 803 and onward).
The method may optionally include, for example, an "Obtain Information from Other BI Device" step 803, followed by a "Determine Queue Validity" step 804, which involve obtaining information from another BI device on the network to determine queue validity. These steps may include, for example, sending to the PoS terminal a query about the PoS terminal's current status (e.g., active or inactive); and receiving from the PoS terminal a status indication. For example, a PoS terminal status of "inactive" may mean that the PoS terminal is not operational, and therefore a group of customers observed near that PoS terminal is merely a coincidental gathering or passing of a batch of persons, rather than customers waiting in line. Optionally, obtaining the information may include sending the query to multiple BI devices (e.g., to several PoS terminals), and optionally re-sending the query if a rapid response is not received (e.g., within five seconds of sending).
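The query-with-retry logic of steps 803-804 can be sketched as follows. The status values and the retry count are taken from the example above (re-send on no response within five seconds); the transport is abstracted into a callable, which is an assumption of this sketch.

```python
# Hedged sketch of steps 803-804: validate an observed queue by querying the
# PoS terminal's status, re-sending the query if no response is received.
# Status strings and the callable-based transport are illustrative assumptions.
def validate_queue(pos_status_fn, retries=2):
    """pos_status_fn simulates one status query round-trip; it returns
    'active', 'inactive', or None (no response within the timeout)."""
    for _ in range(retries + 1):
        status = pos_status_fn()
        if status is not None:
            # An 'inactive' terminal means the gathering near it is merely
            # coincidental, not a queue of waiting customers.
            return status == "active"
    return False  # terminal never answered: treat the queue as unvalidated

responses = iter([None, "active"])   # first query times out, the re-sent one succeeds
valid = validate_queue(lambda: next(responses))
```

In the distributed setting, the same routine could fan the query out to several PoS terminals and combine their answers.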
It would be appreciated that the network may be pre-configured, in order to correlate between a particular camera and a particular PoS terminal which is covered by that camera. For example, camera number 4 may be associated in advance with PoS terminal number 3, such that a gathering of customers identified in video or images captured by camera number 4, may be interpreted as a potential queue at PoS terminal number 3.
The method may further include, for example, "check Suitability of Announcement" step
805, which includes the checking whether or not an immediate announcement is suitable. For example, in the case where the current PoS terminal or another PoS terminal triggered an announcement for help merely a few seconds ago (or, for example, less than 30 seconds ago), then an immediate announcement may not be allowed, or may be postponed to a subsequent time. This step may optionally include, for example, checking with one or more other BI devices whether an announcement process is currently in progress, or whether an announcement process has been completed in the last 30 seconds. Postponement or cancellation of a planned announcement may be logged in a log file, or may be reported to a BI server.
The method may further include, for example, "Send Announcement Command" step
806, including the sending a command to initiate an announcement, e.g., to a VoIP loudspeaker or PBX, or to other suitable summoning mechanism. Upon receiving an acknowledgment that the announcement was performed in "Receive Ack" step 807, the method may include reporting the event to the BI server in "Report the Event to BI Server" step 808, and querying with one or more PoS terminals on the network to verify that an additional PoS terminal was indeed opened in "Query PoS Terminals to Verify" step 809. It would be appreciated that if an announcement acknowledgement is not received within a predefined time period (e.g., thirty seconds), then the announcement command may be re-sent. Optionally, the method may include, for example, after checking that a new PoS terminal has been opened as commanded, re-analyzing new video or images to verify whether the queue of customers was eliminated or reduced in "verify Queue Reduction" step 810, and optionally reporting the results to the BI server for further subsequent processing. It would be appreciated that some operations (e.g., announcing, checking if a queue exists, checking if a PoS terminal was opened) may be repeated after a pre-defined waiting period. The repetition may be done for a pre-defined number of iterations. For example, a customer queue may be a temporary problem, and two or three announcements or attempts to open a PoS terminal may suffice to resolve the problem.
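The suppression, send, and acknowledgment-retry behavior of steps 805-807 can be sketched compactly. The 30-second suppression window and the re-send-on-missing-ack behavior follow the examples above; the function name, the attempt limit, and the callable standing in for the loudspeaker transport are assumptions.

```python
# Illustrative sketch of steps 805-807: suppress an announcement made within
# the last 30 seconds; otherwise send the command and await an acknowledgment,
# re-sending on timeout. Timings follow the example values in the text.
def try_announce(now, last_announcement, send_fn, min_gap=30, max_attempts=3):
    """Return True once the summoning mechanism acknowledges the announcement;
    False if suppressed (step 805) or never acknowledged."""
    if last_announcement is not None and now - last_announcement < min_gap:
        return False                      # step 805: too soon, postpone
    for _ in range(max_attempts):         # steps 806-807: send, await ack
        if send_fn():                     # send_fn models one send + ack wait
            return True
    return False

acks = iter([False, True])                # first command lost, the re-send is acked
ok = try_announce(now=100, last_announcement=50, send_fn=lambda: next(acks))
```

A postponed or failed announcement would, per the text, be logged or reported to the BI server rather than silently dropped.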
The BI system may generate a log of events which may be stored in a database and may be updated as events occur. For example, each row or record in the database may correspond to an event, and may include multiple fields or columns which may be populated with data. In a demonstrative example, the following fields or columns may be used for each row (event): an event ID number; a date-stamp and a time-stamp of the event; the day-of-week (e.g., from 1 to 7) in which the event occurred; the branch number or store number (or branch name), corresponding to the branch or store in which the event occurred; identification of the PoS terminal(s) related to the event; an indication whether an announcement was performed or not; an indication whether a PoS terminal was opened due to the event or not; a cause (or a code indicating a cause) for the PoS terminal not being opened due to the event; a copy of the command sent by the camera to the loudspeaker (e.g., a copy of the string command, the string indicating a branch number, a PoS terminal number, and an announcement type); the acknowledgement received from the VoIP equipment; a result of a verification test checking whether the PoS terminal is open or closed; an ID of a video analytics portion or image which depicts the event (e.g., shows the queue of customers); identification of the store region, or PoS terminal, that triggered the event; a field indicating whether the triggering PoS terminal had activity prior to the event, thereby validating the queue of customers; and/or other suitable fields or records.
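One possible concrete layout for this event log is sketched below using SQLite. The column names paraphrase the fields listed above; they and the sample values are assumptions of this sketch, not a schema mandated by the text.

```python
# A possible SQLite layout for the event log described above. Column names
# paraphrase the listed fields; they are illustrative, not mandated.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bi_event (
        event_id         INTEGER PRIMARY KEY,
        event_date       TEXT,
        event_time       TEXT,
        day_of_week      INTEGER,  -- 1 to 7
        branch_number    INTEGER,
        pos_terminal     INTEGER,  -- PoS terminal(s) related to the event
        announced        INTEGER,  -- 0/1: was an announcement performed
        pos_opened       INTEGER,  -- 0/1: was a PoS terminal opened
        not_opened_cause TEXT,     -- cause/code if the terminal was not opened
        command_sent     TEXT,     -- copy of the string sent to the loudspeaker
        ack_received     TEXT,     -- acknowledgement from the VoIP equipment
        verify_result    TEXT,     -- verification: terminal open or closed
        video_clip_id    TEXT,     -- video/image depicting the event
        trigger_region   TEXT,     -- store region or terminal that triggered it
        pos_had_activity INTEGER   -- 0/1: prior activity validating the queue
    )""")
conn.execute(
    "INSERT INTO bi_event (event_date, event_time, day_of_week, branch_number,"
    " pos_terminal, announced, pos_opened) VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("2013-12-29", "10:42:00", 1, 7, 3, 1, 1))
conn.commit()
```

Each announcement event then becomes one row, appended as events occur.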
A table or database may be used in order to associate between video analytics, PoS terminals, logic components and a hierarchy of BI device(s). In a demonstrative implementation, each row may represent one association among devices, and may include the following fields: an ID number of the association; an event ID generated by the video analytics module and pointing to a particular event; an identifier of the PoS terminal associated with the identified queue; a PoS terminal name, imported from a PoS terminal database, showing the name of the PoS terminal corresponding to the PoS identifier; a branch number; and a branch name. Other suitable fields may be used.
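The association table can be sketched in the same SQLite style; the field names track the list above, while the table name and sample values are assumptions of this sketch.

```python
# Sketch of the association table linking video-analytics events to PoS
# terminals and branches. Field names follow the list above; values are assumed.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE device_association (
        assoc_id    INTEGER PRIMARY KEY,  -- ID number of the association
        event_id    INTEGER,              -- generated by the video analytics module
        pos_id      INTEGER,              -- PoS terminal with the identified queue
        pos_name    TEXT,                 -- imported from the PoS terminal database
        branch_no   INTEGER,
        branch_name TEXT
    )""")
db.execute("INSERT INTO device_association VALUES (1, 1001, 3, 'Register 3', 7, 'Main St')")
row = db.execute(
    "SELECT pos_name FROM device_association WHERE event_id = 1001").fetchone()
```

A join between this table and the event log then resolves any event to its terminal and branch.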
The system may include, for example, utilization of an odor detector, optionally in conjunction with a loyalty cards database, in order to suggest a perfume to a customer in a store. For example, an odor detector may sense a smell that a customer emits as he or she walks in a store aisle, and may identify the perfume that the customer is wearing based on a pre-defined lookup table of perfume odors. In a first demonstrative embodiment, this may suffice for the BI system in order to discreetly summon a store employee to suggest to the customer to purchase that particular perfume, or another perfume which consumer data indicates to be also preferred by customers who shopped for the perfume that the customer is wearing. In other demonstrative embodiments, a loyalty card database, or a past transaction database, may be queried in order to find additional information which may assist in suggesting the perfume to the customer, for example, a typical gender of the purchaser of the identified perfume, a typical time-of-day (e.g., morning or evening) of purchase of such perfume, a preferred quantity or size of perfume bottle for such perfume, or the like. Optionally, a salesperson may suggest to the customer to purchase the recommended perfume, and may further report the sale result back to the BI system for further analysis of whether the recommendation was successful.
The system may further include a sub-system for reporting in real time on missing products on a store shelf. The system may include, for example, weight scale(s), a communication interface, a warehouse maintenance system, a screen for user input/output, and an optional BI server (or a BI device which may have functionalities of a BI server). The system may identify empty shelves, may report the empty shelf to the warehouse, and may initiate collection of products and their placement to refill the shelf, while reporting the process to the BI server and obtaining an administration index for every shift. In a demonstrative embodiment, a scale may be used to identify the shelf occupancy; upon detecting a shelf weight smaller than a threshold value, the scale may update the BI server, which may send a request to the warehouse maintenance system for shelf refilling. If the warehouse has the product in stock, it may report to the BI server that the procedure has started, and may open a task on the screen for the stock keeper to collect the products. The stock keeper may enter his/her name for collecting the products, and the screen may report to the warehouse maintenance system that the product collection has started. The stock keeper may put the products on the shelf, and the shelf may report to the warehouse maintenance system that the procedure is complete. Various parameters, for example, timeline, worker name, the number of products, and the location of the shelf, may be reported to the BI server for further analysis. The above-mentioned example was demonstrated in the context of a central BI server; however, a similar implementation may utilize a distributed BI system instead of the central BI server.
For example, a shelf in the store may be associated with a digital weight scale, which may identify a shelf weight smaller than a threshold value, and may directly interact with an inventory sub-system, a warehouse management sub-system, a procurement unit, or other suitable device responsible for replenishing the inventory of that shelf, without the need for a central BI server, thereby allowing one device (e.g., the weight scale of the shelf) to communicate directly with another device (e.g., the inventory module) in order to inquire whether additional items are available and/or to command the inventory module to replenish the required inventory.
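The shelf-refill trigger described above can be sketched as a single decision function. The threshold comparison follows the text; the function name, the task dictionary, and the inventory lookup are assumptions that stand in for the warehouse maintenance or inventory sub-system.

```python
# Minimal sketch of the shelf-refill trigger: a scale reading below its
# threshold raises a refill task, addressed either to a central BI server or,
# in the distributed variant, directly to the inventory module.
# Names and the task format are illustrative assumptions.
def check_shelf(shelf_id, weight_grams, threshold_grams, inventory):
    """Return a refill task if the shelf weight is below the threshold,
    None if the shelf is sufficiently stocked."""
    if weight_grams >= threshold_grams:
        return None                                  # shelf still stocked
    if inventory.get(shelf_id, 0) <= 0:
        return {"shelf": shelf_id, "status": "out_of_stock"}
    return {"shelf": shelf_id, "status": "refill_task_opened"}

task = check_shelf("aisle3-shelf2", weight_grams=120, threshold_grams=500,
                   inventory={"aisle3-shelf2": 40})
```

The returned task would then be shown on the stock keeper's screen, and its completion reported back for the per-shift administration index.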
Multiple distinct or independent communication routes provide higher reliability, for example by avoiding a single point of failure (SPOF): in the case of a failure in one of the communication routes, the other routes may still provide the required connection and the system functionality is preserved, thus rendering the system fully functional using a backup or failsafe scheme. The operation of the redundant communication routes may be based on standby redundancy (a.k.a. backup redundancy), where one of the data paths (or the associated hardware) is considered the primary unit, and the other data path (or the associated hardware) is considered the secondary unit, serving as backup to the primary unit. The secondary unit typically does not monitor the system, but is there just as a spare. The standby unit is not usually kept in sync with the primary unit, so it must reconcile its input and output signals upon taking over the communication. This approach lends itself to a "bump" on transfer, meaning the secondary operation may not be in sync with the last system state of the primary unit. Such a mechanism may require a watchdog, which monitors the system to decide when a switchover condition is met, and commands the system to switch control to the standby unit. Standby redundancy configurations commonly employ two basic types, namely 'Cold Standby' and 'Hot Standby'.
In cold standby, the secondary unit is either powered off or otherwise non-active in the system operation, thus preserving the reliability of the unit. The drawback of this design is that the downtime is greater than in hot standby, because the standby unit needs to be powered up or activated, and brought online into a known state.
In hot standby, the secondary unit is powered up or otherwise kept operational, and can optionally monitor the system. The secondary unit may serve as the watchdog and/or voter to decide when to switch over, thus eliminating the need for additional hardware for this job. This design does not preserve the reliability of the standby unit as well as the cold standby design does. However, it shortens the downtime, which in turn increases the availability of the system. Some flavors of Hot Standby are similar to Dual Modular Redundancy (DMR) or Parallel Redundancy. The main difference between Hot Standby and DMR is how tightly the primary and the secondary units are synchronized: DMR completely synchronizes the primary and secondary units.
While a redundancy of two was exemplified above, where two data paths and two hardware devices were used, a redundancy involving three or more data paths or systems may equally be used. The term N Modular Redundancy (a.k.a. Parallel Redundancy) refers to the approach of having multiple units or data paths running in parallel. All units are highly synchronized and receive the same input information at the same time. Their output values are then compared, and a voter decides which output values should be used. This model easily provides 'bumpless' switchovers. This model typically has faster switchover times than Hot Standby models, thus the system availability is very high; but because all the units are powered up and actively engaged with the system operation, the system is at greater risk of encountering a common mode failure across all the units.
Deciding which unit is correct can be challenging if only two units are used. If more than two units are used, the problem is simpler; usually the majority wins, or the two that agree win. In N Modular Redundancy, there are three main topologies: Dual Modular Redundancy, Triple Modular Redundancy, and Quadruple Modular Redundancy. Quadruple Modular Redundancy (QMR) is fundamentally similar to TMR, but uses four units instead of three to increase the reliability. The obvious drawback is the 4X increase in system cost.
Dual Modular Redundancy (DMR) uses two functionally equivalent units, thus either can control or support the system operation. The most challenging aspect of DMR is determining when to switch over to the secondary unit. Because both units are monitoring the application, a mechanism is needed to decide what to do if they disagree: either a tiebreaker vote may be used, or the secondary unit may simply be designated as the default winner, assuming it is more trustworthy than the primary unit. Triple Modular Redundancy (TMR) uses three functionally equivalent units to provide a redundant backup. This approach is very common in aerospace applications where the cost of failure is extremely high. TMR is more reliable than DMR due to two main aspects. The most obvious reason is that two "standby" units are used instead of just one. The other reason is that a technique called diversity platforms, or diversity programming, may be applied; in this technique, different software or hardware platforms are used on the redundant systems to prevent common mode failure. The voter decides which unit will actively control the application. With TMR, the decision of which system to trust is made democratically and the majority rules, thus the switchover decision is straightforward and fast. If three different answers are obtained, the voter must decide which system to trust or shut down the entire system. Another redundancy topology is 1:N Redundancy, where a single backup is used for multiple systems, and this backup is able to function in the place of any single one of the active systems. This technique offers redundancy at a much lower cost than the other models, by using one standby unit for several primary units. This approach only works well when the primary units all have very similar functions, thus allowing the standby to back up any of the primary units if one of them fails.
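The majority vote at the heart of TMR can be sketched in a few lines. The function name and the fallback behavior (signaling failure when all three units disagree) are assumptions of this sketch; a real system would substitute its shut-down or trust policy at that point.

```python
# Compact sketch of the majority voter used in Triple Modular Redundancy:
# three synchronized units produce outputs, and the value agreed on by a
# majority wins. With three different answers, no majority exists.
from collections import Counter

def tmr_vote(outputs):
    """outputs: the three unit outputs. Returns (value, ok): the voted value
    and whether a majority was found."""
    value, count = Counter(outputs).most_common(1)[0]
    if count >= 2:                  # the majority rules
        return value, True
    return None, False              # three different answers: no majority

voted, ok = tmr_vote([42, 42, 7])   # one faulty unit is outvoted
```

QMR extends the same vote to four units; 1:N redundancy instead shares one spare across several primaries, so no vote is needed.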
The system may further utilize a loyalty cards database, a microphone, and an optional BI server (or a BI device which may have functionalities of a BI server). The system may identify words spoken by a customer as he or she is walking about the store, in order to offer the customer the most relevant product with the best chance to sell. The system may detect the voice of the customer walking by, and the voice may be compared by a processor or comparator unit to key words in the database. The system may determine, using speech analysis by a processor or using a speech-to-text converter, the brand and model of the product that the customer is talking about, and a query may be sent by the processor to the loyalty card database. The processor may firstly search for the customers who bought the current product in the past, and then the processor may search for additional products that the same customers bought in the past. Optionally, time analysis may be performed by the processor or by a time-analysis module, to check if such products were purchased at the current time-of-day (e.g., morning or evening, AM or PM). The BI system may then determine (using its processor) and announce (through an announcement sub-system or loudspeaker) a sale or promotion of the product having the best chance to be sold to the walking customer. The PoS terminal database may then be checked for the announced product, to verify the sale upgrade. The system may utilize multiple databases with a suitable communication protocol between them, in order to support real-time decisions and operation, while reporting to a main BI server for further analysis.
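The keyword-to-recommendation chain described above can be sketched as two lookups: match a recognized word against the keyword table, then consult purchase history for the best companion product. All table contents, the ranking convention, and the function name are illustrative assumptions standing in for the loyalty card database queries.

```python
# Hedged sketch of the spoken-keyword recommendation chain described above.
# The keyword table and purchase-history tables are illustrative stand-ins
# for the loyalty card database; the ranking convention is assumed.
def recommend(spoken_words, keyword_to_product, purchase_history):
    """purchase_history: product -> co-purchased products, most popular first.
    Returns the product with the best chance to sell, or None."""
    for word in spoken_words:
        product = keyword_to_product.get(word.lower())
        if product is None:
            continue                      # word is not a known product keyword
        co_purchased = purchase_history.get(product, [])
        if co_purchased:
            return co_purchased[0]        # best-selling companion product
    return None

best = recommend(["that", "espresso", "machine"],
                 {"espresso": "espresso-machine-x1"},
                 {"espresso-machine-x1": ["coffee-beans-1kg", "descaler"]})
```

The optional time-of-day analysis would simply filter or re-rank `co_purchased` before the first element is taken.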
The system may be used for tracking a customer path in a store, as the customer walks among fields-of-view of multiple store cameras. The system may include, for example, a camera with a BI engine, a wireless detector with a BI engine, and optionally a BI server (or a BI device acting as a master device). The BI server, or the BI device operating as a master device, may identify a person across multiple cameras. Wireless detectors across the store may recognize a unique identifier of the customer (e.g., a MAC address of a cellular device held by the customer), may assign a unique ID to the customer, and may thus follow the customer as he/she walks in the store or shopping center. The tracking may be reported by the wireless detectors to multiple cameras, which in turn may be able to correlate the tracking data with a visual path of the customer within the store. The data may be further sent to the BI server or to the BI master device, for further analysis, e.g., analyzing a preferred or common path of customers in general, of customers of a particular gender or age-group, or the like. The system may be implemented by utilizing suitable modules or components, for example, a module that correlates between video data (or image data) and wireless communication signals or identifiers; a module that tracks a person across multiple cameras; a path calculator that determines a path or route of a customer; or the like.
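The path calculator mentioned above can be sketched as follows: wireless detectors report timestamped sightings of a device identifier, and the module orders them into a per-customer path. The sighting tuple format and the zone names are assumptions of this sketch; the detector report format is not specified in the text.

```python
# Sketch of the path calculator: wireless detectors report (MAC, timestamp,
# zone) sightings, and the module orders them into a path per customer.
# The report format and zone names are illustrative assumptions.
def build_paths(sightings):
    """sightings: list of (mac_address, timestamp, zone) tuples. Returns
    mac_address -> ordered list of zones visited (adjacent repeats collapsed)."""
    paths = {}
    for mac, ts, zone in sorted(sightings, key=lambda s: s[1]):
        path = paths.setdefault(mac, [])
        if not path or path[-1] != zone:
            path.append(zone)
    return paths

paths = build_paths([
    ("aa:bb:cc:dd:ee:ff", 12, "dairy"),
    ("aa:bb:cc:dd:ee:ff", 5, "entrance"),
    ("aa:bb:cc:dd:ee:ff", 20, "checkout"),
])
```

The correlation module would then align each zone transition with the camera whose field-of-view covers that zone, producing the combined visual path.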
The system may be implemented using a Network Attached Database (NADB), having a communication port (e.g., a network adapter supporting TCP/IP), a processor, an operating system, one or more LEDs indicating power and communication status, and a database (e.g., sales data, PoS terminals data, loyalty card data, or the like). The NADB may provide information in real time for BI decisions made by other BI devices on the network. Optionally, the NADB may be implemented differently from a computer server, for example, by excluding a computer monitor, a keyboard, and a mouse, and by providing a stand-alone database device which may be controlled remotely using a Web interface.
It would be appreciated that some portions of the discussion herein may relate, for demonstrative purposes, to video analysis as a step in a BI generation process, or to video analytics as demonstrative parameters which may be used in the BI generation process. However, other and/or additional sources of data may be included, and may utilize other and/or additional types of data or analytics, for example, voice data, cellular communications data, electric signals, odor information, club membership data or customer loyalty card data, credit card data (e.g., customer name, customer billing address or zip code), cash register data or point-of-sale (POS) data, stock or inventory data, planograms, textual and/or visual representations of a store's products, and/or any other type of data which may be measured and/or estimated and then utilized by a BI system.
The term "Point of Sale (PoS) Terminal", as used herein, may include any suitable type of cash register, payment collection terminal, a terminal able to process credit cards and/or debit cards, a terminal able to process cash payments, a terminal or machine operated by a cashier, a terminal able to record sale transactions, a terminal able to print out a receipt, a non-portable PoS terminal, a portable or mobile or handheld PoS terminal, or the like. It is noted that a PoS terminal may be equipped with a processor, a memory unit, and a wireless or wired communication module (e.g., transceiver) in order to function as a BI device.
The term "BI Server" as used herein may include, for example, a local BI server located in a store, a remote or external BI server located outside a store and/or in another branch and/or in a headquarters office, a standalone BI server, a set or batch of BI server, a BI device performing role(s) of a BI server, a set or multiple BI devices utilizing distributed architecture to perform one or more roles of a BI server, a BI device defined to operate as a BI server towards other BI devices, or the like.
The term "processor" is meant to include any integrated circuit or other electronic device (or collection of devices) capable of performing an operation on at least one instruction including, without limitation, Reduced Instruction Set Core (RISC) processors, CISC microprocessors, Microcontroller Units (MCUs), CISC-based Central Processing Units (CPUs), and Digital Signal Processors (DSPs). The hardware of such devices may be integrated onto a single substrate (e.g., silicon "die"), or distributed among two or more substrates. Furthermore, various functional aspects of the processor may be implemented solely as software or firmware associated with the processor.
The term "computer" is used generically herein to describe any number of computers, including, but not limited to, personal computers, embedded processing elements and systems, control logic, ASICs, chips, workstations, mainframes, etc. Any computer herein may consist of, or be part of, a handheld computer, including any portable computer which is small enough to be held and operated in one hand or to fit into a pocket. Such a device, also referred to as a mobile device, typically has a display screen with touch input and/or a miniature keyboard. Non-limiting examples of such devices include a Digital Still Camera (DSC), a Digital Video Camera (DVC or digital camcorder), a Personal Digital Assistant (PDA), and mobile phones and Smartphones. The mobile devices may combine video, audio and advanced communication capabilities, such as PAN and WLAN. A mobile phone (also known as a cellular phone, cell phone and hand phone) is a device which can make and receive telephone calls over a radio link while moving around a wide geographic area, by connecting to a cellular network provided by a mobile network operator. The calls are to and from the public telephone network, which includes other mobile and fixed-line phones across the world. The Smartphones may combine the functions of a Personal Digital Assistant (PDA), and may serve as portable media players and camera phones with high-resolution touch-screens, web browsers that can access, and properly display, standard web pages rather than just mobile-optimized sites, GPS navigation, Wi-Fi and mobile broadband access. In addition to telephony, the Smartphones may support a wide variety of other services such as text messaging, MMS, email, Internet access, short-range wireless communications (infrared, Bluetooth), business applications, gaming and photography.
Some embodiments may be used in conjunction with various devices and systems, for example, a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a cellular handset, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a non-mobile or non-portable device, a wireless communication station, a wireless communication device, a wireless Access Point (AP), a wired or wireless router, a wired or wireless modem, a wired or wireless network, a Local Area Network (LAN), a Wireless LAN (WLAN), a Metropolitan Area Network (MAN), a Wireless MAN (WMAN), a Wide Area Network (WAN), a Wireless WAN (WWAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), devices and/or networks operating substantially in accordance with existing IEEE 802.11, 802.11a, 802.11b, 802.11g, 802.11k, 802.11n, 802.11r, 802.16, 802.16d, 802.16e, 802.20, 802.21 standards and/or future versions and/or derivatives of the above standards, units and/or devices which are part of the above networks, one-way and/or two-way radio communication systems, cellular radio-telephone communication systems, a cellular telephone, a wireless telephone, a Personal Communication Systems (PCS) device, a PDA device which incorporates a wireless communication device, a mobile or portable Global Positioning System (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a Multiple Input Multiple Output (MIMO) transceiver or device, a Single Input Multiple Output (SIMO) transceiver or device, a Multiple Input Single Output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, Digital Video Broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device (e.g., BlackBerry, Palm Treo), a Wireless Application Protocol (WAP) device, or the like.
As used herein, the terms "network", "communication link" and "communications mechanism" are used generically to describe one or more networks, communications media or communications systems, including, but not limited to, the Internet, private or public telephone, cellular, wireless, satellite, cable, and data networks. Data networks include, but are not limited to, Metropolitan Area Networks (MANs), Wide Area Networks (WANs), Local Area Networks (LANs), Personal Area Networks (PANs), Wireless LANs (WLANs), the Internet, internets, NGN, intranets, Hybrid Fiber Coax (HFC) networks, satellite networks, and Telco networks. Communication media include, but are not limited to, a cable, an electrical connection, a bus, and internal communications mechanisms such as message passing, interprocess communications, and shared memory. Such networks or portions thereof may utilize any one or more different topologies (e.g., ring, bus, star, loop, etc.), transmission media (e.g., wired/RF cable, RF wireless, millimeter wave, optical, etc.) and/or communications or networking protocols (e.g., SONET, DOCSIS, IEEE Std. 802.3, ATM, X.25, Frame Relay, 3GPP, 3GPP2, WAP, SIP, UDP, FTP, RTP/RTCP, H.323, etc.). While exemplified herein with regard to secured communication between a pair of network endpoint devices (host-to-host), the described method can equally be used to protect the data flow between a pair of gateways or any other networking-associated devices (network-to-network), or between a network device (e.g., security gateway) and a host (network-to-host).
A sensor herein may include one or more sensors, each providing an electrical output signal (such as voltage or current), or changing a characteristic (such as resistance or impedance), in response to a measured or detected phenomenon. The sensors may be identical, similar or different from each other, and may measure or detect the same or different phenomena. Two or more sensors may be connected in series or in parallel. In the case of a changing-characteristic sensor or in the case of an active sensor, the unit may include an excitation or measuring circuit (such as a bridge) to generate the sensor electrical signal. The sensor output signal may be conditioned by a signal conditioning circuit. The signal conditioner may involve time, frequency, or magnitude related manipulations. The signal conditioner may be linear or non-linear, and may include an operational or an instrumentation amplifier, a multiplexer, a frequency converter, a frequency-to-voltage converter, a voltage-to-frequency converter, a current-to-voltage converter, a current loop converter, a charge converter, an attenuator, a sample-and-hold circuit, a peak-detector, a voltage or current limiter, a delay line or circuit, a level translator, a galvanic isolator, an impedance transformer, a linearization circuit, a calibrator, a passive or active (or adaptive) filter, an integrator, a deviator, an equalizer, a spectrum analyzer, a compressor or a de-compressor, a coder (or decoder), a modulator (or demodulator), a pattern recognizer, a smoother, a noise remover, an average or RMS circuit, or any combination thereof. In the case of an analog sensor, an analog-to-digital (A/D) converter may be used to convert the conditioned sensor output signal to digital sensor data. The unit may include a computer for controlling and managing the unit operation, processing the digital sensor data, and handling the unit communication.
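The sensor chain described above (raw analog signal, signal conditioning, then A/D conversion into digital sensor data) may be illustrated by the following minimal sketch. The sketch is not part of the claimed invention; the function names, the gain value, and the 12-bit resolution are all hypothetical choices for demonstration only.

```python
# Illustrative sketch of the sensor signal chain: a raw analog sensor
# voltage is conditioned (amplified and limited to the converter's input
# range) and then quantized by an A/D converter into a digital code.

def condition(voltage, gain=10.0, v_min=0.0, v_max=5.0):
    """Amplify the raw sensor signal and limit it to the ADC input range."""
    amplified = voltage * gain
    return max(v_min, min(v_max, amplified))

def adc(voltage, v_ref=5.0, bits=12):
    """Quantize a conditioned voltage into an n-bit digital code."""
    levels = (1 << bits) - 1
    return round(voltage / v_ref * levels)

# A raw 0.25 V reading is amplified to 2.5 V, then quantized
# to a 12-bit code near mid-scale.
raw_reading = 0.25
digital_sample = adc(condition(raw_reading))
```

In practice the conditioning stage may combine several of the elements enumerated above (filtering, isolation, linearization, etc.); the sketch shows only amplification and limiting for brevity.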
The unit may include a modem or transceiver coupled to a network port (such as a connector or antenna), for interfacing and communicating over a network. The sensor may be a CCD or CMOS based image sensor, for capturing still or video images. The image capturing hardware integrated with the unit may contain a photographic lens (through a lens opening) focusing the required image onto the image sensor. The image may be converted into a digital format by an image sensor AFE (Analog Front End) and an image processor. An image or video compressor may be used for compression of the image information, reducing the memory size and the data rate required for transmission over the communication medium. Similarly, the sensor may be a voice sensor such as a microphone, and the unit may similarly include a voice processor or a voice compressor (or both). The image or voice compression may be standard or proprietary, may be based on intraframe or interframe compression, and may be lossy or non-lossy compression.
An actuator herein may include one or more actuators, each affecting or generating a physical phenomenon in response to an electrical command, which can be an electrical signal (such as voltage or current), or a change of a characteristic (such as resistance or impedance) of a device. The actuators may be identical, similar or different from each other, and may affect or generate the same or different phenomena. Two or more actuators may be connected in series or in parallel. The actuator command signal may be conditioned by a signal conditioning circuit. The signal conditioner may involve time, frequency, or magnitude related manipulations. The signal conditioner may be linear or non-linear, and may include an amplifier, a voltage or current limiter, an attenuator, a delay line or circuit, a level translator, a galvanic isolator, an impedance transformer, a linearization circuit, a calibrator, a passive or active (or adaptive) filter, an integrator, a deviator, an equalizer, a spectrum analyzer, a compressor or a de-compressor, a coder (or decoder), a modulator (or demodulator), a pattern recognizer, a smoother, a noise remover, an average or RMS circuit, or any combination thereof. In the case of an analog actuator, a digital-to-analog (D/A) converter may be used to convert the digital command data to analog signals for controlling the actuators. The unit may include a computer for controlling and managing the unit operation, processing the actuator commands, and handling the unit communication. The unit may include a modem or transceiver coupled to a communication port (such as a connector or antenna), for interfacing and communicating over a network.
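The D/A step above is the mirror image of the sensor-side A/D conversion: a digital command code is mapped back to an analog drive level for the actuator. The following minimal sketch (hypothetical names and a 12-bit resolution, chosen only for illustration) shows that mapping.

```python
# Illustrative digital-to-analog (D/A) step for driving an analog
# actuator: an n-bit digital command code is scaled linearly into an
# output voltage between 0 V and the reference voltage.

def dac(code, v_ref=5.0, bits=12):
    """Convert an n-bit digital command into an analog output voltage."""
    levels = (1 << bits) - 1
    return code / levels * v_ref

drive_voltage = dac(2048)   # a mid-scale command yields roughly v_ref / 2
```

A real conditioning stage would then amplify, filter, or galvanically isolate this voltage, per the enumeration above, before it reaches the actuator.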
Some embodiments may be used in conjunction with one or more types of wireless communication signals and/or systems, for example, Radio Frequency (RF), Infrared (IR), Frequency-Division Multiplexing (FDM), Orthogonal FDM (OFDM), Time-Division Multiplexing (TDM), Time-Division Multiple Access (TDMA), Extended TDMA (E-TDMA), General Packet Radio Service (GPRS), extended GPRS, Code-Division Multiple Access (CDMA), Wideband CDMA (WCDMA), CDMA 2000, single-carrier CDMA, multi-carrier CDMA, Multi-Carrier Modulation (MCM), Discrete Multi-Tone (DMT), Bluetooth (RTM), Global Positioning System (GPS), Wi-Fi, Wi-Max, ZigBee (TM), Ultra-Wideband (UWB), Global System for Mobile communication (GSM), 2G, 2.5G, 3G, 3.5G, Enhanced Data rates for GSM Evolution (EDGE), or the like.
Discussions herein utilizing terms such as, for example, "processing," "computing," "calculating," "determining," "establishing", "analyzing", "checking", or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulate and/or transform data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information storage medium that may store instructions to perform operations and/or processes.
Throughout the description and claims of this specification, the word "couple", and variations of that word such as "coupling", "coupled", and "couplable", refer to an electrical connection (such as a copper wire or soldered connection), a logical connection (such as through logical devices of a semiconductor device), a virtual connection (such as through randomly assigned memory locations of a memory device) or any other suitable direct or indirect connections (including combination or series of connections), for example for allowing for the transfer of power, signal, or data, as well as connections formed through intervening devices or elements.
The arrangements and methods described herein may be implemented using hardware, software or a combination of both. The term "integration" or "software integration" or any other reference to the integration of two programs or processes herein refers to software components (e.g., programs, modules, functions, processes etc.) that are (directly or via another component) combined, working or functioning together or form a whole, commonly for sharing a common purpose or set of objectives. Such software integration can take the form of sharing the same program code, exchanging data, being managed by the same manager program, executed by the same processor, stored on the same medium, sharing the same GUI or other user interface, sharing peripheral hardware (such as a monitor, printer, keyboard and memory), sharing data or a database, or being part of a single package. The term "integration" or "hardware integration" or integration of hardware components herein refers to hardware components that are (directly or via another component) combined, working or functioning together or form a whole, commonly for sharing a common purpose or set of objectives. Such hardware integration can take the form of sharing the same power source (or power supply) or sharing other resources, exchanging data or control (e.g., by communicating), being managed by the same manager, physically connected or attached, sharing peripheral hardware connection (such as a monitor, printer, keyboard and memory), being part of a single package or mounted in a single enclosure (or any other physical collocating), sharing a communication port, or used or controlled with the same software or hardware. The term "integration" herein refers (as applicable) to a software integration, a hardware integration, or any combination thereof.
The term "port" refers to a place of access to a device, electrical circuit or network, where energy or signal may be supplied or withdrawn. The term "interface" of a networked device refers to a physical interface, a logical interface (e.g., a portion of a physical interface or sometimes referred to in the industry as a sub-interface - for example, such as, but not limited to a particular VLAN associated with a network interface), and/or a virtual interface (e.g., traffic grouped together based on some characteristic - for example, such as, but not limited to, a tunnel interface). As used herein, the term "independent" relating to two (or more) elements, processes, or functionalities, refers to a scenario where one does not affect nor preclude the other. For example, independent communication such as over a pair of independent data routes means that communication over one data route does not affect nor preclude the communication over the other data routes.
As used herein, the term "Integrated Circuit" (IC) shall include any type of integrated device of any function where the electronic circuit is manufactured by the patterned diffusion of trace elements into the surface of a thin substrate of semiconductor material (e.g., Silicon), whether single or multiple die, or small or large scale of integration, and irrespective of process or base materials (including, without limitation, Si, SiGe, CMOS and GaAs), including, without limitation, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), digital processors (e.g., DSPs, CISC microprocessors, or RISC processors), so-called "system-on-a-chip" (SoC) devices, memory (e.g., DRAM, SRAM, flash memory, ROM), mixed-signal devices, and analog ICs. The circuits in an IC are typically contained in a silicon piece or in a semiconductor wafer, and commonly packaged as a unit. The solid-state circuits commonly include interconnected active and passive devices, diffused into a single silicon chip. Integrated circuits can be classified into analog, digital and mixed-signal (both analog and digital on the same chip). Digital integrated circuits commonly contain many logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. Further, a multi-chip module (MCM) may be used, where multiple integrated circuits (ICs), semiconductor dies, or other discrete components are packaged onto a unifying substrate, facilitating their use as a single component (as though a larger IC).
As used herein, the terms "program", "programmable", and "computer program" are meant to include any sequence of human or machine cognizable steps which perform a function. Such programs are not inherently related to any particular computer or other apparatus, and may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.) and the like, as well as in firmware or other implementations. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
The terms "task" and "process" are used generically herein to describe any type of running program, including, but not limited to, a computer process, task, thread, executing application, operating system, user process, device driver, native code, machine or other language, etc., and can be interactive and/or non-interactive, executing locally and/or remotely, executing in the foreground and/or background, executing in the user and/or operating system address spaces, a routine of a library and/or a standalone application, and is not limited to any particular memory partitioning technique. The steps, connections, and processing of signals and information illustrated in the figures, including, but not limited to, any block and flow diagrams and message sequence charts, may typically be performed in the same or in a different serial or parallel ordering and/or by different components and/or processes, threads, etc., and/or over different connections, and be combined with other functions in other embodiments, unless this disables the embodiment or a sequence is explicitly or implicitly required (e.g., for a sequence of reading a value and processing the value, the value must be obtained prior to processing it, although some of the associated processing may be performed prior to, concurrently with, and/or after the read operation). Where certain process steps are described in a particular order, or where alphabetic and/or alphanumeric labels are used to identify certain steps, the embodiments are not limited to any particular order of carrying out such steps. In particular, the labels are used merely for convenient identification of steps, and are not intended to imply, specify or require a particular order for carrying out such steps. Furthermore, other embodiments may use more or fewer steps than those discussed herein.
The embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
The corresponding structures, materials, acts, and equivalents of all means plus function elements in the claims below are intended to include any structure, or material, for performing the function in combination with other claimed elements as specifically claimed. The above description has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. The present invention should not be considered limited to the particular embodiments described above, but rather should be understood to cover all aspects of the invention as fairly set out in the attached claims. Various modifications, equivalent processes, as well as numerous structures to which the present invention may be applicable, will be readily apparent to those skilled in the art to which the present invention is directed upon review of the present disclosure. Functions, operations, components and/or features described herein with reference to one or more embodiments may be combined with, or may be utilized in combination with, one or more other functions, operations, components and/or features described herein with reference to one or more other embodiments. While certain features of the embodiments have been illustrated and described herein, many modifications, substitutions, changes, and equivalents may occur to those skilled in the art. Accordingly, the claims are intended to cover all such modifications, substitutions, changes, and equivalents.
All publications, standards, patents, and patent applications cited in this specification are incorporated herein by reference as if each individual publication, patent, or patent application were specifically and individually indicated to be incorporated by reference and set forth in its entirety herein.

Claims

1. A device for generating a command in response to counting items using a sensor responsive to a phenomenon, for use with a communication network, the device comprising:
a sensor for producing sensor data in response to the phenomenon;
a software and a processor for executing the software, the processor coupled to the sensor to receive the sensor data therefrom, and to produce a command in response to the sensor data;
a transceiver coupled to the processor and operative for transmitting digital data to, and receiving digital data from, the network; and
a single enclosure housing the sensor, the processor, and the transceiver;
wherein the device is operative to produce the command in response to the count of items recognized in the sensor data, and wherein the device is addressable in the network.
2. The device according to claim 1 wherein the items are people.
3. The device according to claim 1 wherein the device is operative to count people in a queue.
4. The device according to claim 1 wherein the items are objects.
5. The device according to claim 1 wherein the device is operative to count objects on a shelf.
6. The device according to claim 1, wherein the sensor is a piezoelectric sensor that includes single crystal material or a piezoelectric ceramics and uses a transverse, longitudinal, or shear effect mode of the piezoelectric effect.
7. The device according to claim 1, further comprising multiple sensors arranged as a directional sensor array operative to estimate the number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of the phenomenon impinging the sensor array.
8. The device according to claim 1, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, and wherein the thermoelectric sensor consists of, or comprises, a Positive Temperature Coefficient (PTC) thermistor, a Negative Temperature Coefficient (NTC) thermistor, a thermocouple, a quartz crystal, or a Resistance Temperature Detector (RTD).
9. The device according to claim 1, wherein the sensor consists of, or comprises, a nanosensor, a crystal, or a semiconductor, or wherein: the sensor is ultrasonic based, the sensor is an eddy-current sensor, the sensor is a proximity sensor, the sensor is a bulk or surface acoustic sensor, or the sensor is an atmospheric or an environmental sensor.
10. The device according to claim 1 further integrated in part or entirely in a fixed location, mobile, or hand-held PoS terminal.
11. The device according to claim 10, wherein the PoS terminal is a battery-operated portable electronic device that is a notebook, a laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), an image processing device, a digital camera, a video recorder, or a handheld computing device.
12. The device according to claim 10, wherein the PoS is a terminal cash drawer, a receipt printer, a customer display, a barcode scanner, a debit/credit card reader, a conveyor belt, a weight scale, an integrated credit card processing device, a signature capture device, or a customer pin pad device.
13. The device according to claim 12, wherein the integration involves sharing a component.
14. The device according to claim 13, wherein the integration involves housing in the same enclosure, sharing same processor, or mounting onto the same surface.
15. The device according to claim 12, wherein the integration involves sharing a same connector.
16. The device according to claim 15, wherein the connector is a power connector for connecting to a power source, and wherein the integration involves sharing the same connector for being powered from the same power source.
17. The device according to claim 15, wherein the integration involves sharing the same power supply.
18. The device according to claim 1, wherein the sensor is a radiation sensor that responds to radioactivity, nuclear radiation, alpha particles, beta particles, or gamma rays, and is based on gas ionization.
19. The device according to claim 1, wherein the sensor is a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, and wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell.
20. The device according to claim 19, wherein the photoelectric sensor is based on Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element.
21. The device according to claim 1, wherein the sensor is a photosensitive image sensor array comprising multiple photoelectric sensors, for capturing an image and producing electronic image information representing the image, and the device further comprising one or more optical lenses for focusing the received light and guiding the image, and wherein the image sensor is disposed approximately at an image focal point plane of the one or more optical lenses for properly capturing the image.
22. The device according to claim 21, further comprising an image processor coupled to the image sensor for providing a digital data video signal according to a digital video format, the digital video signal carrying digital data video based on the captured images, and wherein the digital video format is based on one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards.
23. The device according to claim 22, further comprising an intraframe or interframe compression based video compressor coupled to the image sensor for lossy or non-lossy compression of the digital data video, wherein the compression is based on a standard compression algorithm which is one or more out of JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264 and ITU-T CCIR 601.
24. The device according to claim 1, wherein the sensor is an electrochemical sensor that responds to an object chemical structure, properties, composition, or reactions.
25. The device according to claim 24, wherein the electrochemical sensor is a pH meter or a gas sensor responding to a presence of radon, hydrogen, oxygen, or Carbon-Monoxide (CO), or wherein the electrochemical sensor is based on optical detection or on ionization and is a smoke, a flame, or a fire detector, or is responsive to combustible, flammable, or toxic gas.
26. The device according to claim 1, wherein the sensor is a physiological sensor that responds to parameters associated with a live body, and is external to the sensed body, implanted inside the sensed body, attached to the sensed body, or worn on the sensed body.
27. The device according to claim 26, wherein the physiological sensor responds to body electrical signals and is an Electroencephalography (EEG) or an Electrocardiography (ECG) sensor.
28. The device according to claim 26, wherein the physiological sensor responds to oxygen saturation, gas saturation, or a blood pressure in the sensed body.
29. The device according to claim 1, wherein the sensor is an electroacoustic sensor that responds to an audible or inaudible sound.
30. The device according to claim 29, wherein the electroacoustic sensor is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing the motion of a diaphragm or a ribbon caused by the incident sound, and the microphone consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
31. The device according to claim 1 wherein the sensor is an image sensor for capturing still or video image, and the device further comprising an image processor for processing the captured image to count the items in the captured image.
32. The device according to claim 31 wherein the image sensor is a digital video sensor for capturing digital video content, and wherein the image processor is further operative for enhancing the video content using image stabilization, unsharp masking, or super-resolution.
33. The device according to claim 31 wherein the image sensor is a digital video sensor for capturing digital video content, and wherein the image processor is operative for Video Content Analysis (VCA).
34. The device according to claim 33 wherein the VCA includes Video Motion Detection (VMD), video tracking, egomotion estimation, identification, behavior analysis, situation awareness, dynamic masking, motion detection, object detection, face recognition, automatic number plate recognition, tamper detection, video tracking, or pattern recognition.
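Of the Video Content Analysis techniques enumerated in claim 34, Video Motion Detection (VMD) is the simplest to illustrate: successive frames are differenced and motion is declared when enough pixels change. The sketch below is a minimal illustration of that idea; the frame size, threshold, and pixel-count criterion are assumptions, not values from the claims.

```python
# Minimal Video Motion Detection (VMD) sketch by frame differencing.
# Frames are modeled as 2-D grayscale arrays (lists of lists); a pixel
# counts as "moving" when its absolute change exceeds a threshold.

def motion_mask(prev, curr, threshold=25):
    """Return a binary mask marking pixels that changed significantly."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

def motion_detected(prev, curr, threshold=25, min_pixels=4):
    """Declare motion when enough pixels changed between the two frames."""
    mask = motion_mask(prev, curr, threshold)
    return sum(map(sum, mask)) >= min_pixels

# A static background frame and one where a bright object entered.
background = [[10] * 8 for _ in range(8)]
frame = [row[:] for row in background]
for y in range(2, 5):
    for x in range(3, 6):
        frame[y][x] = 200  # simulated moving object (3x3 = 9 pixels)

print(motion_detected(background, frame))  # True
```

Production VCA pipelines add background modeling, noise filtering, and object tracking on top of this basic differencing step.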
35. The device according to claim 1 wherein the network is a Body Area Network (BAN), the transceiver is a BAN transceiver, and the device further comprising a BAN port coupled to the BAN transceiver.
36. The device according to claim 35 wherein the BAN is a Wireless BAN (WBAN), the BAN port is an antenna, the BAN transceiver is a WBAN modem, and the BAN is according to, or based on, IEEE 802.15.6 standard.
37. The device according to claim 1 wherein the network is a Personal Area Network (PAN), the transceiver is a PAN transceiver, and the device further comprising a PAN port coupled to the PAN transceiver.
38. The device according to claim 37 wherein the PAN is a Wireless PAN (WPAN), the PAN port is an antenna, and the PAN transceiver is a WPAN modem, and wherein the WPAN is according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards, or wherein the WPAN is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards.
39. The device according to claim 1 wherein: the network is a Local Area Network (LAN); the transceiver is a LAN transceiver, and the device further comprising a LAN port coupled to the LAN transceiver.
40. The device according to claim 39 wherein: the LAN is a wired LAN that is using a wired LAN medium; the LAN port is a LAN connector; and the LAN transceiver is a LAN modem, and wherein: the LAN is Ethernet-based; and the wired LAN is according to, or based on, IEEE 802.3-2008 standard.
41. The device according to claim 40 wherein: the wired LAN medium is based on twisted-pair copper cables; the LAN interface is based on 10Base-T, 100Base-T, 100Base-TX, 100Base-T2, 100Base-T4, 1000Base-T, 1000Base-TX, 10GBase-CX4, or 10GBase-T; and the LAN connector is RJ-45 type, or wherein: the wired LAN medium is based on an optical fiber; the LAN interface is 10Base-FX, 100Base-SX, 100Base-BX, 100Base-LX10, 1000Base-CX, 1000Base-SX, 1000Base-LX, 1000Base-LX10, 1000Base-ZX, 1000Base-BX10, 10GBase-SR, 10GBase-LR, 10GBase-LRM, 10GBase-ER, 10GBase-ZR, or 10GBase-LX4; and the LAN connector is a fiber-optic connector.
42. The device according to claim 39 wherein: the LAN is a Wireless LAN (WLAN); the LAN port is a WLAN antenna; and the LAN transceiver is a WLAN modem, and wherein the WLAN is according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac.
43. The device according to claim 1 wherein the network is a packet-based or a circuit-switched- based Wide Area Network (WAN), the transceiver is a WAN transceiver, and the device further comprising a WAN port coupled to the WAN transceiver.
44. The device according to claim 43 wherein the WAN is a wired WAN that is using a wired WAN medium, the WAN port is a WAN connector, and the WAN transceiver is a WAN modem, and wherein the wired WAN medium comprises a wiring primarily installed for carrying a service signal to a building.
45. The device according to claim 44 wherein the wired WAN medium comprises one or more telephone wire pairs primarily designed for carrying an analog telephone signal, and wherein the network is using Digital Subscriber Line / Loop (DSL).
46. The device according to claim 45 wherein the network is based on Asymmetric Digital Subscriber Line (ADSL), ADSL2 or on ADSL2+, according to, or based on, ANSI T1.413, ITU-T Recommendation G.992.1, ITU-T Recommendation G.992.2, ITU-T Recommendation G.992.3, ITU-T Recommendation G.992.4, or ITU-T Recommendation G.992.5, or wherein the network is based on Very-high-bit-rate Digital Subscriber Line (VDSL), according to, or based on, ITU-T Recommendation G.993.1 or ITU-T Recommendation G.993.2.
47. The device according to claim 43 wherein the WAN is a wireless broadband network over a licensed or unlicensed radio frequency band, the WAN port is an antenna, and the WAN transceiver is a wireless modem, and wherein the unlicensed radio frequency band is an Industrial, Scientific and Medical (ISM) radio band.
48. The device according to claim 47 wherein the wireless network is a satellite network, the antenna is a satellite antenna, and the wireless modem is a satellite modem.
49. The device according to claim 47 wherein the wireless network is a WiMAX network, wherein the antenna is a WiMAX antenna and the wireless modem is a WiMAX modem, and the WiMAX network is according to, or based on, IEEE 802.16-2009.
50. The device according to claim 47 wherein the wireless network is a cellular telephone network, the antenna is a cellular antenna, and the wireless modem is a cellular modem, and wherein the cellular telephone network is a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 lxRTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, or wherein the cellular telephone network is a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, MBWA, or is based on IEEE 802.20-2008.
51. The device according to claim 1 wherein the network is a wireless network using a wireless communication over a licensed or an unlicensed radio frequency band.
52. The device according to claim 51 wherein the unlicensed radio frequency band is an Industrial, Scientific and Medical (ISM) radio band.
53. A Business Intelligence (BI) system for commanding an actuator operation in response to first and second sensor outputs respectively associated with first and second phenomena, for use with a network, the system comprising:
a first device comprising, or connectable to, the first sensor that responds to the first phenomenon, the first device is operative to transmit a first command corresponding to the first phenomenon over the network;
a second device comprising, or connectable to, the second sensor that responds to the second phenomenon, the second device is operative to transmit a second command corresponding to the second phenomenon over the network; and
a third device comprising, or connectable to, the actuator that affects a third phenomenon, the third device is operative to receive the first and second commands from the network and to activate the actuator in response to the first and second sensor data,
wherein the devices are addressable in the network.
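The arrangement of claim 53, in which two addressable sensor devices each translate a sensed phenomenon into a command sent over a network and a third addressable device activates its actuator only after receiving both commands, can be sketched as follows. The network is modeled as a simple in-memory message queue, and the device addresses, sensor readings, and thresholds are all illustrative assumptions.

```python
# Sketch of two sensor devices commanding one actuator device over a
# shared network, here modeled by a single-process message queue.
from queue import Queue

network = Queue()  # stands in for the shared network medium

class SensorDevice:
    def __init__(self, address, threshold):
        self.address = address      # devices are addressable in the network
        self.threshold = threshold

    def sense_and_transmit(self, reading):
        # Produce a command corresponding to the sensed phenomenon.
        command = "ON" if reading > self.threshold else "OFF"
        network.put((self.address, command))

class ActuatorDevice:
    def __init__(self, address):
        self.address = address
        self.active = False

    def receive_and_actuate(self):
        # Activate the actuator only if every received command says ON.
        commands = []
        while not network.empty():
            sender, command = network.get()
            commands.append(command)
        self.active = bool(commands) and all(c == "ON" for c in commands)

first = SensorDevice(address="dev-1", threshold=30.0)   # e.g. temperature
second = SensorDevice(address="dev-2", threshold=70.0)  # e.g. sound level
third = ActuatorDevice(address="dev-3")

first.sense_and_transmit(35.2)   # first phenomenon exceeds its threshold
second.sense_and_transmit(82.5)  # second phenomenon exceeds its threshold
third.receive_and_actuate()
print(third.active)  # True: both commands were ON
```

In a deployed system the queue would be replaced by the BAN/PAN/LAN/WAN transports recited in the earlier claims, with the same produce-command / receive-and-actuate division of labor.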
54. The system according to claim 53 wherein each of the first and second devices further comprises software and a processor for executing the software, the processor coupled to the respective sensor to receive the sensor data therefrom and to produce a respective command in response to the sensor data.
55. The system according to claim 53 wherein the first and second sensors are of the same type.
56. The system according to claim 53 wherein the first and second sensors are of distinct types.
57. The system according to claim 56 wherein the first and second sensors are operative to sense the same phenomenon.
58. The system according to claim 53 wherein the first device, the second device, or both, is further operative for counting items.
59. The system according to claim 58 wherein the items are people.
60. The system according to claim 59 operative to count people in a queue.
61. The system according to claim 59 operative to count people at a Point-of-Sale (PoS).
62. The system according to claim 58 wherein the items are objects.
63. The system according to claim 62 wherein the items are objects on a shelf.
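The item counting of claims 58-63 (people in a queue, objects on a shelf) is commonly performed on a thresholded camera image by counting connected groups of foreground pixels, each group taken as one item. The sketch below illustrates that approach with a flood-fill component counter; the tiny binary "shelf" image is an invented stand-in for real sensor output.

```python
# Count items in a binary image as 4-connected components of 1-pixels.

def count_items(binary_image):
    """Flood-fill each unseen foreground region and count the regions."""
    rows, cols = len(binary_image), len(binary_image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary_image[r][c] == 1 and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill this item's pixels
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and binary_image[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

shelf = [  # two separate objects on an otherwise empty shelf
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
print(count_items(shelf))  # 2
```

Counting people in a queue works the same way once detection has reduced the frame to person/background regions, though practical systems must also merge touching regions and filter noise.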
64. The system according to claim 53, wherein the first sensor is a piezoelectric sensor that includes single crystal material or a piezoelectric ceramics and uses a transverse, longitudinal, or shear effect mode of the piezoelectric effect.
65. The system according to claim 53, further comprising multiple sensors arranged as a directional sensor array operative to estimate the number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of the phenomenon impinging the sensor array.
66. The system according to claim 53, wherein the first sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation, and wherein the thermoelectric sensor consists of, or comprises, a Positive Temperature Coefficient (PTC) thermistor, a Negative Temperature Coefficient (NTC) thermistor, a thermocouple, a quartz crystal, or a Resistance Temperature Detector (RTD).
67. The system according to claim 53, wherein the first sensor consists of, or comprises, a nanosensor, a crystal, or a semiconductor, or wherein: the sensor is an ultrasonic based, the sensor is an eddy-current sensor, the sensor is a proximity sensor, the sensor is a bulk or surface acoustic sensor, or the sensor is an atmospheric or an environmental sensor.
68. The system according to claim 53, wherein the first sensor is a radiation sensor that responds to radioactivity, nuclear radiation, alpha particles, beta particles, or gamma rays, and is based on gas ionization.
69. The system according to claim 53, wherein the first sensor is a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, and wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell.
70. The system according to claim 69, wherein the photoelectric sensor is based on a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element.
71. The system according to claim 53, wherein the first sensor is a photosensitive image sensor array comprising multiple photoelectric sensors, for capturing an image and producing electronic image information representing the image, and the system further comprising one or more optical lenses for focusing the received light and guiding the image, and wherein the image sensor is disposed approximately at an image focal point plane of the one or more optical lenses for properly capturing the image.
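Claim 71 places the image sensor approximately at the focal plane of the lens. For a thin lens, the distance at which an object comes into sharp focus follows the thin-lens relation 1/f = 1/d_o + 1/d_i, so the in-focus plane sits slightly behind the focal point for any finite object distance. The focal length and object distance below are illustrative assumptions.

```python
# Thin-lens equation: solve 1/f = 1/d_o + 1/d_i for the image distance
# d_i, i.e. where the sensor should sit for a sharply focused image.

def image_distance(focal_length_mm, object_distance_mm):
    """Return the in-focus image distance behind the lens, in mm."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A 50 mm lens imaging an object 2 m away: the sensor plane lies a bit
# behind the 50 mm focal point.
d_i = image_distance(50.0, 2000.0)
print(round(d_i, 2))  # 51.28
```

As the object distance grows toward infinity, the second term vanishes and the in-focus plane converges to the focal plane itself, which is why the claim says "approximately at an image focal point plane".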
72. The system according to claim 71, further comprising an image processor coupled to the image sensor for providing a digital data video signal according to a digital video format, the digital video signal carrying digital data video based on the captured images, and wherein the digital video format is based on one out of: TIFF (Tagged Image File Format), RAW format, AVI, DV, MOV, WMV, MP4, DCF (Design Rule for Camera Format), ITU-T H.261, ITU-T H.263, ITU-T H.264, ITU-T CCIR 601, ASF, Exif (Exchangeable Image File Format), and DPOF (Digital Print Order Format) standards.
73. The system according to claim 72 further comprising an intraframe or interframe compression based video compressor coupled to the image sensor for lossy or non-lossy compressing the digital data video, wherein the compression is based on a standard compression algorithm which is one or more out of JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group), ITU-T H.261, ITU-T H.263, ITU-T H.264, and ITU-T CCIR 601.
74. The system according to claim 53, wherein the first sensor is an electrochemical sensor that responds to an object chemical structure, properties, composition, or reactions.
75. The system according to claim 74, wherein the electrochemical sensor is a pH meter or a gas sensor responding to a presence of radon, hydrogen, oxygen, or Carbon-Monoxide (CO), or wherein the electrochemical sensor is based on optical detection or on ionization and is a smoke, a flame, or a fire detector, or is responsive to combustible, flammable, or toxic gas.
76. The system according to claim 53, wherein the first sensor is a physiological sensor that responds to parameters associated with a live body, and is external to the sensed body, implanted inside the sensed body, attached to the sensed body, or wearable on the sensed body.
77. The system according to claim 76, wherein the physiological sensor is responding to body electrical signals and is an Electroencephalography (EEG) or an Electrocardiography (ECG) sensor.
78. The system according to claim 76, wherein the physiological sensor is responding to oxygen saturation, gas saturation, or a blood pressure in the sensed body.
79. The system according to claim 53, wherein the first sensor is an electroacoustic sensor that responds to an audible or inaudible sound.
80. The system according to claim 79, wherein the electroacoustic sensor is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing the motion of a diaphragm or a ribbon induced by the incident sound, and the microphone consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone.
81. The system according to claim 53 wherein the first sensor is an image sensor for capturing still or video image, and the system further comprising an image processor for processing the captured image to count the items in the captured image.
82. The system according to claim 81 wherein the image sensor is a digital video sensor for capturing digital video content, and wherein the image processor is further operative for enhancing the video content using image stabilization, unsharp masking, or super-resolution.
83. The system according to claim 81 wherein the image sensor is a digital video sensor for capturing digital video content, and wherein the image processor is operative for Video Content Analysis (VCA).
84. The system according to claim 83 wherein the VCA includes Video Motion Detection (VMD), video tracking, egomotion estimation, identification, behavior analysis, situation awareness, dynamic masking, motion detection, object detection, face recognition, automatic number plate recognition, tamper detection, video tracking, or pattern recognition.
85. The system according to claim 53, wherein the actuator is a light source that emits visible or non-visible light for illumination or indication, the non-visible light is infrared, ultraviolet, X-rays, or gamma rays, and wherein the light source is an electric light source for converting electrical energy into light.
86. The system according to claim 85, wherein the electric light source consists of, or comprises, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode.
87. The system according to claim 53, wherein the actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves.
88. The system according to claim 87, wherein the sound is audible, and wherein the sounder is an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or a planar magnetic loudspeaker, or a bending wave loudspeaker.
89. The system according to claim 87, wherein the sounder is operative to emit a single or multiple tones, or wherein the sounder is operative for continuous or intermittent operation.
90. The system according to claim 87, wherein the sounder is electromechanical or ceramic-based, and is an electric bell, a buzzer (or beeper), a chime, a whistle, or a ringer.
91. The system according to claim 87, wherein the sound is audible, and wherein the sounder is a loudspeaker, and wherein the system is operative to store and play one or more digital audio content files.
92. The system according to claim 91, wherein the one or more digital audio content files include a pre-recorded audio and are stored entirely or in part in the system.
93. The system according to claim 91, further comprising a synthesizer for producing the digital audio content.
94. The system according to claim 91, wherein the sensor is a microphone for capturing digital audio content and the system is further operative to play the captured digital audio content by the sounder.
95. The system according to claim 91, wherein the system is operative to select one of the one or more digital audio content files, and for playing the selected file by the sounder.
96. The system according to claim 91, wherein the one or more digital audio content comprises music content.
97. The system according to claim 91, wherein the music content includes the sound of an acoustical musical instrument that is a piano, a tuba, a harp, a violin, a flute, or a guitar.
98. The system according to claim 91, wherein the one or more digital audio content is a male or female human voice, and wherein the one or more digital audio content includes a human voice saying a syllable, a word, a phrase, a sentence, a short story or a long story.
99. The system according to claim 98 further comprising a speech synthesizer for producing a human speech, wherein the speech synthesizer is part of the system, and wherein the speech synthesizer is Text-To-Speech (TTS) based, is a concatenative type using unit selection, diphone synthesis, or domain-specific synthesis, or is a formant type that is articulatory synthesis or hidden Markov models (HMM) based.
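Claims 89 and 93 involve emitting tones and synthesizing audio content for the sounder. A minimal form of synthesis is generating PCM samples for a pure tone, as sketched below; the sample rate, frequency, and amplitude are assumptions, and a real synthesizer (let alone the TTS of claim 99) layers far more on top of this.

```python
# Generate PCM samples for a single sine tone, the simplest synthesized
# audio content a sounder could play.
import math

def synthesize_tone(freq_hz, duration_s, sample_rate=8000, amplitude=0.5):
    """Return a list of float PCM samples for a pure sine tone."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

samples = synthesize_tone(440.0, 0.01)  # 10 ms of A4
print(len(samples))  # 80 samples at 8 kHz
```

Multiple or intermittent tones, as recited in claim 89, amount to concatenating such sample blocks (and silences) before handing them to the loudspeaker.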
100. The system according to claim 53, wherein the actuator is an electric thermoelectric actuator and is a heater or a cooler, operative for affecting the temperature of a solid, a liquid, or a gas object, and is coupled to the object by conduction, convection, forced convection, thermal radiation, or by the transfer of energy by phase changes.
101. The system according to claim 100, wherein the thermoelectric actuator is a cooler based on a heat pump driving a refrigeration cycle using a compressor driven by an electric motor.
102. The system according to claim 100, wherein the thermoelectric actuator is an electric heater that is a resistance heater or a dielectric heater.
103. The system according to claim 102, wherein the electric heater is an induction heater or is solid-state based, and is an active heat pump system based on the Peltier effect.
104. The system according to claim 53, wherein the actuator is a display for visually presenting information.
105. The system according to claim 104, wherein the display is a monochrome, grayscale or color display and consists of an array of light emitters or light reflectors.
106. The system according to claim 104, wherein the display is a projector based on an Eidophor, Liquid Crystal on Silicon (LCoS or LCOS), LCD, MEMS or Digital Light Processing (DLP™) technology.
107. The system according to claim 106, wherein the projector is a virtual retinal display.
108. The system according to claim 104, wherein the display is a video display supporting Standard- Definition (SD) or High-Definition (HD) standards, and is capable of scrolling, static, bold or flashing the presented information.
109. The system according to claim 108, wherein the video display is a 3D video display.
110. The system according to claim 104, wherein the display is an analog display having an analog input interface supporting NTSC, PAL or SECAM formats, and the analog input interface includes RGB, VGA (Video Graphics Array), SVGA (Super Video Graphics Array), SCART or S-video interface.
111. The system according to claim 53, wherein the display is a digital display having a digital input interface that includes IEEE 1394, FireWire™, USB, SDI (Serial Digital Interface), HDMI (High-Definition Multimedia Interface), DVI (Digital Visual Interface), UDI (Unified Display Interface), DisplayPort, Digital Component Video, or DVB (Digital Video Broadcast) interface.
112. The system according to claim 53, wherein the display is a Cathode-Ray Tube (CRT), a Field Emission Display (FED), an Electroluminescent Display (ELD), a Vacuum Fluorescent Display (VFD), or an Organic Light-Emitting Diode (OLED) display, a passive-matrix (PMOLED) display, an active-matrix OLEDs (AMOLED) display, a Liquid Crystal Display (LCD) display, a Thin Film Transistor (TFT) display, an LED-backlit LCD display, or an Electronic Paper Display (EPD) display that is based on Gyricon technology, Electro-Wetting Display (EWD), or Electrofluidic display technology.
113. The system according to claim 53, wherein the display is a laser video display that is based on a Vertical-External-Cavity Surface-Emitting-Laser (VECSEL) or a Vertical-Cavity Surface-Emitting Laser (VCSEL).
114. The system according to claim 53, wherein the display is a segment display based on a seven-segment display, a fourteen-segment display, a sixteen-segment display, or a dot matrix display, and is operative to only display digits, alphanumeric characters, words, characters, arrows, symbols, ASCII, non-ASCII characters, or any combination thereof.
115. The system according to claim 53, wherein the actuator is a motion actuator that causes linear or rotary motion, and the system further comprising a conversion mechanism for respectively converting to rotary or linear motion based on a screw, a wheel and axle, or a cam.
116. The system according to claim 115, wherein the conversion mechanism is based on a screw, and wherein the system further includes a leadscrew, a screw jack, a ball screw or a roller screw, or wherein the conversion mechanism is based on a wheel and axle, and wherein the system further includes a hoist, a winch, a rack and pinion, a chain drive, a belt drive, a rigid chain, or a rigid belt, or wherein the motion actuator further comprising a lever, a ramp, a screw, a cam, a crankshaft, a gear, a pulley, a constant-velocity joint, or a ratchet, for affecting the motion.
117. The system according to claim 115, wherein the motion actuator is a pneumatic, hydraulic, or electrical actuator.
118. The system according to claim 117, wherein the motion actuator is an electrical motor.
119. The system according to claim 118, wherein the electrical motor is a brushed, a brushless, or an uncommutated DC motor.
120. The system according to claim 119, wherein the DC motor is a stepper motor that is a Permanent Magnet (PM) motor, a Variable Reluctance (VR) motor, or a hybrid synchronous stepper.
121. The system according to claim 118, wherein the electrical motor is an AC motor that is an induction motor, a synchronous motor, or an eddy current motor.
122. The system according to claim 121, wherein the AC motor is a single-phase AC induction motor, a two-phase AC servo motor, or a three-phase AC synchronous motor, and the AC motor is a split-phase motor, a capacitor-start motor, or a Permanent-Split Capacitor (PSC) motor.
123. The system according to claim 118, wherein the electrical motor is an electrostatic motor, a piezoelectric actuator, or a MEMS-based motor.
124. The system according to claim 115, wherein the motion actuator is a linear hydraulic actuator, a linear pneumatic actuator, a linear induction electric motor (LIM), or a Linear Synchronous electric Motor (LSM).
125. The system according to claim 115, wherein the motion actuator is based on a piezoelectric motor, a Surface Acoustic Wave (SAW) motor, a Squiggle motor, an ultrasonic motor, or a micro- or nanometer comb-drive capacitive actuator, a Dielectric or Ionic based Electroactive Polymers (EAPs) actuator, a solenoid, a thermal bimorph, or a piezoelectric unimorph actuator.
126. The system according to claim 53, wherein the actuator is a compressor or a pump and is operative to move, force, or compress a liquid, a gas, or a slurry.
127. The system according to claim 126, wherein the pump is a direct lift, an impulse, a displacement, a valveless, a velocity, a centrifugal, a vacuum, or a gravity pump.
128. The system according to claim 126, wherein the pump is a positive displacement pump that is a rotary lobe, a progressive cavity, a rotary gear, a piston, a diaphragm, a screw, a gear, a hydraulic, or a vane pump.
129. The system according to claim 128, wherein the positive displacement pump is a rotary-type positive displacement pump that is an internal gear, a screw, a shuttle block, a flexible vane, a sliding vane, a rotary vane, a circumferential piston, a helical twisted roots, or a liquid ring vacuum pump.
130. The system according to claim 128, wherein the positive displacement pump is a reciprocating-type positive displacement type that is a piston, a diaphragm, a plunger, a diaphragm valve, or a radial piston pump.
131. The system according to claim 128, wherein the positive displacement pump is a linear-type positive displacement type that is a rope-and-chain pump.
132. The system according to claim 126, wherein the pump is an impulse pump that is a hydraulic ram, a pulser, or an airlift pump.
133. The system according to claim 126, wherein the pump is a rotodynamic pump that is a velocity pump, or is a centrifugal pump that is a radial flow, an axial flow, or a mixed flow pump.
134. The system according to claim 53 configured to be installed in a store.
135. The system according to claim 134 configured to manage a queue of people.
136. The system according to claim 135 configured to manage a queue of people at a Point-of-Sale (PoS).
137. The system according to claim 53, wherein the first device, the second device, or the third device, is integrated in part or entirely in a fixed location, mobile, or hand-held PoS terminal.
138. The system according to claim 137, wherein the PoS terminal is a battery-operated portable electronic device that is a notebook, a laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), an image processing device, a digital camera, a video recorder, or a handheld computing device.
139. The system according to claim 137, wherein the PoS terminal is a cash drawer, a receipt printer, a customer display, a barcode scanner, a debit/credit card reader, a conveyor belt, a weight scale, an integrated credit card processing system, a signature capture device, or a customer pin pad device.
140. The system according to claim 139, wherein the integration involves sharing a component.
141. The system according to claim 140, wherein the integration involves housing in the same enclosure, sharing the same processor, or mounting onto the same surface.
142. The system according to claim 140, wherein the integration involves sharing a same connector.
143. The system according to claim 142, wherein the connector is a power connector for connecting to a power source, and wherein the integration involves sharing the same connector for being powered from the same power source, or wherein the integration involves sharing the same power supply.
PCT/IL2013/051080 2012-12-30 2013-12-30 Distributed business intelligence system and method of operation thereof WO2014102797A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261747314P 2012-12-30 2012-12-30
US61/747,314 2012-12-30

Publications (1)

Publication Number Publication Date
WO2014102797A1 true WO2014102797A1 (en) 2014-07-03

Family

ID=51019996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2013/051080 WO2014102797A1 (en) 2012-12-30 2013-12-30 Distributed business intelligence system and method of operation thereof

Country Status (1)

Country Link
WO (1) WO2014102797A1 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016183302A1 (en) * 2015-05-13 2016-11-17 Shelf Bucks, Inc. Systems and methods for dynamically transmitting content to potential customers
WO2016187648A1 (en) * 2015-05-25 2016-12-01 Kepler Analytics Pty Ltd Retail customer analytic system
CN107062788A (en) * 2016-12-30 2017-08-18 广东格兰仕集团有限公司 A kind of refrigerator and its safety protection method with home security
WO2018024506A1 (en) * 2016-08-01 2018-02-08 Philips Lighting Holding B.V. Adjustable luminaire and method using harvested nfc signals
CN109981162A (en) * 2019-03-27 2019-07-05 北京空间飞行器总体设计部 Data processing and Transmission system suitable for inertial space pointing space astronomical satellite
US10354222B2 (en) 2016-12-28 2019-07-16 Walmart Apollo, Llc Product tracking system
CN110033298A (en) * 2017-12-19 2019-07-19 佳能株式会社 Information processing equipment and its control method, system and storage medium
US10489742B2 (en) 2016-08-23 2019-11-26 Walmart Apollo, Llc System and method for managing retail products
US10565549B2 (en) 2016-08-23 2020-02-18 Walmart Apollo, Llc System and method for managing retail products
US10586205B2 (en) 2015-12-30 2020-03-10 Walmart Apollo, Llc Apparatus and method for monitoring stock information in a shopping space
US10586206B2 (en) 2016-09-22 2020-03-10 Walmart Apollo, Llc Systems and methods for monitoring conditions on shelves
CN111969703A (en) * 2020-07-22 2020-11-20 傲普(上海)新能源有限公司 User side mobile folding sunlight tracking storage system, method, terminal and storage medium
US10922145B2 (en) 2018-09-04 2021-02-16 Target Brands, Inc. Scheduling software jobs having dependencies
IT202000007789A1 (en) * 2020-04-14 2021-10-14 Sunland Optics Srl Visual system for automatic management of entry into commercial establishments and / or public offices or offices open to the public
CN115756797A (en) * 2022-11-25 2023-03-07 广州力麒智能科技有限公司 Queuing system scheduling detection method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5097328A (en) * 1990-10-16 1992-03-17 Boyette Robert B Apparatus and a method for sensing events from a remote location
WO2006087730A1 (en) * 2005-02-21 2006-08-24 Infosys Technologies Limited A real time business event monitoring, tracking, and execution architecture
US20070279214A1 (en) * 2006-06-02 2007-12-06 Buehler Christopher J Systems and methods for distributed monitoring of remote sites
US20080249859A1 (en) * 2007-04-03 2008-10-09 Robert Lee Angell Generating customized marketing messages for a customer using dynamic customer behavior data

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016183302A1 (en) * 2015-05-13 2016-11-17 Shelf Bucks, Inc. Systems and methods for dynamically transmitting content to potential customers
WO2016187648A1 (en) * 2015-05-25 2016-12-01 Kepler Analytics Pty Ltd Retail customer analytic system
US10586205B2 (en) 2015-12-30 2020-03-10 Walmart Apollo, Llc Apparatus and method for monitoring stock information in a shopping space
WO2018024506A1 (en) * 2016-08-01 2018-02-08 Philips Lighting Holding B.V. Adjustable luminaire and method using harvested nfc signals
US10489742B2 (en) 2016-08-23 2019-11-26 Walmart Apollo, Llc System and method for managing retail products
US10565549B2 (en) 2016-08-23 2020-02-18 Walmart Apollo, Llc System and method for managing retail products
US10586206B2 (en) 2016-09-22 2020-03-10 Walmart Apollo, Llc Systems and methods for monitoring conditions on shelves
US10354222B2 (en) 2016-12-28 2019-07-16 Walmart Apollo, Llc Product tracking system
CN107062788A (en) * 2016-12-30 2017-08-18 广东格兰仕集团有限公司 Refrigerator with home-security function and safety protection method therefor
CN110033298B (en) * 2017-12-19 2024-01-09 佳能株式会社 Information processing apparatus, control method thereof, system thereof, and storage medium
CN110033298A (en) * 2017-12-19 2019-07-19 佳能株式会社 Information processing equipment and its control method, system and storage medium
US11481789B2 (en) * 2017-12-19 2022-10-25 Canon Kabushiki Kaisha Information processing apparatus, system, control method for information processing apparatus, and non-transitory computer-readable storage medium
US10922145B2 (en) 2018-09-04 2021-02-16 Target Brands, Inc. Scheduling software jobs having dependencies
CN109981162B (en) * 2019-03-27 2022-03-04 北京空间飞行器总体设计部 Data processing and transmission system suitable for inertial-space-pointing space astronomical satellite
CN109981162A (en) * 2019-03-27 2019-07-05 北京空间飞行器总体设计部 Data processing and transmission system suitable for inertial-space-pointing space astronomical satellite
IT202000007789A1 (en) * 2020-04-14 2021-10-14 Sunland Optics Srl Visual system for automatic management of entry into commercial establishments and/or public offices or offices open to the public
CN111969703B (en) * 2020-07-22 2022-03-11 傲普(上海)新能源有限公司 User-side mobile folding sunlight-tracking storage system, method, terminal and storage medium
CN111969703A (en) * 2020-07-22 2020-11-20 傲普(上海)新能源有限公司 User-side mobile folding sunlight-tracking storage system, method, terminal and storage medium
CN115756797A (en) * 2022-11-25 2023-03-07 广州力麒智能科技有限公司 Queuing system scheduling detection method and device

Similar Documents

Publication Publication Date Title
WO2014102797A1 (en) Distributed business intelligence system and method of operation thereof
US11240311B2 (en) System and method for server based control
US10641863B2 (en) Power saving intelligent locator
US20220180268A1 (en) Coordinated delivery of dining experiences
US20190043064A1 (en) Real-time qualitative analysis
EP4104134A1 (en) Coordinated delivery of dining experiences
Devare, Analysis and design of an IoT-based physical location monitoring system
WO2023244527A1 (en) Coordinated delivery of dining experiences
AU2020356637A1 (en) Platform for soliciting, processing and managing commercial activity across a plurality of disparate commercial systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13867935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13867935

Country of ref document: EP

Kind code of ref document: A1