US20080316315A1 - Methods and systems for alerting by weighing data based on the source, time received, and frequency received - Google Patents


Info

Publication number
US20080316315A1
US20080316315A1 (Application US12/203,613)
Authority
US
United States
Prior art keywords
data
video
weights
input
input data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/203,613
Other versions
US7876351B2
Inventor
John J. Donovan
Daniar Hussain
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIERRA VISTA GROUP LLC
Original Assignee
KD Secure LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KD Secure LLC
Priority to US12/203,613 (granted as US7876351B2)
Assigned to KD SECURE, LLC (assignors: DONOVAN, JOHN J; HUSSAIN, DANIAR)
Publication of US20080316315A1
Application granted
Publication of US7876351B2
Assigned to TIERRA VISTA GROUP, LLC (assignor: KD SECURE, LLC)
Corrective assignment to SECURENET SOLUTIONS GROUP, LLC, correcting the assignee previously recorded on Reel 032948, Frame 0401 (assignor: KD SECURE, LLC)
Legal status: Expired - Fee Related; adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention is generally related to security and safety systems. More specifically, this invention relates to processing of data from various systems, including video data, and generating alerts based on relative weights attributed to the data, the source of the data, as well as weights attributed to external events.
  • the present invention may be used for various security and safety purposes, including fighting crime and ensuring safety procedures are followed.
  • the present invention is a method and system for intelligent monitoring and intelligent alerting.
  • One or more data inputs are received from one or more systems.
  • a weight is attributed to each data input based on such factors as the input data, the source of the input data, external events, etc.
  • One or more video inputs are received from one or more video sources.
  • Image analysis is performed on the video data to determine one or more video parameters.
  • the video parameters may include motion, duration of motion, face detection, etc.
  • the video parameters are assigned one or more weights based on such factors as the magnitude of the video parameters, the reliability of the video source, etc.
  • a series of rules are evaluated using the data inputs, the video inputs, and their respective weights.
  • An associated action is performed for each rule that is activated.
  • An action may be an email alert, an address on the public address system, an automatic call to the police, etc.
  • an accumulated value is calculated from the data inputs, the video inputs, and their respective weights.
  • a hierarchy of actions is performed based on the accumulated value and one or more threshold values.
  • the weights, the rules, and the actions are configurable by a system administrator.
  • the system administrator may customize the types of actions, their hierarchy, under what conditions actions are escalated, etc.
  • forced alerts are used, in which a person at a given level must respond to an alert, and if the person does not respond, the alert is automatically escalated to a higher level.
  • An authorized user, such as a security officer, can view the status of the alerts at any time using a terminal connected to a network.
  • the security officer has an interface which shows meters representing an accumulation of all data inputs and video inputs.
  • the meters show the relationship of the accumulated value and the thresholds.
  • the meters may be sliding bars, circular gauges, or any alternative design. For example, the meter may go from grey to yellow when motion is detected in a given area of a certain video source for a given period of time. The meter may then turn red when in addition to the motion being detected, a certain individual swipes through a given entrance and the time is after a certain hour. This may be applied to a situation in which an executive enters his or her office building late at night, and there is motion detected for more than ten minutes behind his desk.
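The grey/yellow/red meter behavior above can be sketched as a threshold map over the accumulated value. This is only an illustration: the patent leaves the accumulation function and thresholds to the system administrator, so the numeric values below are invented.

```python
def meter_color(accumulated: float, yellow_threshold: float, red_threshold: float) -> str:
    """Map an accumulated alert value onto a meter color.

    Threshold values are hypothetical; the patent leaves them
    to the system administrator.
    """
    if accumulated >= red_threshold:
        return "red"
    if accumulated >= yellow_threshold:
        return "yellow"
    return "grey"

# Motion alone (weight 0.4) keeps the meter grey; adding a late-night
# badge swipe (weight 0.5) pushes the accumulated value past yellow.
print(meter_color(0.4, 0.6, 1.0))        # grey
print(meter_color(0.4 + 0.5, 0.6, 1.0))  # yellow
```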
  • FIG. 1 illustrates a system architecture of one embodiment of the present invention
  • FIG. 2 illustrates a software architecture of one embodiment of the present invention
  • FIG. 3 illustrates a hardware architecture of one embodiment of the present invention
  • FIG. 4 illustrates a flowchart of a process according to one embodiment of the present invention.
  • the present invention provides intelligent security and safety monitoring.
  • the present invention may be implemented as a modular system that can utilize several core components that may be integrated together: video detection components, input components, action components, and service components.
  • a rules engine codifies and evaluates various rules, such as “issue an alert to person A when motion is detected in location B for time period C.”
  • the video detection components are used to extract relevant video parameters from the video sources; the video parameters are input into the rules engine.
  • the input components may be used to receive inputs from other systems, for example sensory devices, such as temperature probes.
  • the action components represent various actions that may be taken under certain conditions, and may be activated by the rules engine.
  • the service components provide interfaces for services performed by human beings (“Artificial artificial intelligence”), for example remote monitoring by off-shore security guards (“Mechanical Turks”).
  • the present invention may be implemented using any number of detection, input, action, and service components. Some illustrative components are presented here, but the present invention is not limited to this list of components.
  • An advantage of the present invention is the open architecture, in which new components may be added as they are developed.
  • FIG. 1 shows system architecture 100 of one embodiment of the present invention.
  • One or more data inputs 102 are received via one or more input components 104 (only one input component is illustrated for clarity).
  • the data inputs could be data from police reports, anonymous tips, sensory devices, etc. In one embodiment, data inputs could come from a personnel database in storage 112 and from temperature probe 116 .
  • the input components, such as input component 104 provide interfaces between the system 100 and various input devices.
  • the data inputs 102 are assigned a weight by data weight engine 106 .
  • the weights may be a function of the input data, the source of the input data (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one input data is shown being processed by data weight engine 106 for clarity.)
  • One or more video inputs 107 are received and processed by one or more detection components 108 (only one video detection component is illustrated for clarity).
  • the video inputs could be historical, archived video data, such as video from storage 112 , or could be video data from live video cameras, such as camera 114 or camera 115 .
  • the detection components such as detection component 108 , determine one or more video parameters from the video inputs 107 .
  • detection component 108 may detect whether or not there is a person in a particular region of video input 107 .
  • the one or more video parameters that are determined by the detection component 108 are assigned a weight by video weight engine 110 .
  • the weights may be a function of the video data, the video source (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one video parameter is shown being processed by video weight engine 110 for clarity.)
  • Cameras 114 and 115 may be digital IP cameras, digital PC cameras, web-cams, analog cameras, cameras attached to camera servers, etc. Any camera device is within the scope of the present invention, as long as the camera device can capture video. Some cameras may have an integrated microphone; alternatively, a separate microphone may be used to capture audio data along with video data. As used herein, the terms “video,” “video data,” “video source,” etc. are meant to include video without audio, as well as video with interlaced audio (audiovisual information). Of course, it is to be understood that the present invention may also be implemented using audio data without accompanying video data by replacing cameras with microphones.
  • the weighted input data and the weighted video data are processed by rules engine 120 .
  • Rules engine 120 evaluates a set of rules based on the weighted input data and the weighted video data.
  • the rules engine 120 activates one or more actions via one or more action components 122 .
  • the rules engine 120 may contain a rule stating: “Issue email alert to Executive A (Action Component 1 ) if Executive A swipes into office building (Data Input Component 1 ) and within the last twenty minutes there was motion for more than five minutes in the region behind his desk on the camera in his office (Detection Component 1 ).” If the preconditions of the rule are satisfied, the action is performed.
  • the preconditions may be weighted based on the data, the source of the data, external events, and other information. For example, the executive swiping into a building would be given a higher weight than a tip saying that the executive has entered the building. A security guard logging into a system that the executive has entered the building may receive an even higher weight.
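The weighted preconditions above can be illustrated with a small sketch of the executive-office rule. Everything numeric here is an assumption: the patent only orders the sources (a guard's log entry outweighs a badge swipe, which outweighs a tip), so the weights and the 0.5 firing threshold are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str        # e.g. "badge_swipe", "anonymous_tip", "guard_log"
    minutes_ago: float

# Hypothetical reliability weights: the patent only orders them
# (guard log > badge swipe > anonymous tip); the numbers are invented.
SOURCE_WEIGHTS = {"anonymous_tip": 0.3, "badge_swipe": 0.7, "guard_log": 0.9}

def executive_alert(entry_events, motion_minutes, threshold=0.5):
    """Fire the email-alert rule when a sufficiently weighted entry
    event falls within the last twenty minutes and motion behind the
    desk lasted more than five minutes."""
    entry_weight = max(
        (SOURCE_WEIGHTS[e.source] for e in entry_events if e.minutes_ago <= 20),
        default=0.0,
    )
    return entry_weight > threshold and motion_minutes > 5

# A badge swipe with six minutes of motion fires the rule;
# an anonymous tip alone is weighted too low to fire it.
print(executive_alert([Event("badge_swipe", 2)], motion_minutes=6))    # True
print(executive_alert([Event("anonymous_tip", 2)], motion_minutes=6))  # False
```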
  • data may also come from a service component 118 .
  • Service components, such as service component 118 , are interfaces to human operators (“Artificial artificial intelligence”).
  • a service component may provide an interface for human operators to monitor a given area for suspicious activity, and to send a signal to the rules engine 120 that suspicious activity is going on in a given area.
  • the rules engine 120 will activate an action if a corresponding rule is activated.
  • the human operator may force an action to be performed by directly activating an action component, such as action component 122 .
  • Equations 1 to 4 show possible rules that may be evaluated by rules engine 120 .
  • action component a 1 will be activated if the expression on the left-hand side is greater than a predetermined threshold θ 1 .
  • In Eqs. 1-4, “a” stands for action component, “f,” “g,” and “h” are predetermined functions, “w” stands for weight, “x” stands for the input data, and “v” stands for video data.
  • Eqs. 1-4 could represent a hierarchy of actions that would be activated for different threshold scenarios. Alternatively, Eqs. 1-4 could represent several rules being evaluated in parallel.
  • Eqs. 1-4 are illustrative of only one embodiment of the present invention, and the present invention may be implemented using other equations, other expressions, or even by using heuristic rules rather than equations.
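Since Eqs. 1-4 survive only as the prose around them, here is a minimal sketch of the described scheme: a weighted sum of inputs compared against per-action thresholds θ. The action names and threshold values are invented for illustration.

```python
def accumulated_value(data, weights):
    """Weighted sum of data inputs and video parameters (cf. Eqs. 1-4)."""
    return sum(w * x for w, x in zip(weights, data))

def actions_to_activate(value, thresholds):
    """Return every action whose threshold the accumulated value exceeds.

    `thresholds` maps an action name to its threshold θ; the names
    here are illustrative, not from the patent.
    """
    return [a for a, theta in sorted(thresholds.items(), key=lambda kv: kv[1])
            if value > theta]

thresholds = {"log_event": 0.2, "email_alert": 0.5, "call_police": 0.9}
v = accumulated_value([1.0, 1.0], [0.3, 0.4])   # 0.7
print(actions_to_activate(v, thresholds))
```

Evaluating the rules in parallel (or as a hierarchy) then reduces to activating every returned action, or only the highest one, respectively.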
  • Equation 5 shows an example of a calculation of determining a weight that may be performed by data weight engine 106 or video weight engine 110 .
  • the weight “w” may be based on several factors, including the source of the data “s” (for example, the reliability of the source), the time that the data was received “t” (for example, older data would be assigned a lower weight), and the frequency that the data was received “f” (for example, the same data received multiple times would be assigned a higher weight).
  • Other weighting factors may also be used, and the weighing factors described here are illustrative only and are not intended to limit the scope of the invention.
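The three factors of Eq. 5 can be sketched as follows. The patent names the factors (source “s,” time “t,” frequency “f”) but not how they combine, so the exponential recency decay and logarithmic frequency bonus below are assumptions chosen only to match the stated directions (older data weighs less, repeated data weighs more).

```python
import math

def data_weight(source_reliability, minutes_old, times_received, half_life=60.0):
    """Sketch of Eq. 5: w as a function of source, time, and frequency.

    The combining function and the 60-minute half-life are assumptions;
    the patent specifies only the three factors.
    """
    recency = math.exp(-minutes_old / half_life)   # older data weighs less
    frequency = 1.0 + math.log(times_received)     # repeated data weighs more
    return source_reliability * recency * frequency

# A fresh, twice-received police report outweighs a stale anonymous tip.
print(data_weight(0.9, minutes_old=5, times_received=2) >
      data_weight(0.3, minutes_old=120, times_received=1))  # True
```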
  • Equation 6 shows an example of a calculation that may be performed by detection component 108 to determine a video parameter “v i ” from the video data “v(t)”.
  • the video parameter “v i ” may be obtained as a function “f i ” of the integral.
  • a detection component for counting the number of people that enter a region over a period of time may perform face detection in a given frame, count the number of faces detected, and then integrate over several frames to obtain a final count.
  • the function “f i ” of Eq. 6 may be a composition of several functions, as shown in Equation 7.
  • a detection component may count the number of people wearing a safety helmet that enter a given area by composing a safety helmet detection function with a people counting function.
  • f i = f 1 ∘ f 2 ∘ . . . ∘ f n  (7)
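The composition in Eq. 7 maps directly onto function composition in code. The helmet-detection and people-counting functions below are hypothetical stand-ins for the patent's detection components, reduced to toy data structures.

```python
from functools import reduce

def compose(*funcs):
    """f 1 ∘ f 2 ∘ . . . ∘ f n as in Eq. 7 (rightmost function applied first)."""
    return reduce(lambda f, g: lambda x: f(g(x)), funcs)

# Hypothetical stand-ins for the patent's detectors: extract the people
# in a frame, keep only helmet wearers, then count them.
detect_people  = lambda frame: frame["people"]
filter_helmets = lambda people: [p for p in people if p["helmet"]]
count          = len

count_helmeted = compose(count, filter_helmets, detect_people)

frame = {"people": [{"helmet": True}, {"helmet": False}, {"helmet": True}]}
print(count_helmeted(frame))  # 2
```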
  • the new, or future, weights “w j ” may be based on the past weights “w i ” and external events “e i ”. Examples of external events could be “Amber Alerts” for missing children, “National Terror Alerts” in the United States, etc.
  • Eq. 8 shows an example of a calculation for determining new, or future, weights “w j ” by composing a matrix of past weights “w i ” with external events “e i ”.
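A sketch of the Eq. 8 update follows. The patent describes composing past weights with external events but does not fix the operation, so a per-source multiplier is assumed here; the source names and numbers are invented.

```python
# Past weights w_i per source (hypothetical values).
past_weights = {"camera_lobby": 0.6, "anonymous_tip": 0.3}

# Hypothetical effect of a raised National Terror Alert: every source
# is boosted, anonymous tips most of all.
event_multipliers = {"camera_lobby": 1.5, "anonymous_tip": 2.0}

# New weights w_j = w_i composed with the external-event factor e_i.
new_weights = {source: round(w * event_multipliers.get(source, 1.0), 3)
               for source, w in past_weights.items()}
print(new_weights)  # {'camera_lobby': 0.9, 'anonymous_tip': 0.6}
```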
  • FIG. 2 shows a software architecture 200 of one embodiment of the present invention.
  • a presentation layer 202 provides the front-end interface to users of the system 100 of FIG. 1 .
  • a user interface is provided for an administrator, who can modify various system parameters, such as the data input components, the detection components, the data and video weights, the rules, as well as the action components.
  • Another user interface is provided for an officer, such as a security guard, to monitor the activity of the system 100 .
  • a user interface for the security officer would allow the officer to monitor alerts system-wide, turn on and off appropriate cameras, and notify authorities.
  • An interface is also provided for an end-user, such as an executive.
  • the interface for the end-user allows, for example, the end-user to monitor those alerts relevant to him or her, as well as to view those cameras and video sources he or she has permission to view.
  • Various user interfaces may be created for various users of the present invention, and the present invention is not limited to any particular user interface shown or described here.
  • a middle layer 204 provides the middleware logic for the system 100 .
  • the middle layer 204 includes the weight engines 106 , 110 as well as the rule engine 120 of FIG. 1 .
  • the middle layer interfaces with the user interface 202 and evaluates the logic of Equations 1-8.
  • a database layer 206 is provided for storing the input data and the video data.
  • the database layer 206 may be implemented using a hierarchical storage architecture, in which older data, or less frequently used data, is migrated to slower and cheaper storage media.
  • the database layer 206 provides the input data and the video data to the middle layer 204 , which in turn processes the data for display by the presentation layer 202 .
  • FIG. 3 shows a hardware architecture 300 of one embodiment of the present invention.
  • the software architecture 200 may be implemented using any hardware architecture, of which FIG. 3 is illustrative.
  • a bus 314 connects the various hardware subsystems.
  • a display 302 is used to present the output of the presentation layer 202 of FIG. 2 .
  • An I/O interface 304 provides an interface to input devices, such as keyboard and mouse (not shown).
  • a network interface 305 provides connectivity to a network, such as an Ethernet network, a Local Area Network (LAN), a Wide Area Network (WAN), an IP network, the Internet, etc.
  • RAM 306 provides working memory while executing a process according to system architecture 100 of FIG. 1 .
  • Hard disk 308 provides the program code for execution of a process according to system architecture 100 of FIG. 1 .
  • CPU 309 executes program code stored on hard disk 308 or RAM 306 , and controls the other system components.
  • Hierarchical storage manager 310 provides an interface to one or more storage modules 312 on which video data is stored. It is to be understood that this is only an illustrative hardware architecture on which the present invention may be implemented, and the present invention is not limited to the particular hardware shown or described here. It is also understood that numerous hardware components have been omitted for clarity, and that various hardware components may be added without departing from the spirit and scope of the present invention.
  • FIG. 4 illustrates a process 400 according to one embodiment of the present invention.
  • Process 400 may be stored in hard disk 308 and RAM 306 , and may be executed on CPU 309 of FIG. 3 .
  • the process starts at step 402 .
  • Input data from one or more data sources is received, as shown in step 404 .
  • Video data from one or more video sources is received, as shown in step 406 .
  • Image analysis is performed on the video data to generate one or more video parameters, as shown in step 408 .
  • One or more data weights are calculated for the input data, as shown in step 410 .
  • One or more video weights are calculated for the video parameters, as shown in step 412 .
  • a set of rules is evaluated using the input data, the data weights, the video parameters, and the video weights, as shown in step 414 .
  • One or more actions are performed based on the evaluation of the rules, as shown in step 416 .
  • Process 400 ends in step 418 .
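Steps 404-416 of process 400 can be sketched as a single pipeline pass. All callables here are placeholders for the patent's components: `analyze` stands in for a detection component, `weigh_data` and `weigh_video` for the weight engines, and each rule maps the weighted inputs to an action name or None.

```python
def run_monitoring_pass(data_inputs, video_frames, analyze,
                        weigh_data, weigh_video, rules):
    """One pass of FIG. 4: weigh all inputs, evaluate rules, collect actions."""
    video_params  = [analyze(f) for f in video_frames]        # step 408
    data_weights  = [weigh_data(d) for d in data_inputs]      # step 410
    video_weights = [weigh_video(v) for v in video_params]    # step 412
    weighted = (list(zip(data_inputs, data_weights)) +
                list(zip(video_params, video_weights)))
    results = [rule(weighted) for rule in rules]              # step 414
    return [a for a in results if a is not None]              # step 416

# Toy pass: one temperature reading and one frame with motion.
actions = run_monitoring_pass(
    data_inputs=[85.0],                      # step 404
    video_frames=[{"motion": True}],         # step 406
    analyze=lambda f: 1.0 if f["motion"] else 0.0,
    weigh_data=lambda d: 0.5,
    weigh_video=lambda v: 0.8,
    rules=[lambda w: "email_alert" if sum(x * wt for x, wt in w) > 40 else None],
)
print(actions)  # ['email_alert']
```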
  • various detection components may be used to determine one or more video parameters from the video inputs. These detection components may be configured to record meta-data along with an occurrence of each event. For example, if a person is detected in an area by a face detection component, meta-data may be stored along with each occurrence of that person in the video.
  • Some illustrative detection components are listed below. However, the present invention is not limited to these detection components, and various detection components may be used to determine one or more video parameters, and are all within the scope of the present invention.
  • various sensory devices may be integrated into system 100 of FIG. 1 by adding an input component for receiving and processing the input from the sensory device.
  • Some illustrative input components are listed below. However, the present invention is not limited to these input components, and various other input components associated with various other sensory and other devices are within the scope of the present invention.
  • action components may be used to perform one or more actions in response to a rule being activated.
  • the rules engine may activate one or more action components under certain conditions defined by the rules.
  • Some illustrative action components are listed below. However, the present invention is not limited to these particular action components, and other action components are within the scope of the present invention.
  • service components may be used to integrate human intelligence into system 100 .
  • a service component may provide a user interface for remote security guards (“Mechanical Turks”) who may monitor the video inputs.
  • Some illustrative examples of what the security guards could monitor for and detect are listed below.
  • Some events, such as “suspicious behavior,” which may be hard for a computer to detect, may be detected by a human operator (“Artificial artificial intelligence”).
  • the human operators may also add meta-data for each occurrence of an event.
  • a security guard may add meta-data to each portion of a video where he or she noticed suspicious activity.
  • the present invention is not limited to the examples described here, and is intended to cover all such service components which may be added to detect various events using a human operator.
  • the components listed above may be reused and combined to create advanced applications. Many advanced applications may be assembled by using various combinations and sub-combinations of components. The following discussion illustrates several advanced applications that may be created using the above components: a university security application, and a workflow safety monitoring application.
  • a security application may be created for a university, college, or school using appropriate components selected from the above.
  • cameras and gunshot detection devices may be installed around a campus.
  • the gunshot detection devices are interfaced to the system 100 via an appropriate gunshot input component.
  • the cameras are monitored by appropriate detection components, for example, a face detection component may be utilized in order to detect faces in a video image.
  • Various action components may be installed, including an action component to alert the campus police and an action component to send a text message via SMS to all students on campus.
  • the university's student and personnel system may also be interfaced to the system 100 via an appropriate input component.
  • the rules engine would be configured by a system administrator at the university.
  • a sample rule may say, “If only one card swipe is registered in the student system, while two or more people are detected passing a certain threshold on a video camera monitoring a turnstile (tailgating), then issue an audible alert to the security guard.”
  • Another rule may say “If a sensory device has detected a gunshot, then issue an alert to the campus police as well as a text message via SMS to all students on campus.”
  • a service component may be added, which may provide an interface for a security guard sitting in a central location on campus to monitor all alerts coming into the system.
  • the service component may include a user interface for the security guard to view selected cameras, notify the police, or issue alerts to all students.
  • the present invention is not limited to this particular scenario.
  • a student is detected tailgating behind another student to gain entrance into a dormitory, by using the face detection and student system input components described above.
  • an audible alert would be automatically issued to anybody in the vicinity of the dormitory entrance.
  • a gunshot is detected by one of the gunshot devices, and the data enters the system via a gunshot input component.
  • the rules engine would evaluate the second rule, and automatically notify the police and send a mass text message to all students on campus.
  • Such a system configuration of the present invention could have prevented the second set of shootings at Virginia Tech.
  • another action component may be activated, for example a “Follow Person” component.
  • a “Follow Person” component would track the person who was detected as tail-gating through the entrance and follow that person through multiple cameras around campus. The person may be followed either in real time, switching from camera to camera as the person moves, or by retracing the steps the person has already taken. Further, the person's steps on previous visits may also be retrieved using a face recognition component.
  • an input component may provide an interface to the police system, for example, a database of arrest and reports of criminal activities.
  • the data inputs and video inputs are weighted according to their sources. For example, data from police records are weighted highly, while data from anonymous tips are weighted lower.
  • the weighted values are input into the rules engine which determines the alerts or other actions to take based on the rules.
  • Another application may be created for safety, auditing, and security of a facility using the above components.
  • the safety system may enforce various workflow, process flow, and/or safety rules and regulations. For example, if a safety alert is issued, then certain cameras may be turned on to monitor each of various steps necessary to correct the safety problem by monitoring the number of people at each step of the process, the length of time each person stays at each step of the process, etc.
  • various sensory devices may monitor the refinery. If a sensory device detects something irregular, the rules engine issues an audible alert and activates certain cameras to ensure that a proper response is taken. For example, if the pressure in the refinery goes up, then cameras corresponding to those valves that need to be adjusted are turned on. If no face is detected in those cameras for the next ten minutes, then the rules engine may escalate the alert to appropriate management.
  • a system constructed according to the present invention from the components described here could have prevented the explosion in the Texas refinery.
  • a signature may be captured via a signature input component.
  • a face may be captured via a camera installed on the ATM machine.
  • a face recognition component may be used to determine whether the face is associated with the signature, and/or the bank card that was inserted into the ATM machine.
  • Various alerts may be issued based on the face recognized, the time of day, the length of time the person is at the ATM, the amount of money that is being withdrawn, and the number of people in front of the ATM.
  • the rules may also be codified to perform certain actions if someone is detected sleeping in the ATM booth, or if a suspicious individual enters the ATM booth.
  • a security application may be created using the components of the present invention for a residential community with commonly shared facilities, such as streets, roads, playgrounds, tennis courts, swimming pools, etc.
  • Digital cameras would be installed in strategic places, e.g., on the streets overlooking the playgrounds.
  • Video data would be gathered and displayed in real-time over a wireless or wired network so that any resident (or security guard) can watch any camera from anywhere at any time.
  • Action components would be used to send an alert if a child is noticed alone in the playground, if unauthorized cars are cruising the street, etc.
  • other sensory devices may be added to the community security application by using other input components.
  • sensors for radon levels, CO2 levels, fire, smoke, dust particles, etc. may be added by using an appropriate input component.
  • Rules in the rules engine could be customized; for example, if a fire is detected, then alerts may be sent to individuals who are affected as well as to the fire department.
  • Another sample rule that could be implemented in the rules engine would detect a person who has not left their house for a certain period of time and alert appropriate relatives. Yet another sample rule would send alerts if power failed in the community.
  • water and electric meters having the capability of transmitting data electronically may be used with appropriate input components.
  • An input component may receive data from water meters and the rules engine may monitor water usage and time of day, present water usage versus past water usage, etc. and send alerts for possible leaks, broken pipes, etc.
  • An input component may receive data from electrical meters and the rules engine may monitor electric usage and time of day, present electric usage versus past electric usage, etc. and send alerts for possible broken wires, electrical shorts, etc.
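The water-meter rule above reduces to comparing present usage against historical usage for the same time of day. A minimal sketch, where the 3x ratio threshold is an assumption (the patent only says present usage is compared with past usage):

```python
def water_leak_alert(current_usage, past_usage_same_hour, ratio=3.0):
    """Alert when present water usage far exceeds historical usage for
    the same time of day. The 3x threshold is an assumption."""
    return current_usage > ratio * past_usage_same_hour

# 120 gallons this hour against a historical 30 suggests a leak or broken pipe.
print(water_leak_alert(120.0, 30.0))  # True
```

The electric-meter rule would follow the same pattern with a different input component and thresholds.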
  • More advanced applications may be created using storage of historical data.
  • a database may be created of past intruders or fraudulent perpetrators. Their faces may be recorded in the database. Future incidences may be prevented by matching suspicious individuals against the database.
  • the rules may be set by a system administrator.
  • the rules may be heuristically updated. For example, the rules may be learned based on past occurrences.
  • a learning component may be added which can recognize missing rules. If an alert was not issued when it should have been, this may be noted by an administrator of the system, and a new rule may be automatically generated.
  • encryption is provided for added privacy.
  • a user may restrict who on the internet may watch the surveillance cameras and have access to the data. Access may also be restricted by time, location, and camera. Stored historical data may also be encrypted for privacy.

Abstract

Methods, systems, and apparatus for monitoring, alerting, and acting, including the following steps. Input data is received from one or more sensory devices. One or more data weights are determined for the input data based on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received. A set of rules is evaluated based on the input data and the data weights. One or more actions, including a hierarchy of one or more alerts, are activated based on the result of the evaluation of the rules.

Description

    REFERENCE TO RELATED APPLICATIONS
  • This application claims priority from U.S. Ser. No. 11/746,043, filed on May 8, 2007, issued on ______ as U.S. Pat. No. ______, the entirety of which is hereby incorporated by reference herein.
  • FIELD OF THE INVENTION
  • The present invention is generally related to security and safety systems. More specifically, this invention relates to processing of data from various systems, including video data, and generating alerts based on relative weights attributed to the data, the source of the data, as well as weights attributed to external events. The present invention may be used for various security and safety purposes, including fighting crime and ensuring safety procedures are followed.
  • BACKGROUND OF THE INVENTION
  • Governments, corporations, universities, other institutions, and individuals are increasingly concerned about security and safety. In one example involving crime, on Apr. 16, 2007, a student at Virginia Tech University killed 32 people and injured 24 others. As a result, many parents, students, and university administrators are increasingly concerned about security on college campuses.
  • In an example involving safety, an explosion in a Texas oil refinery killed 15 people and injured 180 others. The U.S. Chemical Safety Board determined that various factors, one of which was the absence of adequate experience in the refinery, contributed to the accident: “As the unit was being heated, the Day Supervisor, an experienced ISOM operator, left the plant at 10:47 a.m. due to a family emergency. The second Day Supervisor was devoting most of his attention to the final stages of the ARU startup; he had very little ISOM experience and, therefore, did not get involved in the ISOM startup. No experienced supervisor or ISOM technical expert was assigned to the raffinate section startup after the Day Supervisor left, although BP's safety procedures required such oversight.” (See Investigation Report: Refinery Explosion and Fire, Chemical Safety Board, March 2007, pg. 52.) Accordingly, large and small corporations are concerned about ensuring that proper safety and security procedures are followed.
  • Therefore, as recognized by the present inventors, what is needed is a method, apparatus, and system for intelligent security and safety. What is needed is a method for monitoring data from various systems, including video data. What is also needed is a method for intelligent alerting of appropriate individuals based on the data.
  • Accordingly, it would be an advancement in the state of the art to provide an apparatus, system, and method for intelligent security and safety that receives data inputs from various systems, including video cameras, and that generates intelligent alerts based on the data inputs.
  • It is against this background that various embodiments of the present invention were developed.
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention is a method and system for intelligent monitoring and intelligent alerting. One or more data inputs are received from one or more systems. A weight is attributed to each data input based on such factors as the input data, the source of the input data, external events, etc. One or more video inputs are received from one or more video sources. Image analysis is performed on the video data to determine one or more video parameters. The video parameters may include motion, duration of motion, face detection, etc. The video parameters are assigned one or more weights based on such factors as the magnitude of the video parameters, the reliability of the video source, etc. A series of rules are evaluated using the data inputs, the video inputs, and their respective weights. An associated action is performed for each rule that is activated. An action may be an email alert, an announcement over the public address system, an automatic call to the police, etc.
  • In one embodiment, an accumulated value is calculated from the data inputs, the video inputs, and their respective weights. A hierarchy of actions is performed based on the accumulated value and one or more threshold values.
  • The weights, the rules, and the actions are configurable by a system administrator. The system administrator may customize the types of actions, their hierarchy, under what conditions actions are escalated, etc. In one embodiment, forced alerts are used, in which a person at a given level must respond to an alert, and if the person does not respond, the alert is automatically escalated to a higher level.
  • An authorized user, such as a security officer, can view the status of the alerts at any time using a terminal connected to a network. The security officer has an interface which shows meters representing an accumulation of all data inputs and video inputs. The meters show the relationship of the accumulated value and the thresholds. The meters may be sliding bars, circular gauges, or any alternative design. For example, the meter may go from grey to yellow when motion is detected in a given area of a certain video source for a given period of time. The meter may then turn red when in addition to the motion being detected, a certain individual swipes through a given entrance and the time is after a certain hour. This may be applied to a situation in which an executive enters his or her office building late at night, and there is motion detected for more than ten minutes behind his desk.
  • Other features, utilities and advantages of the various embodiments of the present invention will be apparent from the following more particular description of embodiments of the invention as illustrated in the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a system architecture of one embodiment of the present invention;
  • FIG. 2 illustrates a software architecture of one embodiment of the present invention;
  • FIG. 3 illustrates a hardware architecture of one embodiment of the present invention; and
  • FIG. 4 illustrates a flowchart of a process according to one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention provides intelligent security and safety monitoring. The present invention may be implemented as a modular system that can utilize several core components that may be integrated together: video detection components, input components, action components, and service components. A rules engine codifies and evaluates various rules, such as “issue an alert to person A when motion is detected in location B for time period C.” The video detection components are used to extract relevant video parameters from the video sources; the video parameters are input into the rules engine. The input components may be used to receive inputs from other systems, for example sensory devices, such as temperature probes. The action components represent various actions that may be taken under certain conditions, and may be activated by the rules engine. Finally, the service components provide interfaces for services performed by human beings (“Artificial artificial intelligence”), for example remote monitoring by off-shore security guards (“Mechanical Turks”).
  • The present invention may be implemented using any number of detection, input, action, and service components. Some illustrative components are presented here, but the present invention is not limited to this list of components. An advantage of the present invention is the open architecture, in which new components may be added as they are developed.
  • FIG. 1 shows system architecture 100 of one embodiment of the present invention. One or more data inputs 102 are received via one or more input components 104 (only one input component is illustrated for clarity). The data inputs could be data from police reports, anonymous tips, sensory devices, etc. In one embodiment, data inputs could come from a personnel database in storage 112 and from temperature probe 116. The input components, such as input component 104, provide interfaces between the system 100 and various input devices. The data inputs 102 are assigned a weight by data weight engine 106. The weights may be a function of the input data, the source of the input data (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one input data is shown being processed by data weight engine 106 for clarity.)
  • One or more video inputs 107 are received and processed by one or more detection components 108 (only one video detection component is illustrated for clarity). The video inputs could be historical, archived video data, such as video from storage 112, or could be video data from live video cameras, such as camera 114 or camera 115. The detection components, such as detection component 108, determine one or more video parameters from the video inputs 107. For example, detection component 108 may detect whether or not there is a person in a particular region of video input 107. The one or more video parameters that are determined by the detection component 108 are assigned a weight by video weight engine 110. The weights may be a function of the video data, the video source (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one video parameter is shown being processed by video weight engine 110 for clarity.)
  • Cameras 114 and 115 may be digital IP cameras, digital PC cameras, web-cams, analog cameras, cameras attached to camera servers, etc. Any camera device is within the scope of the present invention, as long as the camera device can capture video. Some cameras may have an integrated microphone; alternatively, a separate microphone may be used to capture audio data along with video data. As used herein, the terms “video,” “video data,” “video source,” etc. are meant to include video without audio, as well as video with interlaced audio (audiovisual information). Of course, it is to be understood that the present invention may also be implemented using audio data without accompanying video data by replacing cameras with microphones.
  • The weighted input data and the weighted video data (outputs from the data weight engine 106 and the video weight engine 110) are processed by rules engine 120. Rules engine 120 evaluates a set of rules based on the weighted input data and the weighted video data. The rules engine 120 activates one or more actions via one or more action components 122. For example, the rules engine 120 may contain a rule stating: “Issue email alert to Executive A (Action Component 1) if Executive A swipes into office building (Data Input Component 1) and within the last twenty minutes there was motion for more than five minutes in the region behind his desk on the camera in his office (Detection Component 1).” If the preconditions of the rule are satisfied, the action is performed. As discussed previously, the preconditions may be weighted based on the data, the source of the data, external events, and other information. For example, the executive swiping into a building would be given a higher weight than a tip saying that the executive has entered the building. A security guard logging into the system to report that the executive has entered the building may receive an even higher weight.
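  • The example rule above can be sketched in code as a set of weighted preconditions guarding an action. This is a minimal illustration only; the rule structure, fact names, weights, and threshold values are hypothetical and not part of the patent's implementation.

```python
# A minimal sketch of a rules engine evaluating weighted preconditions.
# All names and numbers below are illustrative assumptions.

def make_rule(preconditions, action):
    """Return a rule that yields its action when every precondition holds."""
    def evaluate(facts):
        if all(check(facts) for check in preconditions):
            return action
        return None
    return evaluate

# Weighted facts gathered from input and detection components: the badge
# swipe (a high-reliability source) and motion behind the desk.
facts = {
    "badge_swipe_weight": 0.9,   # executive swiped into the building
    "motion_minutes": 6.0,       # minutes of motion in the desk region
}

email_alert_rule = make_rule(
    preconditions=[
        lambda f: f["badge_swipe_weight"] >= 0.8,
        lambda f: f["motion_minutes"] > 5.0,
    ],
    action="email alert to Executive A",
)

result = email_alert_rule(facts)   # -> "email alert to Executive A"
```

  • A lower-weight source, such as an anonymous tip with a weight below the 0.8 threshold, would leave the rule inactive under this sketch.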
  • In FIG. 1, data may also come from a service component 118. Service components, such as service component 118, are interfaces to human operators (“Artificial artificial intelligence”). For example, a service component may provide an interface for human operators to monitor a given area for suspicious activity, and to send a signal to the rules engine 120 that suspicious activity is going on in a given area. The rules engine 120 will activate an action if a corresponding rule is activated. Alternatively, the human operator may force an action to be performed by directly activating an action component, such as action component 122.
  • Equations 1 to 4 show possible rules that may be evaluated by rules engine 120. For example, as shown in Eq. 1, action component a1 will be activated if the expression on the left-hand side is greater than a predetermined threshold τ1. In Eqs. 1-4, “a” stands for action component, “f, g, and h” are predetermined functions, “w” stands for weight, “x” stands for the input data, and “v” stands for video data. Eqs. 1-4 could represent a hierarchy of actions that would be activated for different threshold scenarios. Alternatively, Eqs. 1-4 could represent several rules being evaluated in parallel. Eqs. 1-4 are illustrative of only one embodiment of the present invention, and the present invention may be implemented using other equations, other expressions, or even by using heuristic rules rather than equations.

  • a_1: f_1(Σ_{i=1}^{n} w_i·x_i) + g_1(Σ_{i=1}^{m} w_i·v_i) + h_1(∫_{t=1}^{t_n} w(v)·v(t) dt) ≥ τ_1  (1)

  • a_2: f_2(Σ_{i=1}^{n} w_i·x_i) + g_2(Σ_{i=1}^{m} w_i·v_i) + h_2(∫_{t=1}^{t_n} w(v)·v(t) dt) ≥ τ_2  (2)

  • …  (3)

  • a_j: f_j(Σ_{i=1}^{n} w_i·x_i) + g_j(Σ_{i=1}^{m} w_i·v_i) + h_j(∫_{t=1}^{t_n} w(v)·v(t) dt) ≥ τ_j  (4)
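  • One way to read Eqs. 1-4 is as an accumulated value compared against a hierarchy of thresholds, as described in the summary above. The sketch below omits the integral term and uses made-up weights, thresholds, and action names; it is an illustration of the structure, not the patent's implementation.

```python
# Hedged sketch of Eqs. 1-4 (integral term omitted): weighted data inputs
# and weighted video parameters are accumulated, then compared against an
# ordered hierarchy of thresholds tau. All numbers are illustrative.

def accumulated_value(data, data_weights, video, video_weights):
    return (sum(w * x for w, x in zip(data_weights, data)) +
            sum(w * v for w, v in zip(video_weights, video)))

# (tau, action) pairs ordered from most to least severe.
HIERARCHY = [
    (10.0, "automatic call to police"),
    (5.0, "notify security guard"),
    (2.0, "email alert"),
]

def select_action(value):
    for tau, action in HIERARCHY:
        if value >= tau:
            return action
    return None

value = accumulated_value([1.0, 2.0], [0.5, 1.0], [3.0], [1.5])
# value = 0.5*1.0 + 1.0*2.0 + 1.5*3.0 = 7.0
action = select_action(value)   # -> "notify security guard"
```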
  • Equation 5 shows an example of a calculation for determining a weight that may be performed by data weight engine 106 or video weight engine 110. The weight “w” may be based on several factors, including the source of the data “s” (for example, the reliability of the source), the time that the data was received “t” (for example, older data would be assigned a lower weight), and the frequency with which the data was received “f” (for example, the same data received multiple times would be assigned a higher weight). Other weighting factors may also be used, and the weighting factors described here are illustrative only and are not intended to limit the scope of the invention.

  • w_i = s_i · t_i · … · f_i  (5)
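  • One hypothetical instantiation of Eq. 5 is sketched below: source reliability s_i as a 0-1 score, the time factor t_i as exponential decay with the data's age, and the frequency factor f_i growing with repeat reports. The half-life and logarithmic form are illustrative assumptions, not prescribed by the invention.

```python
import math

# Illustrative form of Eq. 5: w_i = s_i * t_i * f_i. The decay half-life
# and the log-based frequency factor are assumptions for this sketch.

def weight(source_reliability, age_seconds, occurrences, half_life=3600.0):
    time_factor = 0.5 ** (age_seconds / half_life)   # older data counts less
    frequency_factor = 1.0 + math.log(occurrences)   # repeats count more
    return source_reliability * time_factor * frequency_factor

police_report = weight(0.9, age_seconds=0, occurrences=1)      # -> 0.9
anonymous_tip = weight(0.3, age_seconds=3600, occurrences=1)   # -> 0.15
```

  • Under these assumptions a fresh police report outweighs an hour-old anonymous tip by a factor of six, reflecting both source reliability and recency.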
  • Equation 6 shows an example of a calculation that may be performed by detection component 108 to determine a video parameter “vi” from the video data “v(t)”. Eq. 6 shows a video stream “v(t)” weighted by a weighting function “w(v)” and integrated over time from time t=1 to t=tn. The video parameter “vi” may be obtained as a function “fi” of the integral. For example, a detection component for counting the number of people that enter a region over a period of time may perform face detection in a given frame, count the number of faces detected, and then integrate over several frames to obtain a final count.

  • v_i = f_i(∫_{t=1}^{t_n} w(v)·v(t) dt)  (6)
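  • A discrete analogue of Eq. 6 can be sketched as follows: per-frame detection values v(t) are weighted, summed over frames, and passed through a function f_i to give the video parameter. The per-frame face counts and the uniform weight here are made-up illustrations.

```python
# Discrete sketch of Eq. 6: v_i = f(sum_t w(v_t) * v_t). The frame
# values below stand in for a real face detector's per-frame output.

def video_parameter(frame_values, w, f):
    return f(sum(w(v) * v for v in frame_values))

frames = [2, 3, 3, 2]   # hypothetical faces detected in each frame

# Average face count over the frames: uniform weight 1/N, identity f.
avg_faces = video_parameter(frames,
                            w=lambda v: 1.0 / len(frames),
                            f=lambda total: total)
# avg_faces = (2 + 3 + 3 + 2) / 4 = 2.5
```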
  • In one embodiment, the function “fi” of Eq. 6 may be a composition of several functions, as shown in Equation 7. For example, a detection component may count the number of people wearing a safety helmet that enter a given area by composing a safety helmet detection function with a people counting function.

  • f_i = f_1 ∘ f_2 ∘ … ∘ f_n  (7)
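  • The safety-helmet example can be sketched as function composition per Eq. 7. The helmet filter and people counter below are stand-ins for real detection functions, not components defined by the patent.

```python
from functools import reduce

# Eq. 7 as function composition: f_i = f_1 ∘ f_2 ∘ … ∘ f_n, with the
# rightmost function applied first.

def compose(*functions):
    return reduce(lambda f, g: lambda x: f(g(x)), functions)

# Hypothetical stand-ins for detection functions:
keep_helmet_wearers = lambda people: [p for p in people if p["helmet"]]
count_people = len

count_helmeted = compose(count_people, keep_helmet_wearers)

people = [{"helmet": True}, {"helmet": False}, {"helmet": True}]
n = count_helmeted(people)   # -> 2
```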
  • In one embodiment, the new, or future, weights “wj” may be based on the past weights “wi” and external events “ei”. Examples of external events could be “Amber Alerts” for missing children, “National Terror Alerts” in the United States, etc. Eq. 8 shows an example of a calculation for determining new, or future, weights “wj” by composing a matrix of past weights “wi” with external events “ei”.
  • [w_1, w_2, …, w_j]_new = [e_1, e_2, …, e_n] · [w_1, w_2, …, w_j]_past  (8)
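  • One reading of Eq. 8 can be sketched as a matrix-vector product, where each new weight is a combination of the past weights with coefficients derived from external events. The 2x up-scaling during a heightened alert level is a hypothetical choice for illustration.

```python
# Sketch of Eq. 8: new_w[j] = sum_k event_matrix[j][k] * past_w[k].
# The event-derived coefficients below are made up.

def update_weights(event_matrix, past_weights):
    return [sum(e * w for e, w in zip(row, past_weights))
            for row in event_matrix]

past = [1.0, 0.5]
# During a heightened National Terror Alert, double every weight:
events = [[2.0, 0.0],
          [0.0, 2.0]]
new = update_weights(events, past)   # -> [2.0, 1.0]
```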
  • FIG. 2 shows a software architecture 200 of one embodiment of the present invention. A presentation layer 202 provides the front-end interface to users of the system 100 of FIG. 1. Several user interfaces are provided. For example, a user interface is provided for an administrator, who can modify various system parameters, such as the data input components, the detection components, the data and video weights, the rules, as well as the action components. Another user interface is provided for an officer, such as a security guard, to monitor the activity of the system 100. For example, a user interface for the security officer would allow the officer to monitor alerts system-wide, turn on and off appropriate cameras, and notify authorities. An interface is also provided for an end-user, such as an executive. The interface for the end-user allows, for example, the end-user to monitor those alerts relevant to him or her, as well as to view those cameras and video sources he or she has permission to view. Various user interfaces may be created for various users of the present invention, and the present invention is not limited to any particular user interface shown or described here.
  • A middle layer 204 provides the middleware logic for the system 100. The middle layer 204 includes the weight engines 106, 110 as well as the rule engine 120 of FIG. 1. The middle layer interfaces with the user interface 202 and evaluates the logic of Equations 1-8.
  • A database layer 206 is provided for storing the input data and the video data. In one embodiment, the database layer 206 may be implemented using a hierarchical storage architecture, in which older data, or less frequently used data, is migrated to slower and cheaper storage media. The database layer 206 provides the input data and the video data to the middle layer 204, which in turn processes the data for display by the presentation layer 202.
  • FIG. 3 shows a hardware architecture 300 of one embodiment of the present invention. The software architecture 200 may be implemented using any hardware architecture, of which FIG. 3 is illustrative. A bus 314 connects the various hardware subsystems. A display 302 is used to present the output of the presentation layer 202 of FIG. 2. An I/O interface 304 provides an interface to input devices, such as keyboard and mouse (not shown). A network interface 305 provides connectivity to a network, such as an Ethernet network, a Local Area Network (LAN), a Wide Area Network (WAN), an IP network, the Internet, etc. RAM 306 provides working memory while executing a process according to system architecture 100 of FIG. 1. Hard disk 308 provides the program code for execution of a process according to system architecture 100 of FIG. 1. CPU 309 executes program code stored on hard disk 308 or RAM 306, and controls the other system components. Hierarchical storage manager 310 provides an interface to one or more storage modules 312 on which video data is stored. It is to be understood that this is only an illustrative hardware architecture on which the present invention may be implemented, and the present invention is not limited to the particular hardware shown or described here. It is also understood that numerous hardware components have been omitted for clarity, and that various hardware components may be added without departing from the spirit and scope of the present invention.
  • FIG. 4 illustrates a process 400 according to one embodiment of the present invention. Process 400 may be stored in hard disk 308 and RAM 306, and may be executed on CPU 309 of FIG. 3. The process starts at step 402. Input data from one or more data sources is received, as shown in step 404. Video data from one or more video sources is received, as shown in step 406. Image analysis is performed on the video data to generate one or more video parameters, as shown in step 408. One or more data weights are calculated for the input data, as shown in step 410. One or more video weights are calculated for the video parameters, as shown in step 412. A set of rules is evaluated using the input data, the data weights, the video parameters, and the video weights, as shown in step 414. One or more actions are performed based on the evaluation of the rules, as shown in step 416. Process 400 ends in step 418.
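  • Steps 404-416 of process 400 can be condensed into a single pass, sketched below. The analyzer, weight functions, and rule shown are trivial placeholders for the components of FIG. 1, not the patent's implementation.

```python
# Process 400 as one pass: receive data, analyze video, weigh both,
# evaluate rules, collect actions. All callables here are placeholders.

def run_once(input_data, video_data, analyze, weigh_data, weigh_video, rules):
    video_params = analyze(video_data)                      # step 408
    data_weights = [weigh_data(x) for x in input_data]      # step 410
    video_weights = [weigh_video(p) for p in video_params]  # step 412
    actions = []
    for rule in rules:                                      # step 414
        act = rule(input_data, data_weights, video_params, video_weights)
        if act is not None:
            actions.append(act)                             # step 416
    return actions

# Trivial placeholders to exercise the loop:
actions = run_once(
    input_data=[1.0],
    video_data=["frame"],
    analyze=lambda v: [3.0],      # one video parameter
    weigh_data=lambda x: 0.5,
    weigh_video=lambda p: 1.0,
    rules=[lambda x, dw, vp, vw: "alert" if vp[0] * vw[0] >= 2.0 else None],
)
# actions == ["alert"]
```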
  • According to the present invention, various detection components may be used to determine one or more video parameters from the video inputs. These detection components may be configured to record meta-data along with each occurrence of an event. For example, if a person is detected in an area by a face detection component, meta-data may be stored along with each occurrence of that person in the video. Some illustrative detection components are listed below. However, the present invention is not limited to these detection components, and various detection components may be used to determine one or more video parameters, and are all within the scope of the present invention.
  • 1. Detect presence of intruder in designated area
  • 2. Detect presence of intruder in designated area during designated time
  • 3. Detect whether it is a person in designated area (excluding pets, wind, etc.)
  • 4. Detect number of people in designated area
  • 5. Detect if more people entered a designated area than left the designated area
  • 6. Detect voice (sound) volume
  • 7. Recognize certain sound patterns, such as gunshots or shouts
  • 8. Detect certain key words
  • 9. Detect speed of motion of an object
  • 10. Detect size of object
  • 11. Detect area of motion
  • 12. Detect acceleration
  • 13. Detect if person is too short in designated area
  • 14. Detect if person is too tall in designated area
  • 15. Detect a face
  • 16. Recognize a certain face
  • 17. Detect object left in a given area for a certain period of time
  • 18. Count number of vehicles
  • 19. Detect if vehicle crossed lane
  • 20. Detect if vehicle is driving the wrong way in a lane
  • 21. Determine type of vehicle
  • 22. Detect license plate of vehicle
  • 23. Detect percent of lane occupied
  • 24. Detect speed of vehicle
  • Additionally, various sensory devices may be integrated into system 100 of FIG. 1 by adding an input component for receiving and processing the input from the sensory device. Some illustrative input components are listed below. However, the present invention is not limited to these input components, and various other input components associated with various other sensory and other devices are within the scope of the present invention.
  • 1. Measure temperature
  • 2. Measure pressure
  • 3. Measure height
  • 4. Measure speed
  • 5. Measure revolutions per minute
  • 6. Measure blood pressure
  • 7. Measure heart rate
  • 8. Measure RFID signal
  • 9. Measure chlorine level
  • 10. Measure radon level
  • 11. Measure dust particle level
  • 12. Measure pollution level
  • 13. Measure CO2 emission level
  • 14. Measure bacteria level in water
  • 15. Measure water meter
  • 16. Measure electrical meter
  • As described above, various action components may be used to perform one or more actions in response to a rule being activated. The rules engine may activate one or more action components under certain conditions defined by the rules. Some illustrative action components are listed below. However, the present invention is not limited to these particular action components, and other action components are within the scope of the present invention.
  • 1. Send email alert to designated person
  • 2. Send SMS alert to designated phone number
  • 3. Send message to designated BlackBerry
  • 4. Send alert to public address system
  • 5. Send message or picture to police
  • 6. Send alert email to mass mailing list
  • 7. Send text message (SMS) to mass list
  • 8. Send alert to PC or PocketPC
  • 9. Call designated phone
  • 10. Turn lights on or off in designated area
  • 11. Turn thermostat up or down
  • 12. Turn camera on or off
  • 13. Issue a forced alert (with automatic escalation if no response)
  • 14. Follow a person using a Pan-Tilt-Zoom (PTZ) camera
  • 15. Follow a person from camera to camera
  • According to the present invention, service components may be used to integrate human intelligence into system 100. For example, a service component may provide a user interface for remote security guards (“Mechanical Turks”) who may monitor the video inputs. Some illustrative examples of what the security guards could monitor for and detect are listed below. Some events, such as “suspicious behavior,” which may be hard for a computer to detect, may be detected by a human operator (“Artificial artificial intelligence”). The human operators may also add meta-data for each occurrence of an event. For example, a security guard may add meta-data to each portion of a video where he or she noticed suspicious activity. The present invention is not limited to the examples described here, and is intended to cover all such service components which may be added to detect various events using a human operator.
  • 1. Detect people going into building but not coming out
  • 2. Detect people carrying packages in and not carrying out
  • 3. Detect people carrying packages out but not carrying in
  • 4. Detect people wearing different clothes
  • 5. Detect people acting suspiciously
  • 6. Detect people carrying guns
  • 7. Detect people tampering with locks
  • 8. Detect people being mugged
  • 9. Detect a shooting
  • 10. Detect people being bullied
  • The components listed above may be reused and combined to create advanced applications. Many advanced applications may be assembled by using various combinations and sub-combinations of components. The following discussion illustrates several advanced applications that may be created using the above components: a university security application, and a workflow safety monitoring application.
  • A security application may be created for a university, college, or school using appropriate components selected from the above. For example, cameras and gunshot detection devices may be installed around a campus. The gunshot detection devices are interfaced to the system 100 via an appropriate gunshot input component. The cameras are monitored by appropriate detection components, for example, a face detection component may be utilized in order to detect faces in a video image. Various action components may be installed, including an action component to alert the campus police and an action component to send a text message via SMS to all students on campus. The university's student and personnel system may also be interfaced to the system 100 via an appropriate input component. The rules engine would be configured by a system administrator at the university. For example, a sample rule may say, “If only one card swipe is registered in the student system, while two or more people are detected passing a certain threshold on a video camera monitoring a turnstile (tailgating), then issue an audible alert to the security guard.” Another rule may say “If a sensory device has detected a gunshot, then issue an alert to the campus police as well as a text message via SMS to all students on campus.”
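  • The two sample campus rules can be sketched as simple predicates over the component outputs. The field names and action strings are illustrative, not part of the patent's rule syntax.

```python
# Sketches of the two sample campus rules. Inputs stand in for the
# student-system input component and the turnstile detection component.

def tailgating_rule(card_swipes, people_crossing):
    """One card swipe while two or more people cross the turnstile."""
    if card_swipes == 1 and people_crossing >= 2:
        return "audible alert to security guard"
    return None

def gunshot_rule(gunshot_detected):
    """A gunshot sensor triggers police and mass-SMS actions."""
    if gunshot_detected:
        return ["alert campus police", "SMS all students"]
    return None

a1 = tailgating_rule(card_swipes=1, people_crossing=2)
a2 = gunshot_rule(gunshot_detected=True)
```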
  • A service component may be added, which may provide an interface for a security guard sitting in a central location on campus to monitor all alerts coming into the system. The service component may include a user interface for the security guard to view selected cameras, notify the police, or issue alerts to all students.
  • One example of an illustrative scenario is described here. However, the present invention is not limited to this particular scenario. Suppose a student is detected tailgating behind another student to gain entrance into a dormitory, by using the face detection and student system input components described above. First, an audible alert would be automatically issued to anybody in the vicinity of the dormitory entrance. Then, suppose a gunshot is detected by one of the gunshot devices, and the data enters the system via a gunshot input component. These two events are assigned weights by the weight engines. The rules engine would evaluate the second rule, and automatically notify the police and send a mass text message to all students on campus. Such a system configuration of the present invention could have prevented the second set of shootings at Virginia Tech.
  • In one embodiment, another action component may be activated, for example a “Follow Person” component. Such a component would track the person who was detected as tail-gating through the entrance and follow that person through multiple cameras around campus. The person may be followed either in real time, switching from camera to camera as the person moves, or by retracing the steps the person has already taken. Further, the person's steps on previous visits may also be retrieved using a face recognition component.
  • In another embodiment, an input component may provide an interface to the police system, for example, a database of arrest and reports of criminal activities. The data inputs and video inputs are weighted according to their sources. For example, data from police records are weighted highly, while data from anonymous tips are weighted lower. The weighted values are input into the rules engine which determines the alerts or other actions to take based on the rules.
  • Another application may be created for safety, auditing, and security of a facility using the above components. By integrating with legacy systems using the input components and receiving inputs from video cameras, the safety system may enforce various workflow, process flow, and/or safety rules and regulations. For example, if a safety alert is issued, then certain cameras may be turned on to monitor each of various steps necessary to correct the safety problem by monitoring the number of people at each step of the process, the length of time each person stays at each step of the process, etc.
  • One example of an illustrative scenario is described here. However, the present invention is not limited to this particular scenario. As stated previously, an experienced supervisor leaving the premises was a contributing factor in the explosion at an oil refinery in Texas. This disaster could have been prevented by enforcing workflow and process flow constraints using the principles and components of the present invention. For example, a face counting component may be used to count the number of faces in a room, together with a rule to issue an alert if there is not a face at each of the required workstations. An input component would be used to interface to legacy systems, such as personnel systems, and to determine average experience levels of personnel in each section of the refinery. Various action components may be used to issue intelligent alerts to various personnel depending on customizable conditions, such as severity, time, and response necessary for the alert. Various workflow and process flow constraints may be enforced by utilizing the input components and video detection components. The workflow may be codified in a set of rules in the rules engine, which may be set up by a system administrator.
  • For example, suppose a first worker with 10 years of experience entered the refinery, his face was detected by a camera on the entrance doorway into the refinery, his face was recognized against a database of employees, his name was retrieved, his experience level was retrieved from the personnel system, and the rules engine determined that an acceptable average experience level was present in the refinery (10 years). Suppose another worker with only 2 years of experience entered the refinery, his face was detected by the camera on the entrance doorway into the refinery, his face was recognized against a database of employees, his name was retrieved, his experience level was retrieved from the personnel system, and the rules engine determined that an acceptable average experience level was present in the refinery (6 years average). Now suppose the first worker with 10 years experience leaves the refinery, his face is detected by a camera on the exit doorway, his face is recognized against a database of employees, his name is retrieved, his experience level is retrieved from the personnel system, and the rules engine determines that the average experience level has dropped below a predetermined threshold (e.g., 6 years). The rules engine then activates an appropriate action, such as issuing an alert, corresponding to the rule that was activated. (Suppose in this scenario, the rule was codified by the system administrator to issue an alert if the average experience level in the refinery drops below 6 years.) Such a system configuration of the present invention could have prevented the explosion in the Texas oil refinery.
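  • The refinery scenario above can be sketched as a running check: the average experience of workers currently inside is recomputed on each entry and exit and compared with the administrator's threshold (6 years in this scenario). The class name, worker names, and alert text are illustrative.

```python
# Sketch of the refinery experience-level rule. Entries and exits stand
# in for face recognition events at the entrance and exit cameras.

class ExperienceMonitor:
    def __init__(self, threshold_years=6.0):
        self.threshold = threshold_years
        self.inside = {}              # worker name -> years of experience

    def _check(self):
        if not self.inside:
            return None
        avg = sum(self.inside.values()) / len(self.inside)
        if avg < self.threshold:
            return "alert: average experience {:.1f} years".format(avg)
        return None

    def enter(self, name, years):     # face recognized at entrance camera
        self.inside[name] = years
        return self._check()

    def leave(self, name):            # face recognized at exit camera
        self.inside.pop(name, None)
        return self._check()

m = ExperienceMonitor()
first = m.enter("senior", 10)   # average 10.0 -> no alert
second = m.enter("junior", 2)   # average 6.0  -> no alert (not below 6)
third = m.leave("senior")       # average 2.0  -> alert
```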
  • In one embodiment, various sensory devices may monitor the refinery. If a sensory device detects something irregular, the rules engine issues an audible alert and activates certain cameras to ensure that a proper response is taken. For example, if the pressure in the refinery goes up, then cameras corresponding to those valves that need to be adjusted are turned on. If no face is detected in those cameras for the next ten minutes, then the rules engine may escalate the alert to appropriate management. Such a system, constructed according to the present invention from the components described here, could have prevented the explosion in the Texas refinery.
  • Other embodiments of the present invention may be used for auditing, compliance, and banking fraud detection. For example, the components of the present invention may be used to create an application for banking fraud detection, including banking verification, and fraud/theft prevention. At an ATM, a signature may be captured via a signature input component. A face may be captured via a camera installed on the ATM machine. A face recognition component may be used to determine whether the face is associated with the signature, and/or the bank card that was inserted into the ATM machine. Various alerts may be issued based on the face recognized, the time of day, the length of time the person is at the ATM, the amount of money that is being withdrawn, and the number of people in front of the ATM. The rules may also be codified to perform certain actions if someone is detected sleeping in the ATM booth, or if a suspicious individual enters the ATM booth.
  • Other embodiments of the present invention may be used in a community security system, which could be used in neighborhoods, retirement villages, residential communities, corporate campuses, construction sites, etc. For example, a security application may be created using the components of the present invention for a residential community with commonly shared facilities, such as streets, roads, playgrounds, tennis courts, swimming pools, etc. Digital cameras would be installed in strategic places, e.g., on the streets overlooking the playgrounds. Video data would be gathered and displayed in real-time over a wireless or wired network so that any resident (or security guard) can watch any camera from anywhere at any time. Action components would be used to send an alert if a child is noticed alone in the playground, if unauthorized cars are cruising the street, etc.
  • In some embodiments, other sensory devices may be added to the community security application by using other input components. For example, radon levels, CO2 levels, fire detectors, smoke detectors, dust particle detectors, etc. may be added by using an appropriate input component. Rules in the rules engine could be customized; for example, if a fire is detected, alerts may be sent to affected individuals as well as to the fire department. Another sample rule that could be implemented in the rules engine would detect a person who has not left their house for a certain period of time and alert appropriate relatives. Yet another sample rule would send alerts if power failed in the community.
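Two of the community rules just described, the fire notification rule and the inactivity rule, can be sketched as plain functions. All structure here (the event and resident dicts, the two-day inactivity limit) is an illustrative assumption:

```python
from datetime import datetime, timedelta

def fire_rule(event, residents, fire_department):
    """Return the notification list for a detected fire: affected residents
    (those in the event's zone) plus the fire department."""
    if event.get("type") != "fire":
        return []
    affected = [r for r in residents if r["zone"] == event["zone"]]
    return affected + [fire_department]

def inactivity_rule(last_seen, now, limit=timedelta(days=2)):
    """True if a resident has not been observed leaving their house within
    the limit, so appropriate relatives should be alerted."""
    return (now - last_seen) >= limit
```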
  • In other embodiments, water and electric meters capable of transmitting data electronically may be used with appropriate input components. An input component may receive data from water meters, and the rules engine may monitor water usage against the time of day, compare present water usage with past water usage, and send alerts for possible leaks, broken pipes, etc. Likewise, an input component may receive data from electric meters, and the rules engine may monitor electric usage against the time of day, compare present electric usage with past electric usage, and send alerts for possible broken wires, electrical shorts, etc.
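The present-versus-past usage comparison described above can be sketched as a simple anomaly test against a per-hour historical baseline. The ratio and minimum-flow values are illustrative tuning assumptions, not figures from the patent:

```python
def usage_anomaly(current, history_for_hour, ratio=3.0, min_flow=0.5):
    """True if current meter usage is anomalous versus past usage at this hour.

    history_for_hour: past readings taken at the same hour of day, so the
    comparison respects time-of-day patterns. With no baseline, any sustained
    flow above min_flow is flagged.
    """
    if not history_for_hour:
        return current > min_flow
    baseline = sum(history_for_hour) / len(history_for_hour)
    return current > max(min_flow, ratio * baseline)
```

The same function serves both meter types: a water reading three times the usual 3 a.m. flow suggests a leak or broken pipe, and an electric reading far below baseline (checked with an inverted comparison) could suggest a broken wire.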
  • More advanced applications may be created using storage of historical data. For example, a database may be created of past intruders or fraudulent perpetrators, with their faces recorded in the database. Future incidents may be prevented by matching suspicious individuals against the database.
  • In one embodiment, the rules may be set by a system administrator. In another embodiment, the rules may be heuristically updated. For example, the rules may be learned based on past occurrences. In one embodiment, a learning component may be added which can recognize missing rules. If an alert was not issued when it should have been, this may be noted by an administrator of the system, and a new rule may be automatically generated.
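The learning component described above, which recognizes missing rules from administrator-flagged misses, can be sketched as follows. The rule representation (a dict of conditions that must all match) is an assumption made for illustration:

```python
class RuleLearner:
    """Sketch of the learning component: when an administrator flags an
    incident that produced no alert, synthesize a new rule from that
    incident's conditions so similar events alert in the future."""

    def __init__(self):
        self.rules = []  # each rule: dict mapping condition name -> required value

    def matches(self, event):
        """True if any learned rule matches every condition of the event."""
        return any(all(event.get(k) == v for k, v in rule.items())
                   for rule in self.rules)

    def flag_missed_incident(self, event):
        """Administrator notes a missed alert; auto-generate the missing rule.
        Returns True if a new rule was learned."""
        if not self.matches(event):
            self.rules.append(dict(event))
            return True
        return False
```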
  • In some embodiments of the present invention, encryption is provided for added privacy. For example, a user may restrict who on the internet may watch the surveillance cameras and have access to the data. Access may also be restricted by time, location, and camera. Stored historical data may also be encrypted for privacy.
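The access restriction by user, time, and camera described above amounts to an access-control check before any video is served (encryption of the stream and stored data would sit underneath this). The ACL structure below is an assumed illustration:

```python
def may_view(user, camera, hour, acl):
    """True if the user may watch this camera at this hour.

    acl maps each user to the set of cameras they may view and the hours
    (0-23) during which viewing is permitted; this structure is assumed
    for illustration.
    """
    entry = acl.get(user)
    return bool(entry) and camera in entry["cameras"] and hour in entry["hours"]

# Example: alice may watch the gate camera during business hours only.
acl = {"alice": {"cameras": {"gate"}, "hours": set(range(8, 18))}}
```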
  • While the methods disclosed herein have been described and shown with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form equivalent methods without departing from the teachings of the present invention. Accordingly, unless specifically indicated herein, the order and grouping of the operations is not a limitation of the present invention.
  • While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.

Claims (40)

1. A system for monitoring, alerting, and acting, comprising:
one or more sensory devices for performing a measurement and generating input data;
one or more video sources for capturing video data;
one or more storage areas for storing video data and input data;
one or more processors, operatively coupled to the one or more storage areas; and
one or more memories, operatively coupled to the one or more processors and the one or more storage areas, the one or more memories storing program code to:
receive input data from the one or more sensory devices;
determine one or more data weights for the input data based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received;
receive video data from the one or more video sources;
perform image analysis on the video data to generate one or more video parameters;
determine one or more video weights for the video parameters based at least on a weight corresponding to a source of video data used to calculate the video parameters;
evaluate a set of rules based on the input data, the video parameters, the data weights, and the video weights; and
perform one or more actions based on the evaluation of the set of rules.
2. The system of claim 1, wherein future data weights are determined from past data weights using additional external event weights corresponding to external events, and
wherein the events external to the system include at least an event selected from the group consisting of police databases and anonymous tips.
3. The system of claim 1, further comprising program code to:
generate meta-data based on the input data and the video parameters.
4. The system of claim 1, further comprising program code to:
receive data from external police databases; and
evaluate the set of rules based on the received data.
5. The system of claim 1, further comprising program code to:
receive data from anonymous tips, the anonymous tips having unidentified source, unidentified location, and unidentified timestamp; and
evaluate the set of rules based on the received data.
6. The system of claim 1, wherein the set of rules includes at least a rule for activating an action when a weighted sum of a set of events is greater than a predetermined threshold.
7. The system of claim 1, wherein the video parameters are determined from the video data by a weighted integral of the video data over time, weighted by a video weighing function.
8. The system of claim 7, wherein the video parameters are determined from the video data by a predefined function of the weighted integral, and the predefined function is a composition of several other functions.
9. The system of claim 1, further comprising:
one or more audio sources for capturing audio data,
wherein the one or more memories further comprise program code to:
receive audio data from the one or more audio sources;
perform audio analysis on the audio data to generate one or more audio parameters; and
determine one or more audio weights for the audio parameters based at least on a weight corresponding to a source of the audio data used to generate the audio parameters.
10. The system of claim 1, further comprising program code to:
receive historical video data from the one or more storage areas;
determine one or more historical video data weights for the historical video data; and
evaluate the set of rules based at least on the historical video data and the historical video data weights.
11. A system for monitoring, alerting, and acting, comprising:
one or more sensory devices for performing a measurement and generating input data;
one or more storage areas for storing input data;
one or more processors, operatively coupled to the one or more storage areas; and
one or more memories, operatively coupled to the one or more processors and the one or more storage areas, the one or more memories storing program code to:
receive input data from the one or more sensory devices;
determine one or more data weights for the input data based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received;
evaluate a set of rules based on the input data and the data weights; and
perform one or more actions based on the evaluation of the set of rules.
12. The system of claim 11, wherein future data weights are determined from past data weights using additional external event weights, and
wherein the events external to the system include at least an event selected from the group consisting of police databases and anonymous tips.
13. The system of claim 11, further comprising program code to:
receive input from external police databases; and
evaluate the set of rules based on the received input.
14. The system of claim 11, further comprising program code to:
receive input from anonymous tips, the anonymous tips having unidentified source, unidentified location, and unidentified timestamp; and
evaluate the set of rules based on the received input.
15. The system of claim 11, wherein the set of rules includes at least a rule for activating an action when a weighted sum of a set of events is greater than a predetermined threshold.
16. The system of claim 11, wherein the video parameters are determined from the video data by a weighted integral of the video data over time, weighted by a video weighing function.
17. The system of claim 11, wherein the video parameters are determined from the video data by a predefined function of the weighted integral, and the predefined function is a composition of several other functions.
18. The system of claim 11, further comprising:
one or more audio sources for capturing audio data;
wherein the one or more memories further comprise program code to:
receive audio data from the one or more audio sources;
perform audio analysis on the audio data to generate one or more audio parameters; and
determine one or more audio weights for the audio parameters based at least on a weight corresponding to a source of the audio data used to generate the audio parameters.
19. The system of claim 11, further comprising program code to:
receive historical data from the one or more storage areas;
determine one or more historical data weights for the historical data; and
evaluate the set of rules based at least on the historical data and the historical data weights.
20. The system of claim 11, further comprising program code to:
perform data analysis on the input data to generate one or more data parameters; and
determine one or more data parameter weights for the data parameters based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received.
21. A method for monitoring, alerting, and acting, comprising the following steps:
receiving input data from one or more sensory devices;
determining one or more data weights for the input data based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received;
evaluating a set of rules based on the input data and the data weights; and
performing one or more actions based on the evaluation of the set of rules.
22. The method of claim 21, wherein future data weights are determined from past data weights using additional external event weights, and
wherein the events external to the system include at least an event selected from the group consisting of police databases and anonymous tips.
23. The method of claim 21, further comprising:
receiving input from external police databases; and
evaluating the set of rules based on the received input.
24. The method of claim 21, further comprising:
receiving input from anonymous tips, the anonymous tips having unidentified source, unidentified location, and unidentified timestamp; and
evaluating the set of rules based on the received input.
25. The method of claim 21, wherein the set of rules includes at least a rule for activating an action when a weighted sum of a set of events is greater than a predetermined threshold.
26. The method of claim 21, wherein the video parameters are determined from the video data by a weighted integral of the video data over time, weighted by a video weighing function.
27. The method of claim 21, wherein the video parameters are determined from the video data by a predefined function of the weighted integral, and the predefined function is a composition of several other functions.
28. The method of claim 21, further comprising:
receiving audio data from the one or more audio sources;
performing audio analysis on the audio data to generate one or more audio parameters; and
determining one or more audio weights for the audio parameters based at least on a weight corresponding to a source of the audio data used to generate the audio parameters.
29. The method of claim 21, further comprising:
receiving historical data from the one or more storage areas;
determining one or more historical data weights for the historical data; and
evaluating the set of rules based at least on the historical data and the historical data weights.
30. The method of claim 21, further comprising:
performing data analysis on the input data to generate one or more data parameters; and
determining one or more data parameter weights for the data parameters based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received.
31. An apparatus for monitoring, alerting, and acting, comprising:
one or more input components adapted to receive input data from one or more data sources;
one or more storage areas for storing the input data;
a data weight component adapted to determine one or more data weights for the input data based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received;
a rules engine adapted to evaluate a set of rules based on the input data and the data weights; and
one or more action components adapted to perform one or more actions based on the evaluation of the set of rules.
32. The apparatus of claim 31, wherein future data weights are determined from past data weights using additional external event weights, and
wherein the events external to the system include at least an event selected from the group consisting of police databases and anonymous tips.
33. The apparatus of claim 31, further comprising:
an external data source component adapted to receive input from external police databases,
wherein the rules engine evaluates the set of rules based on the received input.
34. The apparatus of claim 31, further comprising:
an external data source component adapted to receive input from anonymous tips, the anonymous tips having unidentified source, unidentified location, and unidentified timestamp,
wherein the rules engine evaluates the set of rules based at least on the received input.
35. The apparatus of claim 31, wherein the set of rules includes at least a rule for activating an action when a weighted sum of a set of events is greater than a predetermined threshold.
36. The apparatus of claim 31, wherein the weights are determined based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received.
37. The apparatus of claim 31, wherein the video parameters are determined from the video data by a weighted integral of the video data over time, weighted by a video weighing function.
38. The apparatus of claim 37, wherein the video parameters are determined from the video data by a predefined function of the weighted integral, and the predefined function is a composition of several other functions.
39. The apparatus of claim 31, further comprising:
one or more video inputs adapted to receive video data from one or more video sources;
one or more video detection components adapted to perform video analysis on the video data to generate one or more video parameters; and
one or more video weight components adapted to generate one or more video parameter weights based at least on a weight corresponding to a source of the video data used to generate the video parameters.
40. The apparatus of claim 31, further comprising:
one or more data analysis components adapted to perform data analysis on the input data to generate one or more data parameters; and
one or more data parameter weight components adapted to generate one or more data parameter weights based at least on a weight corresponding to a source of the input data, a weight corresponding to a time the input data was received, and a weight corresponding to a frequency that the input data was received,
wherein the rules engine evaluates the set of rules based at least on the data parameters and the data parameter weights.
US12/203,613 2007-05-08 2008-09-03 Methods and systems for alerting by weighing data based on the source, time received, and frequency received Expired - Fee Related US7876351B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/203,613 US7876351B2 (en) 2007-05-08 2008-09-03 Methods and systems for alerting by weighing data based on the source, time received, and frequency received

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/746,043 US7595815B2 (en) 2007-05-08 2007-05-08 Apparatus, methods, and systems for intelligent security and safety
US12/203,613 US7876351B2 (en) 2007-05-08 2008-09-03 Methods and systems for alerting by weighing data based on the source, time received, and frequency received

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/746,043 Continuation US7595815B2 (en) 2007-05-08 2007-05-08 Apparatus, methods, and systems for intelligent security and safety

Publications (2)

Publication Number Publication Date
US20080316315A1 true US20080316315A1 (en) 2008-12-25
US7876351B2 US7876351B2 (en) 2011-01-25

Family

ID=39969142

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/746,043 Expired - Fee Related US7595815B2 (en) 2007-05-08 2007-05-08 Apparatus, methods, and systems for intelligent security and safety
US12/203,613 Expired - Fee Related US7876351B2 (en) 2007-05-08 2008-09-03 Methods and systems for alerting by weighing data based on the source, time received, and frequency received
US12/207,111 Expired - Fee Related US7999847B2 (en) 2007-05-08 2008-09-09 Audio-video tip analysis, storage, and alerting system for safety, security, and business productivity

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/746,043 Expired - Fee Related US7595815B2 (en) 2007-05-08 2007-05-08 Apparatus, methods, and systems for intelligent security and safety

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/207,111 Expired - Fee Related US7999847B2 (en) 2007-05-08 2008-09-09 Audio-video tip analysis, storage, and alerting system for safety, security, and business productivity

Country Status (1)

Country Link
US (3) US7595815B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120038774A1 (en) * 2009-04-22 2012-02-16 Wincor Nixdorf International Gmbh Method for recognizing attempts at manipulating a self-service terminal, and data processing unit therefor
US9641692B2 (en) 2013-06-25 2017-05-02 Siemens Schweiz Ag Incident-centric mass notification system
US10136276B2 (en) 2013-06-25 2018-11-20 Siemens Schweiz Ag Modality-centric mass notification system

Families Citing this family (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8564661B2 (en) 2000-10-24 2013-10-22 Objectvideo, Inc. Video analytic rule detection system and method
US9892606B2 (en) 2001-11-15 2018-02-13 Avigilon Fortress Corporation Video surveillance system employing video primitives
US8711217B2 (en) 2000-10-24 2014-04-29 Objectvideo, Inc. Video surveillance system employing video primitives
US7424175B2 (en) 2001-03-23 2008-09-09 Objectvideo, Inc. Video segmentation using statistical pixel modeling
WO2007045051A1 (en) 2005-10-21 2007-04-26 Honeywell Limited An authorisation system and a method of authorisation
CA2649389A1 (en) 2006-04-17 2007-11-08 Objectvideo, Inc. Video segmentation using statistical pixel modeling
US20090031381A1 (en) * 2007-07-24 2009-01-29 Honeywell International, Inc. Proxy video server for video surveillance
US7992094B2 (en) * 2007-08-14 2011-08-02 International Business Machines Corporation Intelligence driven icons and cursors
US20090063978A1 (en) * 2007-09-05 2009-03-05 Sony Corporation Network status icon in navigable toolbar
US8013738B2 (en) 2007-10-04 2011-09-06 Kd Secure, Llc Hierarchical storage manager (HSM) for intelligent storage of large volumes of data
WO2009045218A1 (en) 2007-10-04 2009-04-09 Donovan John J A video surveillance, storage, and alerting system having network management, hierarchical data storage, video tip processing, and vehicle plate analysis
US20090144341A1 (en) * 2007-12-03 2009-06-04 Apple Inc. Ad Hoc Data Storage Network
CN101615243B (en) * 2008-06-25 2016-03-23 汉王科技股份有限公司 A kind of slope image acquisition device and face identification system
TW201007634A (en) * 2008-08-06 2010-02-16 Univ Nat Taiwan Fire-fighting detection system and its weighting-value correction method
US8305211B1 (en) * 2008-10-03 2012-11-06 Vidsys, Inc. Method and apparatus for surveillance system peering
US9071626B2 (en) 2008-10-03 2015-06-30 Vidsys, Inc. Method and apparatus for surveillance system peering
US7980464B1 (en) * 2008-12-23 2011-07-19 Bank Of America Corporation Bank card fraud protection system
US20100201815A1 (en) * 2009-02-09 2010-08-12 Vitamin D, Inc. Systems and methods for video monitoring
WO2010106474A1 (en) 2009-03-19 2010-09-23 Honeywell International Inc. Systems and methods for managing access control devices
US9760573B2 (en) 2009-04-28 2017-09-12 Whp Workflow Solutions, Llc Situational awareness
US10565065B2 (en) 2009-04-28 2020-02-18 Getac Technology Corporation Data backup and transfer across multiple cloud computing providers
US10419722B2 (en) * 2009-04-28 2019-09-17 Whp Workflow Solutions, Inc. Correlated media source management and response control
US8311983B2 (en) 2009-04-28 2012-11-13 Whp Workflow Solutions, Llc Correlated media for distributed sources
WO2011001250A1 (en) * 2009-07-01 2011-01-06 Honeywell International Inc Security management using social networking
US10027711B2 (en) * 2009-11-20 2018-07-17 Alert Enterprise, Inc. Situational intelligence
US10019677B2 (en) 2009-11-20 2018-07-10 Alert Enterprise, Inc. Active policy enforcement
US9280365B2 (en) 2009-12-17 2016-03-08 Honeywell International Inc. Systems and methods for managing configuration data at disconnected remote devices
KR100990362B1 (en) * 2010-01-12 2010-10-29 파워테크주식회사 Control system for entire facilities by using local area data collector and record device
US9092962B1 (en) 2010-04-16 2015-07-28 Kontek Industries, Inc. Diversity networks and methods for secure communications
US8824554B2 (en) * 2010-09-02 2014-09-02 Intersil Americas LLC Systems and methods for video content analysis
US8787725B2 (en) * 2010-11-11 2014-07-22 Honeywell International Inc. Systems and methods for managing video data
US8892082B2 (en) * 2011-04-29 2014-11-18 At&T Intellectual Property I, L.P. Automatic response to localized input
WO2012174603A1 (en) 2011-06-24 2012-12-27 Honeywell International Inc. Systems and methods for presenting dvm system information
CH705143A1 (en) * 2011-06-30 2012-12-31 Belimo Holding Ag Method and apparatus for balancing a group of consumers in a fluid transport system.
KR101119848B1 (en) * 2011-08-05 2012-02-28 (주)리얼허브 Apparatus and method for detecting connectivity fault of image input device
US9344684B2 (en) 2011-08-05 2016-05-17 Honeywell International Inc. Systems and methods configured to enable content sharing between client terminals of a digital video management system
US10362273B2 (en) 2011-08-05 2019-07-23 Honeywell International Inc. Systems and methods for managing video data
WO2013020165A2 (en) 2011-08-05 2013-02-14 HONEYWELL INTERNATIONAL INC. Attn: Patent Services Systems and methods for managing video data
US9225944B2 (en) * 2011-09-08 2015-12-29 Schneider Electric It Corporation Method and system for displaying a coverage area of a camera in a data center
US20130093898A1 (en) * 2011-10-13 2013-04-18 Honeywell International Inc. Video Surveillance System and Method via the Internet
US10191742B2 (en) 2012-03-30 2019-01-29 Intel Corporation Mechanism for saving and retrieving micro-architecture context
CN102723094B * 2012-06-15 2015-11-25 杭州海康威视数字技术股份有限公司 Highly reliable and easily expandable video recording storage and search method, and system thereof
US8959022B2 (en) 2012-07-03 2015-02-17 Motorola Solutions, Inc. System for media correlation based on latent evidences of audio
US20140118543A1 (en) * 2012-10-31 2014-05-01 Motorola Solutions, Inc. Method and apparatus for video analysis algorithm selection based on historical incident data
US9239887B2 (en) 2012-12-18 2016-01-19 Cisco Technology, Inc. Automatic correlation of dynamic system events within computing devices
FR3000271B1 (en) * 2012-12-21 2016-03-11 Finsecur FIRE DETECTION DEVICE
WO2014134217A1 (en) * 2013-02-26 2014-09-04 Noland Bryan Lee System and method of automated gunshot emergency response system
US9000918B1 (en) 2013-03-02 2015-04-07 Kontek Industries, Inc. Security barriers with automated reconnaissance
EP2976702A1 (en) * 2013-03-18 2016-01-27 GE Intelligent Platforms, Inc. Apparatus and method for optimizing time series data storage based upon prioritization
US20160070737A1 (en) * 2013-03-18 2016-03-10 Ge Intelligent Platforms, Inc. Apparatus and method for optimizing time series data store usage
WO2014174796A1 (en) * 2013-04-23 2014-10-30 日本電気株式会社 Information processing system, information processing method and storage medium
SG11201508696QA (en) 2013-04-23 2015-11-27 Nec Corp Information processing system, information processing method and storage medium
US9544534B2 (en) * 2013-09-24 2017-01-10 Motorola Solutions, Inc. Apparatus for and method of identifying video streams transmitted over a shared network link, and for identifying and time-offsetting intra-frames generated substantially simultaneously in such streams
US10523903B2 (en) 2013-10-30 2019-12-31 Honeywell International Inc. Computer implemented systems frameworks and methods configured for enabling review of incident data
US20150326617A1 (en) * 2014-05-06 2015-11-12 DoNotGeoTrack, Inc. Privacy Control Processes for Mobile Devices, Wearable Devices, other Networked Devices, and the Internet of Things
KR20170115082A (en) * 2015-03-04 2017-10-16 가부시키가이샤 히타치 시스테무즈 A system for checking the situation by camera image data, a method for checking the situation by the control device and the camera image data
US9734702B2 (en) 2015-05-21 2017-08-15 Google Inc. Method and system for consolidating events across sensors
US9710460B2 (en) * 2015-06-10 2017-07-18 International Business Machines Corporation Open microphone perpetual conversation analysis
US9361011B1 (en) 2015-06-14 2016-06-07 Google Inc. Methods and systems for presenting multiple live video feeds in a user interface
USD812076S1 (en) 2015-06-14 2018-03-06 Google Llc Display screen with graphical user interface for monitoring remote video camera
USD803241S1 (en) 2015-06-14 2017-11-21 Google Inc. Display screen with animated graphical user interface for an alert screen
US10133443B2 (en) 2015-06-14 2018-11-20 Google Llc Systems and methods for smart home automation using a multifunction status and entry point icon
CN105181340A (en) * 2015-10-23 2015-12-23 河南柴油机重工有限责任公司 Device and method for monitoring refuse landfill gas engine
US10853359B1 (en) 2015-12-21 2020-12-01 Amazon Technologies, Inc. Data log stream processing using probabilistic data structures
US10522013B2 (en) * 2016-05-20 2019-12-31 Vivint, Inc. Street watch
US10460300B2 (en) * 2016-06-01 2019-10-29 Multimedia Image Solution Limited Method of preventing fraud and theft during automated teller machine transactions and related system
USD882583S1 (en) 2016-07-12 2020-04-28 Google Llc Display screen with graphical user interface
US10263802B2 (en) 2016-07-12 2019-04-16 Google Llc Methods and devices for establishing connections with remote cameras
CN106341634A (en) * 2016-08-31 2017-01-18 武汉烽火众智数字技术有限责任公司 Video acquisition system based on hard disk video recorder and method thereof
USD843398S1 (en) 2016-10-26 2019-03-19 Google Llc Display screen with graphical user interface for a timeline-video relationship presentation for alert events
US11238290B2 (en) 2016-10-26 2022-02-01 Google Llc Timeline-video relationship processing for alert events
WO2018081328A1 (en) * 2016-10-26 2018-05-03 Ring Inc. Customizable intrusion zones for audio/video recording and communication devices
US20180176512A1 (en) * 2016-10-26 2018-06-21 Ring Inc. Customizable intrusion zones associated with security systems
US10386999B2 (en) 2016-10-26 2019-08-20 Google Llc Timeline-video relationship presentation for alert events
US10972685B2 (en) 2017-05-25 2021-04-06 Google Llc Video camera assembly having an IR reflector
US10352496B2 (en) 2017-05-25 2019-07-16 Google Llc Stand assembly for an electronic device providing multiple degrees of freedom and built-in cables
US10819921B2 (en) 2017-05-25 2020-10-27 Google Llc Camera assembly having a single-piece cover element
US20180357870A1 (en) * 2017-06-07 2018-12-13 Amazon Technologies, Inc. Behavior-aware security systems and associated methods
US11105526B1 (en) 2017-09-29 2021-08-31 Integrated Global Services, Inc. Safety shutdown systems and methods for LNG, crude oil refineries, petrochemical plants, and other facilities
CN108630230A * 2018-05-14 2018-10-09 哈尔滨工业大学 A campus bullying detection method based on joint recognition of action and voice data
CN109582701A (en) * 2018-11-30 2019-04-05 广州净松软件科技有限公司 Supervise warning information acquisition methods, device, equipment and the storage medium of data
US20200225313A1 (en) * 2019-01-11 2020-07-16 Drift Net Security System for Detecting Hazardous Events and Occupants in a Building
US11019087B1 (en) 2019-11-19 2021-05-25 Ehsan Adeli Computer vision-based intelligent anomaly detection using synthetic and simulated data-system and method
US11328565B2 (en) * 2019-11-26 2022-05-10 Ncr Corporation Asset tracking and notification processing
US11488622B2 (en) * 2019-12-16 2022-11-01 Cellular South, Inc. Embedded audio sensor system and methods
CN111414873B (en) * 2020-03-26 2021-04-30 广州粤建三和软件股份有限公司 Alarm prompting method, device and alarm system based on wearing state of safety helmet
US20210335109A1 (en) * 2020-04-28 2021-10-28 Ademco Inc. Systems and methods for identifying user-customized relevant individuals in an ambient image at a doorbell device
US20230122732A1 (en) * 2021-10-19 2023-04-20 Motorola Solutions, Inc. Security ecosystem, device and method for communicating with communication devices based on workflow interactions

Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5164979A (en) * 1989-11-21 1992-11-17 Goldstar Co., Ltd. Security system using telephone lines to transmit video images to remote supervisory location
US5365217A (en) * 1992-02-20 1994-11-15 Frank J. Toner Personal security system apparatus and method
US5382943A (en) * 1991-07-31 1995-01-17 Tanaka; Mutuo Remote monitoring unit
US5493273A (en) * 1993-09-28 1996-02-20 The United States Of America As Represented By The Secretary Of The Navy System for detecting perturbations in an environment using temporal sensor data
US5638302A (en) * 1995-12-01 1997-06-10 Gerber; Eliot S. System and method for preventing auto thefts from parking areas
US5666157A (en) * 1995-01-03 1997-09-09 Arc Incorporated Abnormality detection and surveillance system
US5786746A (en) * 1995-10-03 1998-07-28 Allegro Supercare Centers, Inc. Child care communication and surveillance system
US6249225B1 (en) * 1998-12-28 2001-06-19 Randall Wang Auxiliary alert process and system thereof for alarm system
US6424370B1 (en) * 1999-10-08 2002-07-23 Texas Instruments Incorporated Motion based event detection system and method
US6437819B1 (en) * 1999-06-25 2002-08-20 Rohan Christopher Loveland Automated video person tracking system
US6525663B2 (en) * 2001-03-15 2003-02-25 Koninklijke Philips Electronics N.V. Automatic system for monitoring persons entering and leaving changing room
US6529613B1 (en) * 1996-11-27 2003-03-04 Princeton Video Image, Inc. Motion tracking using image-texture templates
US6542625B1 (en) * 1999-01-08 2003-04-01 Lg Electronics Inc. Method of detecting a specific object in an image signal
US20030062997A1 (en) * 1999-07-20 2003-04-03 Naidoo Surendra N. Distributed monitoring for a video security system
US6570496B2 (en) * 2000-04-04 2003-05-27 Rick A. Britton Networks and circuits for alarm system operations
US6628805B1 (en) * 1996-06-17 2003-09-30 Sarnoff Corporation Apparatus and a method for detecting motion within an image sequence
US20030214583A1 (en) * 2002-05-20 2003-11-20 Mokhtar Sadok Distinguishing between fire and non-fire conditions using cameras
US6700487B2 (en) * 2000-12-06 2004-03-02 Koninklijke Philips Electronics N.V. Method and apparatus to select the best video frame to transmit to a remote station for CCTV based residential security monitoring
US6778085B2 (en) * 2002-07-08 2004-08-17 James Otis Faulkner Security system and method with realtime imagery
US6788205B1 (en) * 2002-08-30 2004-09-07 Ncr Corporation System and method for verifying surveillance tag deactivation in a self-checkout station
US6940998B2 (en) * 2000-02-04 2005-09-06 Cernium, Inc. System for automated screening of security cameras
US6940397B1 (en) * 2003-03-07 2005-09-06 Benjamin E Le Mire Vehicle surveillance system
US6958676B1 (en) * 2002-02-06 2005-10-25 Sts International Ltd Vehicle passenger authorization system
US6965313B1 (en) * 2001-04-24 2005-11-15 Alarm.Com Inc. System and method for connecting security systems to a wireless device
US6968294B2 (en) * 2001-03-15 2005-11-22 Koninklijke Philips Electronics N.V. Automatic system for monitoring person requiring care and his/her caretaker
US6969294B2 (en) * 2001-01-09 2005-11-29 Claudio Vicentelli Assembly of modules with magnetic anchorage for the construction of stable grid structures
US6970102B2 (en) * 2003-05-05 2005-11-29 Transol Pty Ltd Traffic violation detection, recording and evidence processing system
US6972787B1 (en) * 2002-06-28 2005-12-06 Digeo, Inc. System and method for tracking an object with multiple cameras
US6975220B1 (en) * 2000-04-10 2005-12-13 Radia Technologies Corporation Internet based security, fire and emergency identification and communication system
US6975346B2 (en) * 2002-06-27 2005-12-13 International Business Machines Corporation Method for suspect identification using scanning of surveillance media
US7016518B2 (en) * 2002-03-15 2006-03-21 Extreme Cctv Inc. Vehicle license plate imaging and reading system for day and night
US7046169B2 (en) * 2003-04-09 2006-05-16 Bucholz Andrew J System and method of vehicle surveillance
US7382244B1 (en) * 2007-10-04 2008-06-03 Kd Secure Video surveillance, storage, and alerting system having network management, hierarchical data storage, video tip processing, and vehicle plate analysis

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5453733A (en) * 1992-07-20 1995-09-26 Digital Security Controls Ltd. Intrusion alarm with independent trouble evaluation
US5917958A (en) * 1996-10-31 1999-06-29 Sensormatic Electronics Corporation Distributed video data base with remote searching for image data features
DE19829538A1 (en) * 1998-07-02 2000-01-05 Bosch Gmbh Robert Method for influencing source data for determining a route in a navigation system
US20030185296A1 (en) * 2002-03-28 2003-10-02 Masten James W. System for the capture of evidentiary multimedia data, live/delayed off-load to secure archival storage and managed streaming distribution
JP4420351B2 (en) * 2005-09-30 2010-02-24 富士通株式会社 Hierarchical storage system, control method and program


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120038774A1 (en) * 2009-04-22 2012-02-16 Wincor Nixdorf International Gmbh Method for recognizing attempts at manipulating a self-service terminal, and data processing unit therefor
US9165437B2 (en) * 2009-04-22 2015-10-20 Wincor Nixdorf International Gmbh Method for recognizing attempts at manipulating a self-service terminal, and data processing unit therefor
US9641692B2 (en) 2013-06-25 2017-05-02 Siemens Schweiz Ag Incident-centric mass notification system
US10136276B2 (en) 2013-06-25 2018-11-20 Siemens Schweiz Ag Modality-centric mass notification system

Also Published As

Publication number Publication date
US7595815B2 (en) 2009-09-29
US20090002157A1 (en) 2009-01-01
US20080278579A1 (en) 2008-11-13
US7876351B2 (en) 2011-01-25
US7999847B2 (en) 2011-08-16

Similar Documents

Publication Publication Date Title
US7876351B2 (en) Methods and systems for alerting by weighing data based on the source, time received, and frequency received
Norris From personal to digital: CCTV, the panopticon, and the technological mediation of suspicion and social control
US9779614B2 (en) System and method of alerting CMS and registered users about a potential duress situation using a mobile application
McLean et al. Here’s looking at you: An evaluation of public CCTV cameras and their effects on crime and disorder
US20070182540A1 (en) Local verification systems and methods for security monitoring
Ogunleye et al. A computer-based security framework for crime prevention in Nigeria
EP3026904A1 (en) System and method of contextual adjustment of video fidelity to protect privacy
KR20150092545A (en) Warning method and system using prompt situation information data
CN103384321A (en) System and method of post event/alarm analysis in cctv and integrated security systems
US20110141277A1 (en) System and method for providing a virtual community watch
KR101466004B1 (en) An intelligent triplex system integrating crime and disaster prevention and their post treatments and the control method thereof
CN113538825A (en) Campus wall-turning event alarm method and system
CN110889790A (en) System for rapidly screening suspected marketing users based on comprehensive community information
GB2412805A (en) Detecting and recording events on a computer system
KR100926580B1 (en) System of crime prevention for children and operating method thereof
US20160378268A1 (en) System and method of smart incident analysis in control system using floor maps
KR101509223B1 (en) Security system with an auto capturing for monitoring screen and method of the same
Rosenheim The modern American juvenile court
WO2015173836A2 (en) An interactive system that enhances video surveillance systems by enabling ease of speedy review of surveillance video and/or images and providing means to take several next steps, backs up surveillance video and/or images, as well as enables to create standardized intelligent incident reports and derive patterns
Norris Closed-circuit television: A review of its development and its implications for privacy
CN117354469B (en) District monitoring video target tracking method and system based on security precaution
WO2024009704A1 (en) Digital entry sensing system
Aballe et al. Security Measures: Effectiveness of the Installation of CCTV Cameras in Relation to Crime Prevention as Perceived by the Community
Manalo et al. Status of Closed Circuit Television Camera Usage in Batangas City: Basis for Enhancement
Shukla et al. Audio Analytics and Other Upgrades in Correctional Surveillance Systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: KD SECURE, LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DONOVAN, JOHN J;HUSSAIN, DANIAR;REEL/FRAME:021724/0878;SIGNING DATES FROM 20081009 TO 20081015


STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: TIERRA VISTA GROUP, LLC, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KD SECURE, LLC;REEL/FRAME:032948/0401

Effective date: 20140501

AS Assignment

Owner name: SECURENET SOLUTIONS GROUP, LLC, FLORIDA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 032948 FRAME 0401. ASSIGNOR(S) HEREBY CONFIRMS THE CORRECT ASSIGNEE IS SECURENET SOLUTIONS GROUP, LLC;ASSIGNOR:KD SECURE, LLC;REEL/FRAME:033012/0669

Effective date: 20140501

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230125