US20110037596A1 - Identifying activity in an area utilizing sound detection and comparison - Google Patents


Info

Publication number
US20110037596A1
Authority
US
United States
Prior art keywords
environment
expected
audio
waveform
count
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/890,254
Inventor
Fariborz M. Farhan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US12/890,254
Publication of US20110037596A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02: Alarms for ensuring the safety of persons
    • G08B 21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B 21/0438: Sensor means for detecting
    • G08B 21/0469: Presence detectors to detect unsafe condition, e.g. infrared sensor, microphone
    • G08B 21/0407: Alarms responsive to non-activity based on behaviour analysis
    • G08B 21/0423: Behaviour analysis detecting deviation from an expected pattern of behaviour or schedule
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60: ICT specially adapted for the operation of medical equipment or devices

Abstract

Microprocessor technology is used to detect routine sounds in a substantially enclosed environment to determine normal or abnormal activity or noise within the environment (i.e., habitual behavior of an individual). A device with an audio input device similar to a microphone, and an optional visual display for interaction with local users, is situated in the substantially enclosed environment. The device can be controlled locally via a key-pad or a USB port connected to a local laptop computer, or remotely via a phone line or an Ethernet connection to the internet. The device further performs audio pattern recognition using a waveform-matching scheme to detect the occurrence of pre-programmed sounds representing routine activities. The microprocessor counts the number of occurrences of each recognizable sound for a particular interval, and over the length of a day or other duration, and reports to a remote server. The remote server performs tracking and trend analysis for a web-based caregiver client. Significant changes are detected and reported to a caregiver or family web-based client.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of the U.S. patent application that was filed on Jul. 25, 2007 and assigned Ser. No. 11/828,121, which application claims the benefit of Provisional U.S. patent application No. 60/820,311 filed by Farhan on Jul. 25, 2006 and incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to monitoring and reporting of activity based on actual detected sounds that are compared to expected or known sounds and, more particularly in one embodiment is directed towards a health or wellness monitoring system and apparatus, that detects sub-normal, abnormal and/or normal behavior of a person or object and reports it to a central database which in turn makes the information available to the family member or to the caregiver.
  • DESCRIPTION OF THE RELATED ART
  • Several patents and patent applications exist that are directed to methods and systems that detect motion of the individual rather than detecting routine activities through the use of audio. Such references that detect motion, either through motion or sound sensors, are functionally inadequate to perform the feature of using trend analysis to determine an abnormal condition presented herein. Occupant movement in the household may or may not directly correlate to proper or safe living conditions. One of the most reliable indications of proper and safe living conditions is to monitor for activities that an occupant would normally perform in order to maintain a safe and comfortable environment. Therefore, it may be more useful to detect an ancillary input, e.g. sound from daily activities, rather than a primary input, e.g. motion. To more fully illustrate the systems and methods of the prior art, we draw specific reference to U.S. Pat. No. 4,284,849 to Anderson et al. (“Anderson”), U.S. Patent Application Publ. No. 2002/0171551 A1 to Eshelman et al. (“Eshelman”), and U.S. Pat. No. 6,445,298 B1 to Shepher (“Shepher”). More specifically, Anderson teaches the use of sensors to detect and report the performance, or non-performance, of routine activities by the user. Sensors detect routine activities such as the opening or closing of a refrigerator door, entering and leaving a bathroom, and the like. The activities are monitored and recorded on a daily basis, i.e. over a 24-hour period. If certain activities are not performed in 24 hours, an alarm is sent to a caregiver. Although Anderson arguably discloses a method and system to monitor a household for routine activities, Anderson fails to teach a method and system that is functionally similar to the various embodiments presented for the present invention.
For instance, Anderson is more suitable to conditions that require instantaneous analysis, whereas embodiments of the present invention are more suitable to daily living. Further, Anderson does not teach the use of a programmable processor that takes as its primary input an audio detection device. Additionally, Anderson fails to teach the use of trend analysis as a means for detecting abnormal activity. Rather, Anderson monitors activities on a daily basis. As an example of the arguable difference between the two inventions, while Anderson would be able to detect the use of a vacuum cleaner, it would be impractical to use Anderson to detect that the vacuum cleaner is being used infrequently and in fewer spaces (indicating a possible lowering of clean living conditions). Since an exemplary embodiment of the present invention uses trend analysis, this deviation, while completely transparent to the invention of Anderson, would be readily visible to embodiments of the present invention and would properly sound an alarm condition. Eshelman discloses a system that monitors a variety of independent signals and combines them to analyze for possible abnormal or emergency conditions. Eshelman does use an audio detector to detect activities within a household (paragraph 63: . . . sound of . . . vacuum cleaner, etc.). However, Eshelman differs greatly from the disclosed embodiments in that the data is used in a dissimilar manner. Eshelman discloses the combination of multiple sensor inputs to generate one output. Because multiple inputs are used, and taking into account possible errors in the programming associated with those multiple inputs, the output of Eshelman may generate data not indicative of the actual conditions of the household. Further, Eshelman does not disclose the use of trend analysis to help in determining alarm or abnormal conditions.
Finally, Shepher teaches a system and method for monitoring movement, wherein the movement is detected by infrared motion detectors or directional audio sensors. Because motion does not necessarily correlate to normal or abnormal living conditions, Shepher does not solve the same disadvantages or shortcomings over the prior art that are resolved by the various embodiments of the present invention.
  • BRIEF SUMMARY OF THE INVENTION
  • According to one embodiment of the present invention, a device located near or within a substantially enclosed environment, similar to a household, will detect or listen to the non-speech sounds around and/or within the environment. The detected non-speech sounds are examined to identify sound patterns that have been previously recorded or ‘learned’, or otherwise created and made accessible for comparison purposes. Examples of these sounds include, but are not limited to, the opening or closing of a refrigerator door, the opening or closing of a particular drawer, the turning on of a light switch or an appliance, etc. The choice of which sound must be recorded and later detected is based on the habitual behavior of the person, animal, entity or object to be monitored. If the monitored entity has a particular routine, such as boiling water around seven in the morning to make instant coffee, then the sound of the kettle whistling would be a target sound for learning and subsequent detection. The requirement for these candidate sounds is that they sound similar to one another every time they are made. Examples of consistent sounds are the closing of a refrigerator door or a kettle whistling when the water inside comes to a boiling point. The present invention counts the number of occurrences of a particular sound in a particular time interval, such as a sub-hour interval, and performs a comparison to a historical average for the same interval of time. A decision rule is then employed to determine below-normal activity for the particular interval. The performance for each interval is then reported to a central database via a telephone line, a high speed internet uplink, or other communication technology, which in turn is made available in a numerical and/or graphical fashion to a family member or a caregiver, or to any other entity, device or individual that may be interested in or have a legitimate reason for obtaining the information.
For example, a security guard may receive reports of sounds in a guarded facility. A prison guard may receive reports regarding the sounds in the prison. A factory operator may receive sounds regarding the operation of machinery in the factory.
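The counting-and-comparison scheme summarized above can be sketched in Python. This is a minimal model: the function names, the half-hour interval, and the "below half of the historical average" decision rule are illustrative assumptions, since the text leaves the concrete rule open.

```python
from collections import defaultdict

def build_counts(events, interval_seconds=1800):
    """Count occurrences of each recognized sound per sub-hour interval.
    `events` is a list of (timestamp_seconds, sound_id) pairs."""
    counts = defaultdict(int)
    for ts, sound_id in events:
        interval = int(ts // interval_seconds)
        counts[(interval, sound_id)] += 1
    return counts

def below_normal(count, historical_average, ratio=0.5):
    """Hypothetical decision rule: flag the interval when the observed
    count falls below half of the historical average for that interval."""
    return count < ratio * historical_average

# Example: the kettle normally whistles once in the 7:00-7:30 interval.
counts = build_counts([(7 * 3600 + 300, "kettle")])
assert counts[(14, "kettle")] == 1          # 25500 s falls in interval 14
assert not below_normal(1, historical_average=1.0)
assert below_normal(0, historical_average=1.0)  # silence triggers the rule
```

In a deployed system the per-interval counts would be uploaded to the central database, which maintains the historical averages and applies the rule server-side.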
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • FIG. 1 is the top level block diagram of one embodiment of the invention showing the major functional blocks of the illustrated embodiment.
  • FIG. 2 is the exploded view of the ‘Pre-Amp’ block 200, which includes a pre-amplifier, an Analog-to-Digital converter, and a pre-scalar/power normalizer.
  • FIG. 3 is the exploded view of the ‘Filter’ block 400, wherein the details of the interface between the Speech detector/Silence detector and the waveform analyzer are shown.
  • FIG. 4 is the exploded view of the ‘Waveform Analyzer’ block 500.
  • FIG. 5 is an illustration of the waveform pattern recognition algorithm.
  • DETAILED DESCRIPTION OF THE INVENTION
  • One embodiment of the present invention is a Wellness Monitoring system as illustrated in FIG. 1 for the purpose of detecting recurring non-speech sounds around a room representing habitual behavior of an individual. The sound energy is detected by microphone 100 and is fed to the ‘Pre-Amp’ 200. The output of the Pre-Amp 200 is a normalized digitized signal, which is explained in FIG. 2 below, and is simultaneously fed to a Speech Detector 300 and a ‘filter’ 400. The control signal 350 is used to signal the filter 400 when to pass the signal to the waveform analyzer block 500. The speech detector 300 is capable of detecting the presence of human voice (speech) and can also detect silence periods. The speech detector may also be programmed to detect other sounds that are of little to no interest, such as passing automobiles, horn beeps, white noise generated by an air outlet of a heating and air conditioning unit, etc. Waveform Analyzer 500 takes non-speech signals between silence periods and, under the control of the microprocessor (Central Processing Unit, CPU 640, shown in FIG. 4) running a software algorithm, performs pattern matching against previously stored audio signals, or Waveform records 600. The Counter array 700 is a multi-dimensional array in software used to keep track of the occurrences of each recorded waveform for a selected period of time, such as a sub-hour interval as a non-limiting example, and over the course of an extended period of time, such as a twenty-four-hour period as a non-limiting example.
  • The role of the Pre-Amp as shown in FIG. 2 is to convert the small signal received by the microphone to the level required by the Analog-to-Digital converter (A/D) 220. The pre-scalar 230 normalizes the digitized signal by adjusting the gain of the pre-amplifier 210 based on the peak power seen by 230, maximizing the dynamic range of 220 while staying short of clipping.
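The peak-tracking gain adjustment performed by blocks 210/230 can be modeled in software as follows. This is a sketch under assumed constants (a 16-bit full scale, a 90% target level, and a 1.1× adaptation step), none of which are specified in the text.

```python
def update_gain(gain, peak, full_scale=32767, target=0.9, step=1.1):
    """Adjust pre-amplifier gain from the peak level seen in the last
    block of samples: back off when near clipping, creep up when the
    signal under-uses the A/D dynamic range, otherwise hold steady."""
    if peak >= target * full_scale:   # close to clipping: reduce gain
        return gain / step
    if peak < 0.5 * full_scale:       # under-driven: increase gain
        return gain * step
    return gain                       # within the useful range: hold

g = update_gain(1.0, peak=32000)   # near full scale, so gain drops
assert g < 1.0
g = update_gain(1.0, peak=1000)    # weak signal, so gain rises
assert g > 1.0
```

The hysteresis band between the two thresholds keeps the gain from oscillating on a steady signal.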
  • FIG. 3 illustrates the details of the interface between the speech/silence detector 300 and the waveform analyzer/recorder 500. When the speech/silence detector 300 detects the presence of human speech (voice), it raises digital signal 310. When the speech/silence detector 300 detects silence, which is the absence of any audio energy, it raises digital signal 320. Block 420 is a digital delay block that applies an amount of delay equal to the processing delay of 300. The processing delay of 300 is the larger of the times required by the speech/silence detector 300 to detect a valid voice digital signal 310 or a silence digital signal 320. For example, if the speech/silence detector 300 takes 10 milliseconds to detect speech and 2 milliseconds to detect silence, then the delay 420 will be 10 milliseconds' worth of audio signal. Signals 310 and 320 are then fed to the inputs of a dual-input NOR gate. The output of the NOR gate is fed to the “Gate” input 430 of block 500, so the input 430 of block 500 will only go high when neither a speech signal 310 nor a silence signal 320 is present. This is exactly when the signal needs to be analyzed. As soon as input 430 toggles (speech or silence is detected), the waveform recording stops and the analysis begins.
  • In FIG. 4 the details of the waveform analysis are shown. The waveform recorder/analyzer block 500 acts as a buffer to store incoming non-speech audio waveforms between silence periods. The gate signal 430 is used to delineate the recording period. Once 430 toggles from high to low, the digital waveform is transferred to Random Access Memory (RAM) 620. Subsequently 500 becomes ready to receive the next non-speech audio waveform. Once in RAM, the CPU 640 performs the pattern-matching algorithm depicted in FIG. 5 to determine whether a recognizable audio pattern has been encountered. By use of RAM, the CPU can perform the audio pattern recognition independent of the nature or rate of arrival of incoming audio patterns.
  • The audio pattern recognition algorithm illustrated in FIG. 5 is one technique to perform the pattern recognition in various embodiments of the present invention and operates as follows. The process begins at point 10. There is a loop between 20 and 50 used to compare the incoming waveform to each and every type stored previously in non-volatile memory. Step 30 adjusts for the starting point of the incoming waveform that would lead to the smallest error-squared summation. Error squared is defined as the square of the difference between the stored waveform and the incoming waveform, or e², and the error-squared summation is e² computed over the entire span of the incoming waveform, also shown as Σe². In step 40, Σe² adjusted for the optimal starting point is compared to a number Thresh_i, where i=1, . . . , M and M is the number of previously trained and stored waveforms. Thresh_i is simply the average of at least three Σe² values computed for three occurrences of a desired audio sound. For example, let us assume that the user has decided that the opening of the refrigerator door is a good indication of the habitual behavior of his or her elderly relative, thus requiring it to be stored for subsequent detection. The user prepares the device for training by selecting the appropriate command in its menu. The user interface in one embodiment of the invention will instruct the user to push a button on the device and immediately open the refrigerator door. When the sound ends, the user is instructed to indicate that recording should stop. The device will accept this attempt as the base and instruct the user to repeat this cycle at least two more times. Every time the cycle is repeated, the training waveform is committed to non-volatile (or permanent) memory and Σe² is calculated. Subsequently an average of Σe² is computed. The final average is then stored as Thresh_i. Thresh_1 is therefore the decision-making threshold used for the first trained sound waveform.
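The Σe² matching of FIG. 5 can be sketched in Python. The offset search of step 30, the summation of step 40, and the three-take threshold training follow the text; the function names and the toy waveforms are assumptions.

```python
def sum_sq_error(stored, incoming, offset):
    """Σe² between a stored waveform and the incoming one at a given
    starting offset (step 30 of FIG. 5 searches over offsets)."""
    return sum((s - incoming[offset + i]) ** 2 for i, s in enumerate(stored))

def best_match_error(stored, incoming):
    """Smallest Σe² over all admissible starting points (step 30)."""
    n = len(incoming) - len(stored)
    return min(sum_sq_error(stored, incoming, k) for k in range(n + 1))

def train_threshold(takes, reference):
    """Thresh_i: average Σe² over at least three takes of the same sound."""
    return sum(best_match_error(reference, t) for t in takes) / len(takes)

# Toy example: a short 'click' template matched inside a longer recording.
template = [0.0, 1.0, 0.0]
recording = [0.0, 0.0, 1.0, 0.0, 0.0]
assert best_match_error(template, recording) == 0.0  # exact match at offset 1
thresh = train_threshold([recording, recording, recording], template)
assert thresh == 0.0
```

At detection time, an incoming waveform whose best-offset Σe² falls below Thresh_i would be counted as an occurrence of sound i.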
  • The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of the described embodiments of the present invention, and embodiments of the present invention comprising different combinations of the features noted in the described embodiments, will occur to persons skilled in the art.
  • It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the invention is defined by the claims that follow.

What is claimed is:

Claims (6)

1. A device for identifying activity of a subject in an environment, the device comprising the components of:
a memory element that includes a plurality of expected audio signal waveforms;
a detector configured to:
detect the presence of audio energy in the environment;
detect human speech within the environment;
detect silence within the environment; and
provide a record signal output when audio energy is detected in the environment but the audio energy is not human speech;
a recorder configured to receive the record signal and record audio energy within the environment for a period of time controlled by the record signal and store an environment audio signal waveform representing the recorded audio energy into a memory element;
a digital audio waveform analyzer configured to compare two or more digital audio waveforms to perform pattern recognition; and
a processing unit configured to:
compare the environment audio signal waveform with the plurality of expected audio signal waveforms to identify a match; and
generate an alarm if the environment audio signal waveform does not match any of the expected audio signal waveforms.
2. The device of claim 1, wherein said processor is configured to enter a learning mode to store expected audio signal waveforms.
3. The device of claim 2, further comprising a user interface, wherein the processor is configured, in the learning mode, to provide an indicator to a user to actuate the user interface prior to conducting an activity that will generate audio energy and then to cause the recorder to record and store the detected audio energy as an expected audio signal waveform.
4. A method for detecting abnormal activity in an environment, the method comprising the steps of:
recording sounds expected to occur in the environment;
storing information representing each unique expected sound into a waveform record;
a recorder recording audio energy that does not include human speech in the environment over a first period of time;
a processor processing each audio energy recording by comparing the audio energy recording to the stored information of unique expected sounds in the waveform record to determine if there is a match;
the processor maintaining a count of the number of matches for each identified audio energy recording over the first period of time;
the processor comparing the counts to expected values for each count within that first period of time; and
the processor sounding an alarm condition based at least in part on the counts and the expected values.
5. The method of claim 4, wherein the step of sounding an alarm further comprises the step of, if the maintained counts deviate from the expected counts by a threshold amount, reporting such information to a centralized server.
6. The method of claim 4, wherein the step of sounding an alarm further comprises the steps of:
for each count, reporting whether the count (a) exceeds the expected count by a first threshold amount, (b) is less than the expected count by a second threshold amount, or (c) is within a threshold distance of the expected count.
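The count-comparison logic of claims 4–6 can be sketched as below. The function name, parameters, and report labels are hypothetical; the patent does not prescribe a particular data structure or threshold scheme.

```python
def classify_counts(observed, expected, over_threshold, under_threshold, near_distance):
    """Hypothetical sketch of claims 4-6: compare the per-sound match
    counts maintained over a period against their expected values and
    report each count as over, under, or normal. Returns (alarm, report);
    the report could be sent to a centralized server (claim 5)."""
    report = {}
    alarm = False
    for sound, exp in expected.items():
        count = observed.get(sound, 0)
        if count > exp + over_threshold:         # (a) exceeds expected count
            report[sound] = "over"
            alarm = True
        elif count < exp - under_threshold:      # (b) falls short of expected count
            report[sound] = "under"
            alarm = True
        elif abs(count - exp) <= near_distance:  # (c) within threshold distance
            report[sound] = "normal"
        else:
            report[sound] = "borderline"
    return alarm, report
```

For example, a faucet that is normally heard several times a day but is never matched during the period would be reported as "under" and raise the alarm condition.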
US12/890,254 2006-07-25 2010-09-24 Identifying activity in an area utilizing sound detection and comparison Abandoned US20110037596A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/890,254 US20110037596A1 (en) 2006-07-25 2010-09-24 Identifying activity in an area utilizing sound detection and comparison

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US82031106P 2006-07-25 2006-07-25
US11/828,121 US7825813B2 (en) 2006-07-25 2007-07-25 Identifying activity in an area utilizing sound detection and comparison
US12/890,254 US20110037596A1 (en) 2006-07-25 2010-09-24 Identifying activity in an area utilizing sound detection and comparison

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/828,121 Continuation US7825813B2 (en) 2006-07-25 2007-07-25 Identifying activity in an area utilizing sound detection and comparison

Publications (1)

Publication Number Publication Date
US20110037596A1 true US20110037596A1 (en) 2011-02-17

Family

ID=38986299

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/828,121 Expired - Fee Related US7825813B2 (en) 2006-07-25 2007-07-25 Identifying activity in an area utilizing sound detection and comparison
US12/890,254 Abandoned US20110037596A1 (en) 2006-07-25 2010-09-24 Identifying activity in an area utilizing sound detection and comparison

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/828,121 Expired - Fee Related US7825813B2 (en) 2006-07-25 2007-07-25 Identifying activity in an area utilizing sound detection and comparison

Country Status (1)

Country Link
US (2) US7825813B2 (en)


Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7825813B2 (en) * 2006-07-25 2010-11-02 Intelehealth, Inc Identifying activity in an area utilizing sound detection and comparison
US8018337B2 (en) 2007-08-03 2011-09-13 Fireear Inc. Emergency notification device and system
KR101519104B1 (en) * 2008-10-30 2015-05-11 삼성전자 주식회사 Apparatus and method for detecting target sound
ITTO20090696A1 (en) * 2009-09-11 2011-03-12 Indesit Co Spa INTERFACEABLE STANDARD HOUSEHOLD APPLIANCES WITH ANY SYSTEM FOR TELE-HOME ASSISTANCE
TWI403304B (en) 2010-08-27 2013-08-01 Ind Tech Res Inst Method and mobile device for awareness of linguistic ability
US20130311080A1 (en) * 2011-02-03 2013-11-21 Nokia Corporation Apparatus Configured to Select a Context Specific Positioning System
US20130218582A1 (en) * 2011-11-08 2013-08-22 Cardiac Pacemakers, Inc. Telemedicine system for imd patients using audio/video data
US9615063B2 (en) * 2011-12-27 2017-04-04 Eye Stalks Corporation Method and apparatus for visual monitoring
CN102522082B (en) * 2011-12-27 2013-07-10 重庆大学 Recognizing and locating method for abnormal sound in public places
CN103841252A (en) * 2012-11-22 2014-06-04 腾讯科技(深圳)有限公司 Sound signal processing method, intelligent terminal and system
US9066207B2 (en) * 2012-12-14 2015-06-23 Apple Inc. Managing states of location determination
CN103730109B (en) * 2014-01-14 2016-02-03 重庆大学 A kind of abnormal sound in public places feature extracting method
US9981107B2 (en) 2014-06-05 2018-05-29 Eight Sleep Inc. Methods and systems for gathering and analyzing human biological signals
US9694156B2 (en) 2014-06-05 2017-07-04 Eight Sleep Inc. Bed device system and methods
US10069971B1 (en) * 2014-12-16 2018-09-04 Amazon Technologies, Inc. Automated conversation feedback
GB2538043B (en) 2015-03-09 2017-12-13 Buddi Ltd Activity monitor
US10134425B1 (en) * 2015-06-29 2018-11-20 Amazon Technologies, Inc. Direction-based speech endpointing
CN106548771A (en) * 2015-09-21 2017-03-29 上海日趋信息技术有限公司 For the method that speech recognition system eliminates burst noise
US10105092B2 (en) 2015-11-16 2018-10-23 Eight Sleep Inc. Detecting sleeping disorders
US10154932B2 (en) 2015-11-16 2018-12-18 Eight Sleep Inc. Adjustable bedframe and operating methods for health monitoring
EP3220367A1 (en) * 2016-03-14 2017-09-20 Tata Consultancy Services Limited System and method for sound based surveillance
KR101813593B1 (en) * 2016-03-15 2018-01-30 엘지전자 주식회사 Acoustic sensor, and home appliance system including the same
WO2018053627A1 (en) 2016-09-21 2018-03-29 Smart Wave Technologies Corp. Universal dispenser monitor
CN106411897B (en) * 2016-09-29 2019-10-18 恒大智慧科技有限公司 A kind of sign-off initiates user management method and equipment
US10741197B2 (en) 2016-11-15 2020-08-11 Amos Halava Computer-implemented criminal intelligence gathering system and method
US10310471B2 (en) * 2017-02-28 2019-06-04 Accenture Global Solutions Limited Content recognition and communication system
WO2019139939A1 (en) 2018-01-09 2019-07-18 Eight Sleep, Inc. Systems and methods for detecting a biological signal of a user of an article of furniture
GB2584241B (en) 2018-01-19 2023-03-08 Eight Sleep Inc Sleep pod
US10740389B2 (en) 2018-04-12 2020-08-11 Microsoft Technology Licensing, KKC Remembering audio traces of physical actions
US20230137193A1 (en) * 2021-11-01 2023-05-04 Optum, Inc. Behavior deviation detection with activity timing prediction

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5651070A (en) * 1995-04-12 1997-07-22 Blunt; Thomas O. Warning device programmable to be sensitive to preselected sound frequencies
US6028514A (en) * 1998-10-30 2000-02-22 Lemelson Jerome H. Personal emergency, safety warning system and method
US20020107694A1 (en) * 1999-06-07 2002-08-08 Traptec Corporation Voice-recognition safety system for aircraft and method of using the same
US20020171551A1 (en) * 2001-03-15 2002-11-21 Eshelman Larry J. Automatic system for monitoring independent person requiring occasional assistance
US20030135311A1 (en) * 2002-01-17 2003-07-17 Levine Howard B. Aircraft flight and voice data recorder system and method
US7825813B2 (en) * 2006-07-25 2010-11-02 Intelehealth, Inc Identifying activity in an area utilizing sound detection and comparison

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4303801A (en) * 1979-11-14 1981-12-01 Gte Products Corporation Apparatus for monitoring and signalling system
US4284849A (en) * 1979-11-14 1981-08-18 Gte Products Corporation Monitoring and signalling system
US5553609A (en) * 1995-02-09 1996-09-10 Visiting Nurse Service, Inc. Intelligent remote visual monitoring system for home health care service
US5504714A (en) * 1995-02-24 1996-04-02 The United States Of America As Represented By The Secretary Of The Navy Acoustic and environmental monitoring system
US20010044588A1 (en) * 1996-02-22 2001-11-22 Mault James R. Monitoring system
US6058076A (en) * 1997-12-08 2000-05-02 Komninos; Nikolaos I. Signal detector and method for detecting signals having selected frequency characteristics
US6072392A (en) * 1998-08-10 2000-06-06 Jose Armando Coronado Apparatus and method for monitoring and recording the audible environment of a child, patient, older person or pet left in the care of a third person or persons
US6445298B1 (en) * 2000-12-21 2002-09-03 Isaac Shepher System and method for remotely monitoring movement of individuals
JP3996428B2 (en) * 2001-12-25 2007-10-24 松下電器産業株式会社 Abnormality detection device and abnormality detection system
US20050275541A1 (en) * 2004-06-09 2005-12-15 Sengupta Uttam K Method and apparatus to perform remote monitoring
US20050286686A1 (en) * 2004-06-28 2005-12-29 Zlatko Krstulich Activity monitoring systems and methods
US7148797B2 (en) * 2004-07-23 2006-12-12 Innovalarm Corporation Enhanced fire, safety, security and health monitoring and alarm response method, system and device
US7126467B2 (en) * 2004-07-23 2006-10-24 Innovalarm Corporation Enhanced fire, safety, security, and health monitoring and alarm response method, system and device
US20060033625A1 (en) * 2004-08-11 2006-02-16 General Electric Company Digital assurance method and system to extend in-home living
US7405653B2 (en) * 2005-06-13 2008-07-29 Honeywell International Inc. System for monitoring activities and location


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140324428A1 (en) * 2013-04-30 2014-10-30 Ebay Inc. System and method of improving speech recognition using context
US9626963B2 (en) * 2013-04-30 2017-04-18 Paypal, Inc. System and method of improving speech recognition using context
US20170221477A1 (en) * 2013-04-30 2017-08-03 Paypal, Inc. System and method of improving speech recognition using context
US10176801B2 (en) * 2013-04-30 2019-01-08 Paypal, Inc. System and method of improving speech recognition using context
WO2015195503A1 (en) * 2014-06-17 2015-12-23 David Seese Individual activity monitoring system and method
CN110110613A (en) * 2019-04-19 2019-08-09 北京航空航天大学 A kind of rail traffic exception personnel's detection method based on action recognition

Also Published As

Publication number Publication date
US20080025477A1 (en) 2008-01-31
US7825813B2 (en) 2010-11-02

Similar Documents

Publication Publication Date Title
US7825813B2 (en) Identifying activity in an area utilizing sound detection and comparison
US11846954B2 (en) Home and building automation system
EP3588455B1 (en) Identifying a location of a person
US10176705B1 (en) Audio monitoring and sound identification process for remote alarms
US10290198B2 (en) Sensor system
US8237571B2 (en) Alarm method and system based on voice events, and building method on behavior trajectory thereof
EP3223253A1 (en) Multi-stage audio activity tracker based on acoustic scene recognition
Chen et al. An automatic acoustic bathroom monitoring system
WO2016142672A2 (en) Activity monitor
KR100719240B1 (en) Method for providing resident management service, system for using resident management service, and computer readable medium processing the method
EP4135569A1 (en) System and method for providing a health care related service
Srinivasan et al. Presence detection using wideband audio-ultrasound sensor
US11532223B1 (en) Vulnerable social group danger recognition detection method based on multiple sensing
JP2005172548A (en) Supervisory device and database construction method
Hervás et al. homeSound: A High Performance Platform for Massive Data Acquisition and Processing in Ambient Assisted Living Environments.
EP3767602A1 (en) Sensor system for activity recognition
WO2020206506A1 (en) Context responsive sensing apparatus, system and method for automated building monitoring and control
KR20230108853A (en) vulnerable social group danger recognition detection system based on multiple sensing
KR20230108857A (en) vulnerable social group danger recognition detection method based on multiple sensing
Istrate et al. Multichannel sound acquisition with stress situations determination for medical supervision in a smart house
Fezari et al. Distress Situation Detection Based Data Fusion Analysis for HSH

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION