WO2010090622A1 - Systems and methods for video analysis - Google Patents

Systems and methods for video analysis

Info

Publication number
WO2010090622A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
search result
target
user
clip
Prior art date
Application number
PCT/US2009/000841
Other languages
French (fr)
Inventor
Doug Anderson
Ryan Case
Rob Haitani
Bob Petersen
Original Assignee
Vitamin D, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vitamin D, Inc. filed Critical Vitamin D, Inc.
Priority to PCT/US2009/000841 priority Critical patent/WO2010090622A1/en
Publication of WO2010090622A1 publication Critical patent/WO2010090622A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • G06F16/7335Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Definitions

  • a method for providing an analysis includes four steps.
  • the first step is the step of identifying a target by a computing device.
  • the target is displayed from a video through a display of the computing device.
  • the second step of the method is the step of receiving a query related to the identified target via a user input to the computing device.
  • the third step of the method is the step of generating a search result based on the video.
  • the search result comprises information relating to the identified target.
  • the fourth step is the step of displaying the search result through the display of the computing device.
  • a system for video analysis includes a target identification module, an interface module, a search result module, and a display module.
  • the target identification module is configured for identifying a target from the video supplied to a computing device.
  • the interface module is in communication with the target identification module.
  • the interface module is configured for receiving a query related to the identified target via a user input to the computing device.
  • the search result module is in communication with the interface module.
  • the search result module is configured to generate a search result based on the video.
  • the search result comprises information related to the identified target.
  • the display module is in communication with the search result module.
  • the display module is configured to display the search result through the display of the computing device.
  • a system for generating a search result based on an analysis includes a processor and a computer readable storage medium.
  • the computer readable storage medium includes instructions for execution by the processor which causes the processor to provide a response.
  • the processor is coupled to the computer readable storage medium.
  • the processor executes the instructions on the computer readable storage medium to identify a target from a video supplied to a computing device, receive a query related to the identified target, and generate the search result based on the video.
  • the search result comprises information related to the identified target.
  • FIG. 1 is a diagram of an exemplary network environment for a system for video analysis.
  • FIG. 2 is a flow chart showing an exemplary method of providing a video analysis.
  • FIG. 3 is a diagram of an exemplary architecture of a system for video analysis.
  • FIG. 4 is an exemplary screenshot of a display on a computing device interacting with some of the various embodiments disclosed herein.
  • FIG. 5 is a second exemplary screenshot of a display on a computing device interacting with some of the various embodiments disclosed herein.
  • FIG. 6 is a third exemplary screenshot of a display on a computing device interacting with some of the various embodiments disclosed herein.
  • FIG. 7 is an exemplary screenshot of a display on a computing device during a quick search using some of the various embodiments disclosed herein.
  • FIG. 8 is an exemplary screenshot of a display on a computing device during a rule search using some of the various embodiments disclosed herein.
  • FIG. 9 is an exemplary screenshot of a pop-up alert displayed on a display of a computing device using some of the various embodiments disclosed herein.
  • systems and methods for providing analysis in a convenient and meaningful presentation that is beneficial to the user.
  • systems and methods for providing data analysis and generating reliable search results are provided herein.
  • Such systems and methods may be based on queries.
  • Queries may include rules that may be configurable by the user. In other words, the user may be given the flexibility to define the rules.
  • Such user-defined rules may be created, saved, edited, and re-applied to data of any type, including but not limited to data streams, data archives, and data presentations.
  • the technology provided herein may be user-extensible. For instance, the user is provided with the means to define rules, searches, and user selections (such as user selections regarding data sources, cameras, targets, triggers, responses, time frames, and the like).
  • Metadata in video may be searched using user-configurable rules for both real-time and archive searches.
  • metadata in video may be associated with camera, target and/or trigger attributes of a target that is logged for processing, analyzing, reporting and/or data mining methodologies.
  • Metadata may be extracted, filtered, presented, and used as keywords for searches. Metadata in video may also be accessible to external applications.
  • the technology herein may also utilize, manipulate, or display metadata for searching data archives.
  • the metadata may be associated with a video.
  • metadata in a video may be useful to define and/or recognize triggered events according to rules that are established by a user.
  • Metadata may also be useful to provide only those videos or video clips that conform to the parameters set by a user through rules. By doing this, only those videos or video clips that include triggered events as identified by the user are provided to the user. Thus, the user is not presented with a search result having hundreds or thousands of videos, but rather a much smaller set of videos that meet the user's requirements as set forth in rules. Further discussion regarding the use of metadata in video will be provided herein.
  • the technology may be implemented through a variety of means, such as object recognition, artificial intelligence, hierarchical temporal memory (HTM), any technology that recognizes patterns found in objects, and any technology that can establish categories of objects.
  • HTM: hierarchical temporal memory
  • this list is simply an exemplary one and the technology is not limited to a single type of implementation.
  • any type of analysis from any data source may be utilized with this technology.
  • For instance, instead of a video source, an external data source (such as a web-based data source in the form of a news feed) may be provided.
  • the technology is flexible to utilize any data source, and is not restricted to only video sources or video streams.
  • FIG. 1 depicts an exemplary networking environment 100 for a system that provides video analysis.
  • the exemplary networking environment 100 includes a network 110, one or more computing devices 120, one or more video sources 130, one or more optional towers 140, a server 150, and an optional external database 160.
  • the network 110 may be the Internet, a mobile network, a local area network, a home network, or any combination thereof.
  • the network 110 is configured to couple with one or more computing devices 120.
  • the computing device 120 may be a computer, a laptop computer, a desktop computer, a mobile communications device, a personal digital assistant, a video player, an entertainment device, a game console, a GPS device, a networked sensor, a card key reader, a credit card reader, a digital device, a digital computing device and any combination thereof.
  • the computing device 120 preferably includes a display (not shown).
  • a display may include one or more browsers, one or more user interfaces, and any combination thereof.
  • the display of the computing device 120 may be configured to show one or more videos.
  • a video may be a video feed, a video scene, a captured video, a video clip, a video recording, or any combination thereof.
  • the network 110 may also be configured to couple to one or more video sources 130.
  • the video may be provided by one or more video sources 130, such as a camera, a fixed security camera, a video camera, a video recording device, a mobile video recorder, a webcam, an IP camera, pre-recorded data (e.g., pre-recorded data on a DVD or a CD), previously stored data (including, but not limited to, previously stored data on a database or server), archived data (including but not limited to, video archives or historical data), and any combination thereof.
  • the computing device 120 may be a mobile communications device that is configured to receive and transmit signals via one or more optional towers 140.
  • the network 110 may be configured to couple to the server 150.
  • the server 150 may use one or more exemplary methods (such as the method 200 shown in FIG. 2).
  • the server 150 may also be included in one or more exemplary systems described herein (such as the system 300 shown in FIG. 3).
  • the server 150 may include an internal database to store data.
  • One or more optional external databases 160 may be configured to couple to the server 150 for storage purposes.
  • in FIG. 1, although one computing device 120 is shown, the technology allows for the network 110 to couple to one or more computing devices 120.
  • although one network 110 and one server 150 are shown in FIG. 1, one skilled in the art can appreciate that more than one network and/or more than one server may be utilized and still fall within the scope of various embodiments.
  • although FIG. 1 includes dotted lines to show relationships between elements, such relationships are exemplary. For instance, FIG. 1 shows that the video source 130 is coupled to the network 110, and the computing device 120 is coupled to the network 110.
  • the various embodiments described herein also encompass any networking environment where one or more video sources 130 are coupled to the computing device 120, and the computing device 120 is coupled to the network 110. Further details as to various embodiments of the system 100 of FIG. 1 can be found in the
  • the method 200 may include four steps.
  • a target is identified.
  • a query related to the identified target is received via a user input to the computing device.
  • a search result is generated.
  • the search result may be based on any type of data.
  • the search result may be based on one or more videos.
  • the search result includes information related to the identified target.
  • the search result is displayed.
  • the search result may be displayed through the display of the computing device.
  • the steps of method 200 are exemplary and may be combined, omitted, skipped, repeated, and/or modified.
  • any aspect of the method 200 may be user-extensible.
  • the target, the query, the search result, and any combination thereof may be user-extensible.
  • the user may therefore define any aspect of the method 200 to suit his requirements for analysis.
  • the feature of user-extensibility allows for this technology to be more robust and more flexible than the existing technology.
  • Users may combine targets, queries, and search results in various combinations to achieve customized results.
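A minimal sketch of how the four steps of the method 200 might be composed is given below. The names used here (VideoClip, identify_target, and so on) are illustrative assumptions only; the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class VideoClip:
    source: str                                        # e.g., camera location ("Living room")
    timestamp: float                                   # seconds since epoch
    targets: List[str] = field(default_factory=list)   # metadata labels attached to the clip


def run_analysis(video: List[VideoClip],
                 identify_target: Callable[[List[VideoClip]], str],
                 receive_query: Callable[[str], Callable[[VideoClip], bool]],
                 display: Callable[[List[VideoClip]], None]) -> List[VideoClip]:
    """Compose the four steps: identify, receive query, generate result, display."""
    target = identify_target(video)                # first step: identify a target
    matches = receive_query(target)                # second step: receive a query about it
    result = [c for c in video if matches(c)]      # third step: generate the search result
    display(result)                                # fourth step: display the search result
    return result
```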
  • the target is identified by a computing device 120.
  • the target is displayed from a video through a display of the computing device 120.
  • the target may include one of a recognized object, a motion sequence, a state, and any combination thereof.
  • the recognized object may be a person, a pet or a vehicle.
  • a motion sequence may be a series of actions that are being targeted for identification.
  • a state may be a condition or mode (such as the state of a flooded basement, an open window, or a machine when a belt has fallen off). Further information regarding target identification is provided in the U.S.
  • identifying the target from a video may include receiving a selection of a predefined object.
  • For instance, preprogrammed icons depicting certain objects (such as a person, a pet or a vehicle) that have already been learned and/or otherwise identified by the software program may be shown to the user through a display of the computing device 120.
  • the user may then select a predefined object (such as a person, a pet or a vehicle) by selecting the icon that best matches the target.
  • the user may drag and drop the icon onto another portion of the display of the computing device, such that the icon (sometimes referred to as a block) may be rendered on the display.
  • the icon may become part of a rule (such as the rule 405 shown in FIG. 4). For instance, if the user selects people as the target, an icon of "Look for: People" (such as the icon 455 of FIG. 4) may be rendered on the display of the computing device.
  • one or more icons may be added such that the one or more icons may be rendered on the display via a user interface. Exemplary user interfaces include, but are not limited to, "Add" button(s), drop down menu(s), menu command(s), one or more radio button(s), and any combination thereof.
  • one skilled in the art will recognize that any type of user interface may be used with this technology.
  • one or more icons may be removed from the display or modified as rendered on the display, through a user interface.
  • the technology allows for user-extensibility for defining targets. For instance, a user may "teach" the technology how to recognize new objects by assigning information (such as labels or tags) to clips of video that include the new objects. Thus, a software program may "learn" the differences between categories of pets, such as cats and dogs, or even categories of persons, such as adults, infants, men, and women.
  • identifying the target from a video may include recognizing an object based on a pattern. For instance, facial patterns (frowns, smiles, grimaces, smirks, and the like) of a person or a pet may be recognized.
  • a category may be established. For instance, a category of various human smiles may be established through the learning process of the software. Likewise, a category of a variety of human frowns may be established by the software. Further, a behavior of a target may be recognized. Thus, the software may establish any type of behavior of a target, such as the behavior of a target when the target is resting or fidgeting. The software may be trained to recognize new or previously unknown objects. The software may be programmed to recognize new actions, new behaviors, new states, and/or any changes in actions, behaviors or states. The software may also be programmed to recognize metadata from video and provide the metadata to the user through the display of a computing device 120.
  • the motion sequence may be a series of actions that are being targeted for identification.
  • a motion sequence is the sequence of lifting a rock and tossing the rock through a window.
  • targets may be user-extensible.
  • the technology allows for users to extend the set of targets to include targets that were not previously recognized by the program.
  • targets may include previously unrecognized motion sequences, such as the motion sequence of kicking a door down.
  • targets may even include visual, audio, and both visual-audio targets.
  • the software program may be taught to recognize a baby's face versus an adult female's face.
  • the program may be taught to recognize a baby's voice versus an adult female's voice.
  • a query related to the identified target is received via a user input to the computing device 120.
  • the query may be stored on a computer readable storage medium (not shown).
  • the query may include one or more user-defined rules. Rules may include source selection (such as video source selection), triggers, and responses. Rules are described in further detail in the U.S. Patent Application Serial No. filed on February 9, 2009, titled "Systems and Methods for Video
  • the query may include an instruction to provide one or more clips of one or more videos based on a specific time period or time frame.
  • the time period can be of any measurement, including but not limited to days, weeks, hours, minutes, seconds, and the like.
  • the query may include an instruction to provide all video clips within the last 24 hours.
  • the query may include an instruction to provide all video clips for the last 2 Thursdays.
  • the query may include an instruction to provide all video clips regardless of a video timestamp. This is exemplified by the duration field 750 showing "When: Anytime" in FIG. 7.
  • Metadata from a video including but not limited to time stamp and video properties relating to duration, may be extracted from the video. Such extracted metadata may then be used to determine whether a video or a clip of a video falls within a specific time period as defined in a query.
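As a rough illustration of the time-based queries described above (for example, "all video clips within the last 24 hours" or "anytime"), extracted timestamp metadata might be filtered as follows. The dictionary keys are assumptions made for this sketch only.

```python
from datetime import datetime, timedelta
from typing import List, Optional


def filter_by_time(clips: List[dict], window: Optional[timedelta],
                   now: Optional[datetime] = None) -> List[dict]:
    """Keep clips whose extracted 'timestamp' metadata falls within the window.

    A window of None corresponds to a "When: Anytime" query.
    """
    if window is None:
        return list(clips)
    now = now or datetime.now()
    cutoff = now - window
    return [c for c in clips if c["timestamp"] >= cutoff]


# Example: all video clips within the last 24 hours.
clips = [
    {"camera": "Living room", "timestamp": datetime.now() - timedelta(hours=3)},
    {"camera": "Living room", "timestamp": datetime.now() - timedelta(days=2)},
]
last_day = filter_by_time(clips, timedelta(hours=24))   # keeps only the first clip
```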
  • the query may include an instruction to provide one or more videos from one or more video sources.
  • a user may define which video source(s) should be included in the query.
  • An example is found in FIG. 7, where the user designated in a location field 730 that video from a camera in a living room should be the video source ("Camera: Living room").
  • a drop down menu is provided for the location field 730 so that a user may select which camera is included in the query.
  • a user may define a video source through any type of user input to a computing device 120, and the technology is not limited to only drop down menus for user selection of video sources.
  • the query may comprise an instruction to provide a video clip regarding the identified target.
  • the identified target may include one or more persons, vehicles or pets.
  • the identified target may be a user-defined target. User-defined targets are discussed at length in the U.S. Patent Application Serial No. filed on
  • the query may include an instruction to provide a video clip showing an identified target within a region.
  • a query may include an instruction to provide video clips that show people within a region designated by the user.
  • the user may designate a region by drawing a box (such as a bounding box), circle or other shape around a region that can be viewed by a video source.
  • a search result is generated.
  • the search result may be based on any type of data.
  • the search result may be based on one or more videos captured by one or more video sources.
  • the search result may include information related to the identified target.
  • Generating the search result may include filtering the video based on the query.
  • filtering videos based on a query can be accomplished by using metadata that is associated with the videos being analyzed. As discussed previously, this technology may extract, identify, utilize and determine the metadata that is associated with videos.
  • the metadata may include metadata relating to identified targets, attributes regarding identified targets, timestamps of videos or clips of videos, source settings (such as video source location or camera location), recognized behaviors, patterns, states, motion sequences, user-defined regions as captured by videos, and any further information that may be garnered to execute a query.
  • generating the search result may include providing one or more video clips with a text description of the information related to the identified target.
  • the text description of a given video clip may be all or part of a query, a rule, and/or metadata associated with the video clip. For instance, based on the object recognition aspects of this technology, the technology may recognize a user's pet dog. If the user's pet dog is seen moving in a designated region based on a video, then the generation of the search result may include providing the video clip of the dog in the region with the location of the video source.
  • the text description of "Pet - Living Room Camera" 850 is given to a video clip that shows the user's pet moving in a region of the living room.
  • the video clip may be represented with a thumbnail 860 of a frame where the identified target (pet) matched the executed search query.
  • the text description may include further information about the identified target, based on a query, a rule and/or metadata associated with the video clip.
  • the thumbnail 860 of the video clip "Pet - Living Room Camera" 850 (as shown in FIG. 8) has further text that provides the name of the pet (Apollo) and the region that the user designated (couch).
  • the technology may be able to distinguish the pet Apollo from another pet in the user's household.
  • Generating the search result may include providing a thumbnail of the video or video clip which may include a bounding box of the identified target that matched an executed search query.
  • the bounding box 870 of the identified target (a pet named Apollo) is shown to the user on the display of a computing device.
  • generating the search result may show a frame where the identified target matched an executed search query (such as the frame 860 of the pet Apollo in FIG. 8).
  • Generating a search result may include providing a timeline showing triggered events that occur within a specified time period, as shown in the video clip. Further discussion regarding timelines and triggered events is provided later.
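A hedged sketch of search-result generation as described above: matching clips are filtered by the query, and each result entry carries a text description built from the query and metadata, plus the frame (and bounding box) where the identified target matched. The field names below are assumptions, not part of the patent.

```python
from typing import Dict, List, Optional


def generate_search_results(clips: List[Dict], target: str,
                            region: Optional[str] = None) -> List[Dict]:
    """Filter clips by query parameters and build displayable result entries."""
    results = []
    for clip in clips:
        for event in clip.get("events", []):
            if event.get("target") != target:
                continue
            if region is not None and event.get("region") != region:
                continue
            results.append({
                # e.g. "Pet - Living Room Camera", built from query/metadata
                "description": f"{target} - {clip['camera']} Camera",
                "thumbnail_frame": event["frame"],   # frame where the target matched
                "bounding_box": event.get("bbox"),   # box around the identified target
                "timestamp": clip["timestamp"],
            })
    # Present results in chronological order, as in FIGs. 7 and 8.
    return sorted(results, key=lambda r: r["timestamp"])
```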
  • the search result is displayed to the user.
  • the search result may be displayed to the user on a display of a computing device 120.
  • the search result may be presented in any format or presentation.
  • One type of format is displaying the search results in a list with thumbnails for each of the video clips that match the search query or criteria, as described earlier herein.
  • Both FIGs. 7 and 8 show lists of search results. For instance, FIG. 7 shows three search results, with a thumbnail for each of the search results.
  • the method 200 may include steps that are not shown in FIG. 2.
  • the method 200 may include the step of receiving a selection of at least one delivery option for the search result.
  • a non-exhaustive and exemplary list of delivery options includes an electronic mail message delivery, a text message delivery, a multimedia message delivery, a forwarding of a web link delivery option, an option to upload the search result onto a website, and any combination thereof.
  • the method 200 may include the step of delivering the search result based on the delivery option selected.
  • the method 200 may also include the step of providing an alert for display on the display of the computing device 120.
  • An exemplary alert is a pop-up alert 900 in FIG. 9 which shows a thumbnail of a frame from a video clip.
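The delivery options listed above could be handled by a simple dispatcher such as the following sketch; the option names and transport callables are assumptions made for illustration.

```python
from typing import Callable, Dict


def deliver_search_result(result: dict, option: str,
                          transports: Dict[str, Callable[[dict], None]]) -> None:
    """Send a search result using the delivery option selected by the user."""
    if option not in transports:
        raise ValueError(f"unknown delivery option: {option}")
    transports[option](result)


# Example wiring with stub transports (email, text message, multimedia message,
# web link forwarding, website upload).
transports = {
    "email":   lambda r: print("email:", r["description"]),
    "sms":     lambda r: print("text message:", r["description"]),
    "mms":     lambda r: print("multimedia message:", r["description"]),
    "weblink": lambda r: print("web link:", r["description"]),
    "upload":  lambda r: print("uploaded:", r["description"]),
}
deliver_search_result({"description": "People - Living room"}, "email", transports)
```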
  • the system 300 may include four modules, namely, a target identification module 310, an interface module 320, a search result module 330, and a display module 340.
  • the system 300 can utilize any of the various exemplary methods described herein, including the method 200 (FIG. 2) described earlier herein. It will be appreciated by one skilled in the art that any of the modules shown in the exemplary system 300 can be combined, omitted, or modified, and still fall within the scope of various embodiments.
  • the target identification module 310 is configured for identifying a target from the video supplied to a computing device 120 (FIG. 1).
  • the interface module 320 is in communication with the target identification module 310.
  • the interface module 320 is configured for receiving a query related to the identified target via a user input to the computing device.
  • the search result module 330 is in communication with the interface module 320.
  • the search result module 330 is configured for generating a search result based on the video.
  • the search result may include information related to the identified target.
  • the display module 340 is in communication with the search result module.
  • the display module 340 is configured to display the search result through the display of the computing device 120.
  • the search result module 330 is configured to filter the video based on the query.
  • the search result module 330 may be configured to provide the video with a text description of the information related to the identified target.
  • the information related to the identified target may include metadata associated with the clip of the video, or it may include all or part of the query.
  • the search result module 330 is also configured to provide a thumbnail of the video clip, as described earlier herein.
  • the system 300 may comprise a processor (not shown) and a computer readable storage medium (not shown).
  • the processor and/or the computer readable storage medium may act as one or more of the four modules (i.e., the target identification module 310, the interface module 320, the search result module 330, and the display module 340) of the system 300.
  • examples of a computer readable storage medium may include discs, memory cards, and/or servers. Instructions may be retrieved and executed by the processor. Some examples of instructions include software, program code, and firmware. Instructions are generally operational when executed by the processor to direct the processor to operate in accord with embodiments of the invention.
  • various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of various embodiments.
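The four modules of the system 300 might be wired together roughly as in the sketch below. This is an assumption-laden outline (the recognition logic is stubbed out), not the actual implementation.

```python
from typing import Dict, List


class TargetIdentificationModule:
    def identify(self, video: List[Dict]) -> str:
        # Placeholder for object recognition / HTM; returns a target label.
        return "People"


class InterfaceModule:
    def __init__(self, target_module: TargetIdentificationModule):
        self.target_module = target_module

    def receive_query(self, user_input: Dict) -> Dict:
        # A query related to the identified target, taken from user input.
        return {"target": user_input["target"], "region": user_input.get("region")}


class SearchResultModule:
    def __init__(self, interface_module: InterfaceModule):
        self.interface_module = interface_module

    def generate(self, video: List[Dict], query: Dict) -> List[Dict]:
        # Filter the video based on the query (cf. method 200).
        return [clip for clip in video if query["target"] in clip.get("targets", [])]


class DisplayModule:
    def __init__(self, search_result_module: SearchResultModule):
        self.search_result_module = search_result_module

    def display(self, results: List[Dict]) -> None:
        for entry in results:
            print(entry)   # stand-in for rendering on the computing device's display
```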
  • in FIG. 4, an exemplary screenshot of a rule editor 400 as depicted on a display of a computing device 120 (FIG. 1) is shown.
  • the rule editor 400 is a feature of the technology that allows the user to define one or more aspects of a given rule or query 405.
  • a rule name for a given rule (such as a rule name of "People in the garden") is provided in a name field 410.
  • the rule editor 400 allows the user to provide names to the rule 405 that the user defines or otherwise composes.
  • a plurality of icons 420 may be provided to the user.
  • An icon of a video source 440 may be provided.
  • the video source 440 may be displayed with one or more settings, such as the location of the camera ("Video source: Side camera" in FIG. 4).
  • a user may click on the video source icon 440, drag it across to another portion of the display, and drop it in an area of the display.
  • the dragged and dropped icon then becomes a selected side camera video source icon 445 ("Video source: Side camera"), which is shown in FIG. 4 as being located near the center of the display.
  • a user may click on the video source icon 440 until a corresponding icon of the selected video source 445 (with a setting, such as the location of the selected video source) is depicted in the rule 405.
  • the user may be provided with one or more video sources 440, and the user can select from those video sources 440.
  • a list of possible video sources may appear on the display.
  • the list of possible video sources may appear on a right portion of the display.
  • the user may add, remove, or modify one or more icons (such as the video source icon 440) from the display through one or more user interfaces, such as an "Add" button, drop down menu(s), menu command(s), one or more radio button(s), and any combination thereof.
  • icons include but are not limited to icons representing triggers, targets, and responses.
  • once a video source 440 is selected and displayed as part of the rule 405 (such as the selected side camera video source icon 445), the user may define the target that is to be identified by a computing device.
  • the user may select the "Look for" icon 450 on a left portion of the display of the computing device.
  • a selection of preprogrammed targets is provided to the user.
  • the user may select one target (such as the "Look for: People" icon 455, as shown in the exemplary rule 405 of FIG. 4).
  • the user may select one or more triggers.
  • the user may select a trigger via a user input to the computing device 120.
  • a plurality of trigger icons 460, 465 may be provided to the user for selection.
  • Trigger icons depicted in FIG. 4 are the "Where" icon 460 and the "When" icon 465. If the "Where" icon 460 is selected, then the "Look Where" pane 430 on the right side of the display may be provided to the user.
  • the "Look Where" pane 430 allows for the user to define the boundaries of a location or region that the user wants movements to be monitored. For instance, the user may define the boundaries of a location by drawing a box, a circle, or any other shape. In FIG.
  • the user has drawn a bounding box around an area that is on the left hand side of a garbage can.
  • the bounding box surrounds an identified target.
  • the bounding box may be used to determine whether a target has entered a region, or it may serve as a visual cue to the user showing where the target is in the video.
  • Regions may be named by the user.
  • queries or rules may be named by the user. Regions, queries and/or rules may be saved by the user for later use. Rules may be processed in real time.
  • the bounding box may track an identified target.
  • the bounding box may track an identified target that has been identified as a result of an application of a rule.
  • the bounding box may resize based on the dimensions of the identified target.
  • the bounding box may move such that it tracks the identified target as the identified target moves in a video. For instance, a clip of a video may be played back, and during playback, the bounding box may surround and/or resize to the dimensions of the identified target. If the identified target moves or otherwise makes an action that causes the dimensions of the identified target to change, the bounding box may resize such that it may surround the identified target while the identified target is shown in the video, regardless of the changing dimensions of the identified target.
  • FIG. 7 shows an exemplary bounding box 775.
  • one or more bounding boxes may be shown to the user to assist in tracking one or more identified targets while a video is played.
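One plausible, purely illustrative way to realize the bounding-box behaviour described above is to recompute the box from the identified target's per-frame position and dimensions during playback:

```python
from dataclasses import dataclass
from typing import Dict, Iterable, List


@dataclass
class BoundingBox:
    x: int
    y: int
    width: int
    height: int


def track_target(detections: Iterable[Dict]) -> List[BoundingBox]:
    """Recompute the box for each frame so it surrounds the identified target.

    Each detection is assumed to carry the target's position and dimensions for
    one frame; the box therefore moves and resizes as the target does.
    """
    return [BoundingBox(d["x"], d["y"], d["width"], d["height"]) for d in detections]


# Playback sketch: the target moves and grows slightly between frames.
frames = [{"x": 10, "y": 20, "width": 40, "height": 80},
          {"x": 14, "y": 22, "width": 44, "height": 84}]
boxes = track_target(frames)
```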
  • the "Look Where" pane 430 may allow the user to select a radio button that defines the location attribute of the identified target as a trigger.
  • the user may select the option that movement "Anywhere" is a trigger.
  • the user may select the option that "inside" a designated region (such as "the garden") is a trigger.
  • the user may select "outside" a designated region.
  • the user may select an option that movement that is "Coming in through a door" is a trigger.
  • the user may select an option that movement that is "Coming out through a door" is a trigger.
  • the user may select an option that movement that is "Walking on part of the ground" (not shown) is a trigger.
  • the technology may recognize when an object is walking on part of the ground.
  • the technology may recognize movement and/or objects in three-dimensional space, even when the movement and/or objects are shown on the video in two dimensions. Further, the user may select an option that "crossing a boundary" is a trigger.
  • the "When" icon 465 is selected, then the "Look When” pane (not shown) on the right side of the display is provided to the user.
  • the "Look When” pane may allow for the user to define the boundaries of a time period that the user wants movements to be monitored. Movement may be monitored when motion is visible for more than a given number of seconds. Alternatively, movement may be monitored for when motion is visible for less than a given number of seconds. Alternatively, movement may be monitored within a given range of seconds. In other words, a specific time duration may be selected by a user.
  • any measurement of time including, but not limited to, weeks, days, hours, minutes, or seconds
  • the user selection can be through any means (including, but not limited to, dropping and dragging icons, checkmarks, selection highlights, radio buttons, text input, and the like).
  • a response may be provided.
  • One or more of a plurality of response icons (such as Record icon 470, Notify icon 472, Report icon 474, and Advanced icon 476) may be selected by the user.
  • if the Record icon 470 is selected by the user, then "If seen: Record to video" 490 appears on the display of the computing device 120.
  • the rule 405 of FIG. 4 entitled "People in the garden" states that, using the side camera as a video source, look for people that are inside the garden. If the rule is met, then the response is: "if seen, record to video" (490 of FIG. 4).
  • if the Notify icon 472 is selected, then a notification may be sent to the computing device 120 of the user. A user may select the response of "If seen: Send email" (not shown) as part of the notification. The user may drag and drop a copy of the Notify icon 472 and then connect the Notify icon 472 to the rule 405.
  • a notification may also be a text message sent to a cell phone, a multimedia message sent to a cell phone, or an automated phone call. If the Report icon 474 is selected, then a generation of a report may be the response. If the Advanced icon 476 is selected, the computer may play a sound to alert the user. Alternatively, the computer may store the video onto a database or other storage means associated with the computing device 120 or upload a video directly to a user-designated URL. The computer may interact with external application interfaces, or it may display custom text and/or graphics.
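The rule assembled in FIG. 4 ("People in the garden") could be represented and evaluated roughly as follows. The dictionary layout is an assumption made for illustration, not a format defined by the patent.

```python
rule = {
    "name": "People in the garden",
    "video_source": "Side camera",                 # "Video source: Side camera"
    "look_for": "People",                          # "Look for: People"
    "triggers": [{"where": "inside", "region": "the garden"}],
    "responses": ["record_to_video", "send_email"],  # "If seen: ..." responses
}


def evaluate_rule(rule: dict, event: dict) -> list:
    """Return the rule's responses if a detected event satisfies target and triggers."""
    if event.get("source") != rule["video_source"]:
        return []
    if event.get("target") != rule["look_for"]:
        return []
    for trigger in rule["triggers"]:
        if trigger["where"] == "inside" and event.get("region") != trigger["region"]:
            return []
    return rule["responses"]


# A person seen inside the garden by the side camera triggers both responses.
event = {"source": "Side camera", "target": "People", "region": "the garden"}
actions = evaluate_rule(rule, event)   # ["record_to_video", "send_email"]
```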
  • FIG. 5 shows a screenshot 500 of a display of a computing device 120, where a rule 505 is known as a complex rule.
  • the user may select one or more target(s), one or more trigger(s), and any combination thereof, and may utilize Boolean language (such as "and" and "or") in association with the selected target(s) and/or trigger(s).
  • FIG. 5 shows Boolean language being used with targets.
  • when the user selects the "Look for" icon 450, the user may be presented with a selection list of possible targets 510, which include People, Pets, Vehicles, Unknown Objects and All Objects.
  • the selection list of possible targets 510 may be a drop down menu.
  • the user may then select the targets he or she wishes to select.
  • the user selected targets in such a way that the program will identify targets that are either People ("Look for: People") or Pets ("Look for: Pets"), and the program will also look for targets that are Vehicles ("Look for: Vehicles").
  • the selection list of possible targets 510 may include an "Add object" or "Add target" option, which the user may select in order to "train" the technology to recognize an object or a target that was previously unknown or not identified by the technology.
  • the user may select a Connector icon 480 to connect one or more icons, in order to determine the logic flow of the rule 505 and/or the logic flow between icons that have been selected.
  • Boolean language is used to apply to multiple triggers for a particular target.
  • Boolean language may be applied, such that the user has instructed the technology to locate a person "in the garden OR (on the sidewalk AND moving left to right)." With this type of instruction, the technology will locate either persons in the garden or persons that are on the sidewalk who are also moving left to right.
  • the user may include Boolean language that applies to both one or more target(s) and one or more trigger(s).
  • a further embodiment is a rule 505 that includes Boolean language that provides a sequence (such as "AND THEN"). For instance, a user may select two or more triggers to occur in a sequence (e.g., "Trigger A" happens AND THEN "Trigger B" happens). Further, one skilled in the art will understand that a rule 505 may include one or more nested rules, as well as one or more rules in a sequence, in a series, or in parallel. Rules may be ordered in a tree structure with multiple branches, with one or more responses coupled to the rules.
  • the user may select the targets by placing checkmarks next to the targets he wishes to designate in the selection list of possible targets 510.
  • the selection of targets can be accomplished by any means of selection, and the selection of targets is not limited to highlighting or placing checkmarks next to selected targets.
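The Boolean combinations discussed above (including the "AND THEN" sequencing) can be sketched as composable predicates. This is a simplified assumption of how such logic might be expressed; none of these helper names come from the patent.

```python
from typing import Callable, List

Predicate = Callable[[dict], bool]


def and_(*preds: Predicate) -> Predicate:
    return lambda event: all(p(event) for p in preds)


def or_(*preds: Predicate) -> Predicate:
    return lambda event: any(p(event) for p in preds)


def and_then(first: Predicate, second: Predicate) -> Callable[[List[dict]], bool]:
    """'Trigger A happens AND THEN Trigger B happens' over an ordered event list."""
    def check(events: List[dict]) -> bool:
        for i, event in enumerate(events):
            if first(event):
                return any(second(later) for later in events[i + 1:])
        return False
    return check


# "in the garden OR (on the sidewalk AND moving left to right)"
in_garden = lambda e: e.get("region") == "garden"
on_sidewalk = lambda e: e.get("region") == "sidewalk"
left_to_right = lambda e: e.get("direction") == "left_to_right"
person_trigger = or_(in_garden, and_(on_sidewalk, left_to_right))
```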
  • a monitor view 600 of the one or more video sources 130 (FIG. 1) is provided.
  • the monitor view 600 provides an overall glance of one or more video sources 130, in relation with certain timelines of triggered events and rules established by users.
  • the monitor view 600 is a live view of a selected camera.
  • the monitor view 600 may provide a live thumbnail of a camera view.
  • the timelines of triggered events may be representations of metadata that are identified and/or extracted from the video by the software program.
  • the monitor view 600 includes thumbnail video views of the Backyard 610, Front 620, and Office 630. Further, as depicted in FIG. 6, the thumbnail video view of the Backyard 610 is selected and highlighted on the left side of the display. On the right hand side of the display, a larger view 640 of the video that is presented in the thumbnail video view of the Backyard 610 may be provided to the user, along with a time and date stamp 650. Also, the monitor view 600 may provide rules and associated timelines. For instance, the video source 130 located in the Backyard 610 has two rule applications, namely, "People - Walking on the lawn" 660 and "Pets - In the Pool" 670.
  • a first timeline 665 is associated with the rule application "People - Walking on the lawn" 660.
  • a second timeline 675 is associated with the rule application "Pets - In the Pool" 670.
  • a rule application may comprise a set of triggered events that meet requirements of a rule, such as "People in the garden" 405 (FIG. 4). The triggered events are identified in part through the use of metadata of the video that is recognized, extracted or otherwise identified by the program.
  • the first timeline 665 is from 8 am to 4 pm.
  • the first timeline 665 shows five vertical lines. Each vertical line may represent the amount of time in which movement was detected according to the parameters of the rule application "People - Walking on the lawn" 660. In other words, there were five times during the time period of 8 am to 4 pm in which movement was detected that is likely to be people walking on the lawn.
  • the second timeline 675 is also from 8 am to 4 pm.
  • the second timeline 675 shows only one vertical line, which means that in one time period (around 10:30 am), movement was detected according to the parameters of the rule application "Pets - In the Pool" 670. According to FIG. 6, around 10:30 am, movement was detected that is likely to be one or more pets being in the pool.
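The timelines in the monitor view could be derived by bucketing each rule application's triggered events into a fixed time range. The text rendering below is only a stand-in for the graphical timeline and uses assumed data.

```python
from datetime import datetime
from typing import List


def render_timeline(events: List[datetime], start_hour: int = 8,
                    end_hour: int = 16, slots: int = 48) -> str:
    """Mark each slot (within start_hour..end_hour) that contains a triggered event."""
    marks = ["-"] * slots
    span = (end_hour - start_hour) * 3600
    for moment in events:
        offset = (moment.hour - start_hour) * 3600 + moment.minute * 60 + moment.second
        if 0 <= offset < span:
            marks[int(offset / span * slots)] = "|"
    return "".join(marks)


# "Pets - In the Pool": a single triggered event around 10:30 am.
pets_in_pool = [datetime(2009, 2, 9, 10, 30)]
print(render_timeline(pets_in_pool))
```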
  • FIG. 7 shows a screenshot 700 of a display of a computing device 120 following the execution of a quick search, according to one exemplary embodiment.
  • the quick search option 710 is one of two options for searching in FIG. 7.
  • the second option is a rule search option 720, which will be discussed in greater detail in FIG. 8.
  • a quick search may allow for a user to quickly search for videos or clips of videos that meet certain criteria.
  • the criteria may include information provided in a location field 730, a target field 740, and a duration field 750. Searches may be done immediately upon receipt of the criteria. Searches may be done on live video and/or archived video.
  • in FIG. 7, the user has selected "Living room" for the location of the camera (or video source) in the location field 730, "people" for identified targets to look for in the target field 740, and "anytime" as the criteria for the timestamp of the video to be searched in the duration field 750.
  • the user has asked for a quick search of videos that have been captured by the living room camera.
  • the exemplary quick search in FIG. 7 is to identify all the triggered events in which people were in the living room at anytime. By doing so, the quick search may narrow the video clips from a huge set to a much smaller subset, where the subset conforms to the user's query or search parameters.
  • Search results may filter existing video to display to the user only the relevant content.
  • the relevant content may be that content which matches or fits the criteria selected by the user.
  • the relevant content may be that content which matches or fits the rule defined and selected by the user.
  • the technology may use object recognition and metadata associated with video clips in order to conduct a search and generate a search result.
  • the quick search has provided a search result of only three video clips.
  • the three video clips may be listed in a chronological order, with a thumbnail of a frame showing the identified target and a bounding box.
  • Each of the three video clips includes a text description of "People - Living room.” The text description may have been generated from information related to the identified objects and/or metadata associated with the video clips.
  • one of the three video clips 760 is highlighted and selected by the user. Once a video clip is selected, a larger image 765 of the video clip 760 is provided to the user on the display of the computing device 120.
  • the larger image 765 may include a bounding box 775 of the identified target that matched the executed search criteria or rule. Videos may start playing at the frame where the identified target matched the executed search.
  • the larger image 765 may also include a title 770, such as "Living room,” to indicate the setting or location of the camera or video source.
  • Controls for videos 780 may be provided to the user.
  • the user may be able to playback, rewind, fast forward, or skip throughout a video using the appropriate video controls 780.
  • the user may also select the speed in which the user wishes to view the video using a playback speed control 785.
  • a timeline control 790 that shows all the instances of a current search over a given time period may be displayed to the user.
  • the exemplary timeline control 790 is a timeline that stretches from 8 am to 6 pm, and it shows each instance of a search result that matches the quick search criteria 730, 740, and 750.
  • FIG. 8 shows a screenshot 800 of a display of a computing device 120 following the execution of a rule search.
  • the rule search option 720 has been selected by the user in the example of FIG. 8.
  • a rule search is a search based on a user-defined rule.
  • a rule may include a target and a trigger.
  • since targets and triggers can be defined by users, rules and portions of rules are user-extensible. Further information regarding rules may be found in the U.S. Patent
  • a rule may be saved by a user.
  • three rules have been saved by the user. Those rules are called “Approaching the door,” “Climbing over the fence into the garden” and “Loitering by the fence.” Saved rules may be displayed in a rule list 810.
  • One of the saved rules may be selected, along with a definition of a time frame through the duration field 750, to execute a rule search.
  • the rule "Climbing over the fence into the garden” has been selected by the user and the time frame is "anytime.”
  • the exemplary rule search in FIG. 8 is for the technology to search any videos that show an object climbing over the fence into the garden at anytime.
  • rules may be modified or edited by a user.
  • a user may edit a rule by selecting a rule and hitting the "Edit" button 820.
  • a user may change any portion of a rule using the "Edit” button.
  • a user may select a rule and then the user may be presented with the rule as it currently stands in the rule editor 400 (FIG. 4).
  • the user may edit the rule by changing the flow logic of a rule or by modifying the targets, triggers, and/or responses of the existing rule.
  • a new rule may be created as well, using the rule editor 400, and then the user may save the rule, thereby adding the new rule to the rule list 810.
  • Rules may be uploaded and downloaded by a user to the Internet, such that rules can be shared amongst users of this technology. For example, a first user may create a sprinkler rule to turn on the sprinkler system when a person jumps a fence and enters a region. The first user may then upload his sprinkler rule onto the Internet, such that a second user can download the first user's sprinkler rule. The second user may then use the first user's sprinkler rule in its entirety, or the second user may modify the first user's sprinkler rule to add that if a pet jumps the fence and enters the region, then the sprinkler will also activate. The second user may then upload the modified sprinkler rule onto the Internet, such that the first user and any third party may download the modified sprinkler rule.
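Sharing a rule between users, as in the sprinkler example above, amounts to serializing it into a portable format for upload and download. The JSON layout here is an assumption; the patent does not specify how rules are encoded.

```python
import json

# First user's rule: turn on the sprinkler when a person jumps the fence into a region.
sprinkler_rule = {
    "name": "Sprinkler on fence jump",
    "look_for": ["People"],
    "triggers": [{"action": "jumps_fence", "region": "garden"}],
    "responses": ["activate_sprinkler"],
}

# Serialize for upload so other users can download and reuse the rule.
uploaded = json.dumps(sprinkler_rule)

# Second user downloads the rule and modifies it so pets also trigger the sprinkler.
downloaded = json.loads(uploaded)
downloaded["look_for"].append("Pets")
modified_upload = json.dumps(downloaded)   # re-shared for the first user or third parties
```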
  • rules may be defined for archival searches.
  • videos may be archived using a database or an optional video storage module (not shown) in the system 300 (FIG. 3). Rules may be selected for execution and application on those archived videos. Based on historical learning, after archived videos have been recorded, a user may also execute a new rule search on archived videos. The user may define a new rule, the user may use another user's rules that have been shared, or the user may download a new rule from the Internet.
  • the optional video storage module (not shown) in the system 300 may be referenced to perform a subsequent analysis or application of rules.
  • the technology includes a pop-up alert 900.
  • the pop-up alert 900 may be for display on the display of the computing device 120.
  • the pop-up alert 900 includes a thumbnail of a frame from a video clip.
  • text may be presented to the user in the pop-up alert 900, advising the user that a person was seen entering the garden via the side camera, based on object recognition, historical learning, and metadata associated with the video clip.
  • the pop-up alert 900 may be a result of a rule application where the user has requested the system to inform the user when persons are seen entering the garden via the side camera.
  • the pop-up alert 900 may include an invitation for the user to view the relevant video clip provided by the side camera.
  • This pop-up alert 900 may also include a timestamp, which may also be provided by metadata associated with the video clip.
  • External data sources, such as web-based data sources, can be utilized in the system 100 of FIG. 1.
  • Such external data sources may be used either in conjunction with or in place of the one or more video sources 130 in the system 100 of FIG. 1.
  • the technology encompasses embodiments that include data from the Internet, such as a news feed.
  • the system 100 of FIG. 1 allows for such a rule and response to be defined by a user and then followed by the system 100.
  • a rule includes a target and a trigger.
  • a rule may include a target, a trigger, a response, and any combination thereof.

Abstract

Embodiments of systems and methods for video analysis are given. A method for providing a video analysis includes four steps. A target is identified by a computing device and is displayed from a video through a display of the computing device. A query related to the identified target is received via a user input to the computing device. A search result is generated based on the video. The search result includes information related to the identified target. The search result is then displayed through the display of the computing device.

Description

SYSTEMS AND METHODS FOR VIDEO ANALYSIS

SUMMARY OF THE INVENTION
[0001] Embodiments of systems and methods for video analysis are provided herein. In a first embodiment, a method for providing an analysis includes four steps. The first step is the step of identifying a target by a computing device. The target is displayed from a video through a display of the computing device. The second step of the method is the step of receiving a query related to the identified target via a user input to the computing device. The third step of the method is the step of generating a search result based on the video. The search result comprises information relating to the identified target. The fourth step is the step of displaying the search result through the display of the computing device.
[0002] In a second embodiment, a system for video analysis is provided. The system includes a target identification module, an interface module, a search result module, and a display module. The target identification module is configured for identifying a target from the video supplied to a computing device. The interface module is in communication with the target identification module. The interface module is configured for receiving a query related to the identified target via a user input to the computing device. The search result module is in communication with the interface module. The search result module is configured to generate a search result based on the video. The search result comprises information related to the identified target. The display module is in communication with the search result module. The display module is configured to display the search result through the display of the computing device.
[0003] According to a third embodiment, a system for generating a search result based on an analysis is supplied. The system includes a processor and a computer readable storage medium. The computer readable storage medium includes instructions for execution by the processor which causes the processor to provide a response. The processor is coupled to the computer readable storage medium. The processor executes the instructions on the computer readable storage medium to identify a target from a video supplied to a computing device, receive a query related to the identified target, and generate the search result based on the video. The search result comprises information related to the identified target.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 is a diagram of an exemplary network environment for a system for video analysis.
[0005] FIG. 2 is a flow chart showing an exemplary method of providing a video analysis.
[0006] FIG. 3 is a diagram of an exemplary architecture of a system for video analysis.
[0007] FIG. 4 is an exemplary screenshot of a display on a computing device interacting with some of the various embodiments disclosed herein.
[0008] FIG. 5 is a second exemplary screenshot of a display on a computing device interacting with some of the various embodiments disclosed herein.
[0009] FIG. 6 is a third exemplary screenshot of a display on a computing device interacting with some of the various embodiments disclosed herein.
[0010] FIG. 7 is an exemplary screenshot of a display on a computing device during a quick search using some of the various embodiments disclosed herein.
[0011] FIG. 8 is an exemplary screenshot of a display on a computing device during a rule search using some of the various embodiments disclosed herein.
[0012] FIG. 9 is an exemplary screenshot of a pop-up alert displayed on a display of a computing device using some of the various embodiments disclosed herein.
DETAILED DESCRIPTION OF THE INVENTION
[0013] There are inherent difficulties associated with searching and analyzing data using existing technologies. Existing technologies are time-consuming, inconvenient, and unreliable, and they produce false positives. Furthermore, existing technologies tend not to be helpful insofar as they cannot reduce or filter a large set of data to a meaningful subset for presentation to a user.
[0014] In contrast, the technology presented herein provides embodiments of systems and methods for providing analysis in a convenient and meaningful presentation that is beneficial to the user. Specifically, systems and methods for providing data analysis and generating reliable search results are provided herein. Such systems and methods may be based on queries. Queries may include rules that may be configurable by the user. In other words, the user may be given the flexibility to define the rules. Such user-defined rules may be created, saved, edited, and re-applied to data of any type, including but not limited to data streams, data archives, and data presentations. The technology provided herein may be user-extensible. For instance, the user is provided with the means to define rules, searches, and user selections (such as user selections regarding data sources, cameras, targets, triggers, responses, time frames, and the like).
[0015] Moreover, the technology described herein provides systems and methods for providing the user with a selection of existing rules and/or time frames to execute searches. Also, data may be pre-processed to generate metadata, which may then be searched with one or more rules. For instance, metadata in video may be searched using user-configurable rules for both real-time and archive searches. As will be described in greater detail herein, metadata in video may be associated with camera, target and/or trigger attributes of a target that is logged for processing, analyzing, reporting and/or data mining methodologies. Metadata may be extracted, filtered, presented, and used as keywords for searches. Metadata in video may also be accessible to external applications.
[0016] The technology herein may also utilize, manipulate, or display metadata for searching data archives. In some embodiments, the metadata may be associated with a video. For instance, metadata in a video may be useful to define and/or recognize triggered events according to rules that are established by a user. Metadata may also be useful to provide only those videos or video clips that conform to the parameters set by a user through rules. By doing this, only those videos or video clips that include triggered events as identified by the user are provided to the user. Thus, the user is not presented with a search result having hundreds or thousands of videos, but rather a much smaller set of videos that meet the user's requirements as set forth in rules. Further discussion regarding the use of metadata in video will be provided herein.
[0017] The technology may be implemented through a variety of means, such as object recognition, artificial intelligence, hierarchical temporal memory (HTM), any technology that recognizes patterns found in objects, and any technology that can establish categories of objects. However, one skilled in the art will recognize that this list is simply an exemplary one and the technology is not limited to a single type of implementation.
[0018] One skilled in the art will recognize that although some embodiments are provided herein for video analysis, any type of analysis from any data source may be utilized with this technology. For instance, an external data source (such as a web-based data source in the form of a news feed) may be provided instead of a video source. The technology is flexible enough to utilize any data source, and it is not restricted to only video sources or video streams.
[0019] FIG. 1 depicts an exemplary networking environment 100 for a system that provides video analysis. Like numbered elements in the figures refer to like elements. The exemplary networking environment 100 includes a network 110, one or more computing devices 120, one or more video sources 130, one or more optional towers 140, a server 150, and an optional external database 160. The network 110 may be the Internet, a mobile network, a local area network, a home network, or any combination thereof. The network 110 is configured to couple with one or more computing devices 120.
[0020] The computing device 120 may be a computer, a laptop computer, a desktop computer, a mobile communications device, a personal digital assistant, a video player, an entertainment device, a game console, a GPS device, a networked sensor, a card key reader, a credit card reader, a digital device, a digital computing device and any combination thereof. The computing device 120 preferably includes a display (not shown). One skilled in the art will recognize that a display may include one or more browsers, one or more user interfaces, and any combination thereof. The display of the computing device 120 may be configured to show one or more videos. A video may be a video feed, a video scene, a captured video, a video clip, a video recording, or any combination thereof.
[0021] The network 110 may also be configured to couple to one or more video sources 130. The video may be provided by one or more video sources 130, such as a camera, a fixed security camera, a video camera, a video recording device, a mobile video recorder, a webcam, an IP camera, pre-recorded data (e.g., pre-recorded data on a DVD or a CD), previously stored data (including, but not limited to, previously stored data on a database or server), archived data (including but not limited to, video archives or historical data), and any combination thereof. The computing device 120 may be a mobile communications device that is configured to receive and transmit signals via one or more optional towers 140.
[0022] Still referring to FIG. 1, the network 110 may be configured to couple to the server 150. As will be described herein, the server 150 may use one or more exemplary methods (such as the method 200 shown in FIG. 2). The server 150 may also be included in one or more exemplary systems described herein (such as the system 300 shown in FIG. 3). The server 150 may include an internal database to store data. One or more optional external databases 160 may be configured to couple to the server 150 for storage purposes.
[0023] Notably, one skilled in the art can recognize that all the figures herein are exemplary. For all the figures, the layout, arrangement and the number of elements depicted are exemplary only. Any number of elements may be used to implement the technology of the embodiments herein. For instance, in FIG. 1, although one computing device 120 is shown, the technology allows for the network 110 to couple to one or more computing devices 120. Likewise, although one network 110 and one server 150 are shown in FIG. 1, one skilled in the art can appreciate that more than one network and/or more than one server may be utilized and still fall within the scope of various embodiments. Also, although FIG. 1 includes dotted lines to show relationships between elements, such relationships are exemplary. For instance, FIG. 1 shows that the video source 130 is coupled to the network 110, and the computing device 120 is coupled to the network 110. However, the various embodiments described herein also encompass any networking environment where one or more video sources 130 are coupled to the computing device 120, and the computing device 120 is coupled to the network 110. Further details as to various embodiments of the system 100 of FIG. 1 can be found in the
U.S. Patent Application Serial No. filed on February 9, 2009, titled
"Systems and Methods for Video Monitoring," which is hereby incorporated by reference.
[0024] Turning to FIG. 2, an exemplary method 200 for providing video analysis is shown. The method 200 may include four steps. At step 202, a target is identified. At step 204, a query related to the identified target is received via a user input to the computing device. At step 206, a search result is generated. The search result may be based on any type of data. The search result may be based on one or more videos. The search result includes information related to the identified target. At step 208, the search result is displayed. The search result may be displayed through the display of the computing device. As with all the methods described herein, the steps of method 200 are exemplary and may be combined, omitted, skipped, repeated, and/or modified.
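By way of illustration only, the four steps of method 200 can be pictured as a small pipeline. The function names and signatures below are assumptions made for this sketch, not a required implementation.

```python
def run_analysis(video, user_query, identify, search, display):
    """Illustrative pipeline for the four steps of method 200.

    `identify`, `search`, and `display` are caller-supplied callables; their
    names and signatures are assumptions, not part of the original disclosure.
    """
    target = identify(video)                 # step 202: identify a target in the video
    query = dict(user_query, target=target)  # step 204: receive a query tied to that target
    result = search(video, query)            # step 206: generate a search result from the video
    display(result)                          # step 208: display the result to the user
    return result
```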
[0025] Any aspect of the method 200 may be user-extensible. For example, the target, the query, the search result, and any combination thereof may be user-extensible. The user may therefore define any aspect of the method 200 to suit his requirements for analysis. The feature of user-extensibility allows for this technology to be more robust and more flexible than the existing technology. Users may combine targets, queries, and search results in various combinations to achieve customized results.
[0026] Still referring to FIG. 2, at step 202, the target is identified by a computing device 120. The target is displayed from a video through a display of the computing device 120. The target may include one of a recognized object, a motion sequence, a state, and any combination thereof. The recognized object may be a person, a pet or a vehicle. As will be discussed later herein, a motion sequence may be a series of actions that are being targeted for identification. A state may be a condition or mode (such as the state of a flooded basement, an open window, or a machine when a belt has fallen off). Further information regarding target identification is provided in the U.S.
Patent Application Serial No. filed on February 9, 2009, titled "Systems and Methods for Video Monitoring," which is hereby incorporated by reference.
[0027] Also, at step 202, identifying the target from a video may include receiving a selection of a predefined object. For instance, preprogrammed icons depicting certain objects (such as a person, a pet or a vehicle) that have already been learned and/or otherwise identified by the software program may be shown to the user through a display of the computing device 120. Thus, the user may then select a predefined object (such as a person, a pet or a vehicle) by selecting the icon that best matches the target. Once a user selects an icon of the target, the user may drag and drop the icon onto another portion of the display of the computing device, such that the icon (sometimes referred to as a block) may be rendered on the display. Thus, the icon may become part of a rule (such as the rule 405 shown in FIG. 4). For instance, if the user selects people as the target, an icon of "Look for: People" (such as the icon 455 of FIG. 4) may be rendered on the display of the computing device. In further embodiments, one or more icons may be added such that the one or more icons may be rendered on the display via a user interface. Exemplary user interfaces include, but are not limited to, "Add" button(s), drop down menu(s), menu command(s), one or more radio button(s), and any combination thereof. One skilled in the art will recognize that any type of user interface may be used with this technology. Similarly, one or more icons may be removed from the display or modified as rendered on the display, through a user interface.
[0028] The technology allows for user-extensibility for defining targets. For instance, a user may "teach" the technology how to recognize new objects by assigning information (such as labels or tags) to clips of video that include the new objects. Thus, a software program may "learn" the differences between categories of pets, such as cats and dogs, or even categories of persons, such as adults, infants, men, and women. Alternatively, at step 202, identifying the target from a video may include recognizing an object based on a pattern. For instance, facial patterns (frowns, smiles, grimaces, smirks, and the like) of a person or a pet may be recognized.

[0029] Through such recognition based on a pattern, a category may be established. For instance, a category of various human smiles may be established through the learning process of the software. Likewise, a category of a variety of human frowns may be established by the software. Further, a behavior of a target may be recognized. Thus, the software may establish any type of behavior of a target, such as the behavior of a target when the target is resting or fidgeting. The software may be trained to recognize new or previously unknown objects. The software may be programmed to recognize new actions, new behaviors, new states, and/or any changes in actions, behaviors or states. The software may also be programmed to recognize metadata from video and provide the metadata to the user through the display of a computing device 120.
[0030] In the case where the target is a motion sequence, the motion sequence may be a series of actions that are being targeted for identification. One example of a motion sequence is the sequence of lifting a rock and tossing the rock through a window. Such a motion sequence may be preprogrammed as a target. However, as described earlier, targets may be user-extensible. Thus, the technology allows for users to extend the set of targets to include targets that were not previously recognized by the program. For instance, in some embodiments, targets may include previously unrecognized motion sequences, such as the motion sequence of kicking a door down. Also, targets may include visual targets, audio targets, and combined audio-visual targets. Thus, the software program may be taught to recognize a baby's face versus an adult female's face. The program may be taught to recognize a baby's voice versus an adult female's voice.
[0031] At step 204, a query related to the identified target is received via a user input to the computing device 120. The query may be stored on a computer readable storage medium (not shown). The query may include one or more user-defined rules. Rules may include source selection (such as video source selection), triggers, and responses. Rules are described in further detail in the U.S. Patent Application Serial No. filed on February 9, 2009, titled "Systems and Methods for Video
Monitoring," which is hereby incorporated by reference. [0032] The query may include an instruction to provide one or more clips of one or more videos based on a specific time period or time frame. One skilled in the art will recognize that the time period can be of any measurement, including but not limited to days, weeks, hours, minutes, seconds, and the like. For instance, the query may include an instruction to provide all video clips within the last 24 hours. Another example is the query may include an instruction to provide all video clips for the last 2 Thursdays. Alternatively, the query may include an instruction to provide all video clips regardless of a video timestamp. This is exemplified by a time duration field 760 showing "When: Anytime" in FIG. 7. Thus, a user may define or designate a time period that he is interested to view videos. Metadata from a video, including but not limited to time stamp and video properties relating to duration, may be extracted from the video. Such extracted metadata may then be used to determine whether a video or a clip of a video falls within a specific time period as defined in a query.
[0033] The query may include an instruction to provide one or more videos from one or more video sources. A user may define which video source(s) should be included in the query. An example is found in FIG. 7, where the user designated in a location field 730 that video from a camera in a living room should be the video source ("Camera: Living room"). In FIG. 7, a drop down menu is provided for the location field 730 so that a user may select which camera is included in the query. However, one skilled in the art can recognize that a user may define a video source through any type of user input to a computing device 120, and the technology is not limited to only drop down menus for user selection of video sources.
[0034] The query may comprise an instruction to provide a video clip regarding the identified target. The identified target may include one or more persons, vehicles or pets. The identified target may be a user-defined target. User-defined targets are discussed at length in the U.S. Patent Application Serial No. filed on
February 9, 2009, titled "Systems and Methods for Video Monitoring," which is hereby incorporated by reference. The query may include an instruction to provide a video clip showing an identified target within a region. For instance, a query may include an instruction to provide video clips that show people within a region designated by the user. The user may designate a region by drawing a box (such as a bounding box), a circle, or another shape around a region that can be viewed by a video source.
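The region test might, for example, reduce to a simple geometric containment check between a detected target's bounding box and the user-drawn region. The coordinate convention below is an assumption made for this sketch.

```python
def box_in_region(target_box, region_box):
    """Return True if a target's bounding box lies inside a user-drawn region.

    Boxes are (left, top, right, bottom) tuples in pixel coordinates; this
    representation is an assumption, not the disclosed data format.
    """
    t_left, t_top, t_right, t_bottom = target_box
    r_left, r_top, r_right, r_bottom = region_box
    return (t_left >= r_left and t_top >= r_top and
            t_right <= r_right and t_bottom <= r_bottom)

# Example: a person detected at (120, 80, 200, 300) inside a garden region.
garden = (100, 50, 400, 400)
assert box_in_region((120, 80, 200, 300), garden)
```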
[0035] At step 206, a search result is generated. As mentioned previously, the search result may be based on any type of data. The search result may be based on one or more videos captured by one or more video sources. The search result may include information related to the identified target. Generating the search result may include filtering the video based on the query. One skilled in the art will recognize that there is a multitude of ways to filter videos. For instance, filtering videos based on a query can be accomplished by using metadata that is associated with the videos being analyzed. As discussed previously, this technology may extract, identify, utilize and determine the metadata that is associated with videos. Due to the object recognition aspects and the sophisticated higher-level learning of this technology, the metadata may include metadata relating to identified targets, attributes regarding identified targets, timestamps of videos or clips of videos, source settings (such as video source location or camera location), recognized behaviors, patterns, states, motion sequences, user-defined regions as captured by videos, and any further information that may be garnered to execute a query. One skilled in the art will recognize that this list of metadata that can be determined by this technology is exemplary and non-exhaustive.
[0036] Still referring to step 206, generating the search result may include providing one or more video clips with a text description of the information related to the identified target. The text description of a given video clip may be all or part of a query, a rule, and/or metadata associated with the video clip. For instance, based on the object recognition aspects of this technology, the technology may recognize a user's pet dog. If the user's pet dog is seen moving in a designated region based on a video, then the generation of the search result may include providing the video clip of the dog in the region with the location of the video source. In FIG. 8, the text description of "Pet - Living Room Camera" 850 is given to a video clip that shows the user's pet moving in a region of the living room. The video clip may be represented with a thumbnail 860 of a frame where the identified target (pet) matched the executed search query.
[0037] The text description may include further information about the identified target, based on a query, a rule and/or metadata associated with the video clip. For instance, the thumbnail 860 of the video clip "Pet - Living Room Camera" 850 (as shown in FIG. 8) has further text that provides the name of the pet (Apollo) and the region that the user designated (couch). With object recognition and higher-level learning capabilities, the technology may be able to distinguish the pet Apollo from another pet in the user's household.
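One plausible way to compose such a text description from clip metadata is sketched below; the metadata keys are assumptions, not the actual schema used by the technology.

```python
def describe_clip(meta):
    """Compose a short text description for a clip from its metadata.

    The keys used here (target, camera, name, region) are assumed for
    illustration; actual key names would depend on the implementation.
    """
    title = f"{meta['target']} - {meta['camera']} Camera"
    details = [meta[key] for key in ("name", "region") if meta.get(key)]
    return f"{title} ({', '.join(details)})" if details else title

# Example modeled on the description shown in FIG. 8.
print(describe_clip({"target": "Pet", "camera": "Living Room",
                     "name": "Apollo", "region": "couch"}))
# -> "Pet - Living Room Camera (Apollo, couch)"
```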
[0038] Generating the search result may include providing a thumbnail of the video or video clip, which may include a bounding box of the identified target that matched an executed search query. In the previous example, the bounding box 870 of the identified target (a pet named Apollo) is shown to the user on the display of a computing device. Alternatively, generating the search result may show a frame where the identified target matched an executed search query (such as the frame 860 of the pet Apollo in FIG. 8). Generating a search result may include providing a timeline showing triggered events that occur within a specified time period, as shown in the video clip. Further discussion regarding timelines and triggered events is provided later.
[0039] At step 208, the search result is displayed to the user. The search result may be displayed to the user on a display of a computing device 120. The search result may be presented in any format or presentation. One type of format is displaying the search results in a list with thumbnails for each of the video clips that match the search query or criteria, as described earlier herein. Both FIGs. 7 and 8 show lists of search results. For instance, FIG. 7 shows 3 search results, with a thumbnail for each of the search results.
[0040] The method 200 may include steps that are not shown in FIG. 2. The method 200 may include the step of receiving a selection of at least one delivery option for the search result. A non-exhaustive and exemplary list of delivery options includes an electronic mail message delivery, a text message delivery, a multimedia message delivery, a forwarding of a web link delivery option, an option to upload the search result onto a website, and any combination thereof. The method 200 may include the step of delivering the search result based on the delivery option selected. The method 200 may also include the step of providing an alert for display on the display of the computing device 120. An exemplary alert is a pop-up alert 900 in FIG. 9, which shows a thumbnail of a frame from a video clip.

[0041] FIG. 3 is an exemplary system 300 for providing an analysis. The system 300 may include four modules, namely, a target identification module 310, an interface module 320, a search result module 330, and a display module 340. The system 300 can utilize any of the various exemplary methods described herein, including the method 200 (FIG. 2) described earlier herein. It will be appreciated by one skilled in the art that any of the modules shown in the exemplary system 300 can be combined, omitted, or modified, and still fall within the scope of various embodiments.
[0042] According to one exemplary embodiment, the target identification module 310 is configured for identifying a target from the video supplied to a computing device 120 (FIG. 1). The interface module 320 is in communication with the target identification module 310. The interface module 320 is configured for receiving a query related to the identified target via a user input to the computing device. The search result module 330 is in communication with the interface module 320. The search result module 330 is configured for generating a search result based on the video. The search result may include information related to the identified target. The display module 340 is in communication with the search result module. The display module 340 is configured to display the search result through the display of the computing device 120.
[0043] The search result module 330 is configured to filter the video based on the query. The search result module 330 may be configured to provide the video with a text description of the information related to the identified target. The information related to the identified target may include metadata associated with the clip of the video, or it may include all or part of the query. The search result module 330 is also configured to provide a thumbnail of the video clip, as described earlier herein.
[0044] The system 300 may comprise a processor (not shown) and a computer readable storage medium (not shown). The processor and/or the computer readable storage medium may act as one or more of the four modules (i.e., the target identification module 310, the interface module 320, the search result module 330, and the display module 340) of the system 300. It will be appreciated by one of ordinary skill that examples of computer readable storage medium may include discs, memory cards, servers and/or computer discs. Instructions may be retrieved and executed by the processor. Some examples of instructions include software, program code, and firmware. Instructions are generally operational when executed by the processor to direct the processor to operate in accord with embodiments of the invention. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of various embodiments.
[0045] Turning to FIG. 4, an exemplary screenshot of a rule editor 400 as depicted on a display of a computing device 120 (FIG. 1) is shown. The rule editor 400 is a feature of the technology that allows the user to define one or more aspects of a given rule or query 405. In FIG. 4, a rule name for a given rule (such as a rule name of "People in the garden") is provided in a name field 410. Preferably, the rule editor 400 allows the user to provide names to the rule 405 that the user defines or otherwise composes.
[0046] Still referring to FIG. 4, a plurality of icons 420 may be provided to the user. An icon of a video source 440 may be provided. The video source 440 may be displayed with one or more settings, such as the location of the camera ("Video source: Side camera" in FIG. 4). A user may click on the video source icon 440, drag it across to another portion of the display, and drop it in an area of the display. The dragged and dropped icon then becomes a selected side camera video source icon 445 ("Video source: Side camera"), which is shown in FIG. 4 as being located near the center of the display. Alternatively, a user may click on the video source icon 440 until a corresponding icon of the selected video source 445 (with a setting, such as the location of the selected video source) is depicted in the rule 405. Alternatively, the user may be provided with one or more video sources 440, and the user can select from those video sources 440. A list of possible video sources (not shown) may appear on the display. Preferably, the list of possible video sources (not shown) may appear on a right portion of the display. Alternatively, as described previously herein, the user may add, remove, or modify one or more icons (such as the video source icon 440) from the display through one or more user interfaces, such as an "Add" button, drop down menu(s), menu command(s), one or more radio button(s), and any combination thereof. Such icons include but are not limited to icons representing triggers, targets, and responses.
[0047] Once a video source 440 is selected and displayed as part of the rule 405 (such as the selected side camera video source icon 445), the user may define the target that is to be identified by a computing device. Preferably, the user may select the "Look for" icon 450 on a left portion of the display of the computing device. Then, a selection of preprogrammed targets is provided to the user. The user may select one target (such as "Look for: People" icon 455 as shown in the exemplary rule 405 of FIG. 4).
[0048] The user may select one or more triggers. The user may select a trigger via a user input to the computing device 120. A plurality of trigger icons 460, 465 may be provided to the user for selection. Trigger icons depicted in FIG. 4 are the "Where" icon 460 and the "When" icon 465. If the "Where" icon 460 is selected, then the "Look Where" pane 430 on the right side of the display may be provided to the user. The "Look Where" pane 430 allows the user to define the boundaries of a location or region in which the user wants movements to be monitored. For instance, the user may define the boundaries of a location by drawing a box, a circle, or any other shape. In FIG. 4, the user has drawn a bounding box around an area that is on the left-hand side of a garbage can. The bounding box surrounds an identified target. The bounding box may be used to determine whether a target has entered a region, or it may serve as a visual cue to the user as to where the target is in the video. Regions may be named by the user. Likewise, queries or rules may be named by the user. Regions, queries and/or rules may be saved by the user for later use. Rules may be processed in real time.
[0049] The bounding box may track an identified target. Preferably, the bounding box may track an identified target that has been identified as a result of an application of a rule. The bounding box may resize based on the dimensions of the identified target. The bounding box may move such that it tracks the identified target as the identified target moves in a video. For instance, a clip of a video may be played back, and during playback, the bounding box may surround and/or resize to the dimensions of the identified target. If the identified target moves or otherwise makes an action that causes the dimensions of the identified target to change, the bounding box may resize such that it may surround the identified target while the identified target is shown in the video, regardless of the changing dimensions of the identified target. FIG. 7 shows an exemplary bounding box 775. One skilled in the art will appreciate that one or more bounding boxes may be shown to the user to assist in tracking one or more identified targets while a video is played.
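Conceptually, the tracking bounding box can be thought of as re-detecting the target in each frame and redrawing a box with the new position and dimensions. The sketch below assumes a `detect` callable that returns a box or None; it is illustrative only and not the disclosed implementation.

```python
def track_bounding_box(frames, detect):
    """Return one bounding box (or None) per frame for the identified target.

    `detect` is assumed to return (left, top, right, bottom) for the target in
    a frame, or None when the target is not visible; the resulting box
    therefore moves and resizes with the target from frame to frame.
    """
    return [detect(frame) for frame in frames]

# Usage sketch: during playback, draw each non-None box over its frame with
# whatever overlay routine the player provides (a placeholder here).
# for frame, box in zip(frames, track_bounding_box(frames, detect)):
#     if box is not None:
#         draw_rectangle(frame, box)
```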
[0050] Also, the "Look Where" pane 430 may allow the user to select a radio button that defines the location attribute of the identified target as a trigger. The user may select the option that movement "Anywhere" is a trigger. The user may select the option that "inside" a designated region (such as "the garden") is a trigger. Similarly, the user may select "outside" a designated region. The user may select an option that movement that is "Coming in through a door" is a trigger. The user may select an option that movement that is "Coming out through a door" is a trigger. The user may select an option that movement that is "Walking on part of the ground" (not shown) is a trigger. In other words, the technology may recognize when an object is walking on part of the ground. The technology may recognize movement and/or object in three-dimensional space, even when the movement and/or object is shown on the video in two dimensions. Further, the user may select an option of "crossing a boundary" is a selected trigger.
[0051] If the "When" icon 465 is selected, then the "Look When" pane (not shown) on the right side of the display is provided to the user. The "Look When" pane may allow for the user to define the boundaries of a time period that the user wants movements to be monitored. Movement may be monitored when motion is visible for more than a given number of seconds. Alternatively, movement may be monitored for when motion is visible for less than a given number of seconds. Alternatively, movement may be monitored within a given range of seconds. In other words, a specific time duration may be selected by a user. One skilled in the art that any measurement of time (including, but not limited to, weeks, days, hours, minutes, or seconds) can be utilized. Also, one skilled in the art may appreciate that the user selection can be through any means (including, but not limited to, dropping and dragging icons, checkmarks, selection highlights, radio buttons, text input, and the like).
[0052] Still referring to FIG. 4, once a target has been identified and a trigger has been selected, a response may be provided. One or more of a plurality of response icons (such as Record icon 470, Notify icon 472, Report icon 474, and Advanced icon 476) may be selected by the user. As shown in the example provided in FIG. 4, if the Record icon 470 is selected by the user, then "If seen: Record to video" 490 appears on the display of the computing device 120. If read in its entirety, the rule 405 of FIG. 4 entitled "People in the garden" states that using the side camera as a video source, look for people that are inside the garden. If the rule is met, then the response is: "if seen, record to video" (490 of FIG. 4).
[0053] If the Notify icon 472 is selected, then a notification may be sent to the computing device 120 of the user. A user may select the response of "If seen: Send email" (not shown) as part of the notification. The user may drag and drop a copy of the Notify icon 472 and then connect the Notify icon 472 to the rule 405.
[0054] As described earlier, a notification may also be a text message sent to a cell phone, a multimedia message sent to a cell phone, or an automated phone call. If the Report icon 474 is selected, then a generation of a report may be the response. If the Advanced icon 476 is selected, the computer may play a sound to alert the user. Alternatively, the computer may store the video onto a database or other storage means associated with the computing device 120 or upload a video directly to a user-designated URL. The computer may interact with external application interfaces, or it may display custom text and/or graphics.
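For illustration, a rule like "People in the garden" can be modeled as a small object tying together a source, a target, a trigger, and one or more responses. The class, its field names, and the placeholder `record_clip` response below are assumptions made for this sketch, not the disclosed rule format.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative rule object loosely mirroring the blocks in the rule editor of FIG. 4.
@dataclass
class Rule:
    name: str
    source: str                                   # e.g., "Side camera"
    target: str                                   # e.g., "People"
    trigger: Callable[[dict], bool]               # e.g., "inside the garden"
    responses: List[Callable[[dict], None]] = field(default_factory=list)

def apply_rule(rule, event):
    """Run the rule's responses when an event from its source matches."""
    if (event["source"] == rule.source and event["target"] == rule.target
            and rule.trigger(event)):
        for respond in rule.responses:
            respond(event)

def record_clip(event):
    print("recording clip at", event["time"])  # stand-in for writing a clip to storage

# "People in the garden": if seen, record to video.
people_in_garden = Rule("People in the garden", "Side camera", "People",
                        trigger=lambda e: e.get("region") == "garden",
                        responses=[record_clip])
apply_rule(people_in_garden, {"source": "Side camera", "target": "People",
                              "region": "garden", "time": "10:30"})
```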
[0055] FIG. 5 shows a screenshot 500 of a display of a computing device 120, where a rule 505 is known as a complex rule. The user may select one or more target(s), one or more trigger(s), and any combination thereof, and may utilize Boolean language (such as "and" and "or") in association with the selected target(s) and/or trigger(s). For example, FIG. 5 shows Boolean language being used with targets. When the user selects the "Look for" icon 450, the user may be presented with a selection list of possible targets 510, which include People, Pets, Vehicles, Unknown Objects and All Objects. The selection list of possible targets 510 may be a drop down menu. The user may then select the targets he or she wishes to select. In the example provided in FIG. 5, the user selected targets in such a way that the program will identify targets that are either People ("Look for: People") or Pets ("Look for: Pets"), and the program will also look for targets that are Vehicles ("Look for: Vehicles"). The selection list of possible targets 510 may include an "Add object" or "Add target" option, which the user may select in order to "train" the technology to recognize an object or a target that was previously unknown or not identified by the technology. The user may select a Connector icon 480 to connect one or more icons, in order to determine the logic flow of the rule 505 and/or the logic flow between icons that have been selected.
[0056] Another embodiment is where Boolean language is used to apply to multiple triggers for a particular target. For instance, Boolean language may be applied, such that the user has instructed the technology to locate a person "in the garden OR (on the sidewalk AND moving left to right)." With this type of instruction, the technology will locate either persons in the garden or persons that are on the sidewalk who are also moving left to right. As mentioned above, one skilled in the art will recognize that the user may include Boolean language that applies to both one or more target(s) and one or more trigger(s).
[0057] A further embodiment is a rule 505 that includes Boolean language that provides a sequence (such as "AND THEN"). For instance, a user may select two or more triggers to occur in a sequence (e.g., "Trigger A" happens AND THEN "Trigger B" happens). Further, one skilled in the art will understand that a rule 505 may include one or more nested rules, as well as one or more rules in a sequence, in a series, or in parallel. Rules may be ordered in a tree structure with multiple branches, with one or more responses coupled to the rules.
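The Boolean and sequential combinations described above could, for instance, be expressed as small composable predicates. The helper names (`any_of`, `all_of`, `in_sequence`) are invented for this sketch and are not part of the original disclosure.

```python
def any_of(*conditions):
    """OR: true when at least one condition holds for the event."""
    return lambda event: any(cond(event) for cond in conditions)

def all_of(*conditions):
    """AND: true when every condition holds for the event."""
    return lambda event: all(cond(event) for cond in conditions)

def in_sequence(first, then):
    """AND THEN: true only when `first` has already fired and `then` now fires."""
    state = {"first_seen": False}
    def check(event):
        if not state["first_seen"]:
            state["first_seen"] = first(event)
            return False
        return then(event)
    return check

# "in the garden OR (on the sidewalk AND moving left to right)"
in_garden = lambda e: e.get("region") == "garden"
on_sidewalk = lambda e: e.get("region") == "sidewalk"
moving_right = lambda e: e.get("direction") == "left_to_right"
trigger = any_of(in_garden, all_of(on_sidewalk, moving_right))
```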
[0058] As shown in FIG. 5, the user may select the targets by placing checkmarks next to the targets he wishes to designate in the selection list of possible targets 510. However, one skilled in the art can appreciate that the selection of targets can be accomplished by any means of selection, and the selection of targets is not limited to highlighting or placing checkmarks next to selected targets.
[0059] Now referring to FIG. 6, a monitor view 600 of the one or more video sources 130 (FIG. 1) is provided. The monitor view 600 provides an overall glance of one or more video sources 130, in relation to certain timelines of triggered events and rules established by users. Preferably, the monitor view 600 is a live view of a selected camera. The monitor view 600 may provide a live thumbnail of a camera view. The timelines of triggered events may be representations of metadata that are identified and/or extracted from the video by the software program.
[0060] In the example provided in FIG. 6, the monitor view 600 includes thumbnail video views of the Backyard 610, Front 620, and Office 630. Further, as depicted in FIG. 6, the thumbnail video view of the Backyard 610 is selected and highlighted on the left side of the display. On the right-hand side of the display, a larger view 640 of the video that is presented in the thumbnail video view of the Backyard 610 may be provided to the user, along with a time and date stamp 650. Also, the monitor view 600 may provide rules and associated timelines. For instance, the video source 130 located in the Backyard 610 has two rule applications, namely, "People - Walking on the lawn" 660 and "Pets - In the Pool" 670. A first timeline 665 is associated with the rule application "People - Walking on the lawn" 660. Similarly, a second timeline 675 is associated with the rule application "Pets - In the Pool" 670. A rule application may comprise a set of triggered events that meet requirements of a rule, such as "People in the garden" 405 (FIG. 4). The triggered events are identified in part through the use of metadata of the video that is recognized, extracted or otherwise identified by the program.
[0061] The first timeline 665 is from 8 am to 4 pm. The first timeline 665 shows five vertical lines. Each vertical line may represent the amount of time in which movement was detected according to the parameters of the rule application "People - Walking on the lawn" 660. In other words, there were five times during the time period of 8 am to 4 pm in which movement was detected that is likely to be people walking on the lawn. The second timeline 675 is also from 8 am to 4 pm. The second timeline 675 shows only one vertical line, which means that in one time period (around 10:30 am), movement was detected according to the parameters of the rule application "Pets - In the Pool" 670. According to FIG. 6, around 10:30 am, movement was detected that is likely to be one or more pets being in the pool.
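As a rough illustration, each vertical line on such a timeline corresponds to a detection interval that falls inside the displayed window. The event representation below (pairs of begin/finish datetimes) is an assumption made for this sketch.

```python
from datetime import datetime

def timeline_marks(events, start, end):
    """Return the event intervals that fall inside a timeline window.

    Each returned interval corresponds to one vertical mark on a timeline such
    as the 8 am-4 pm timelines in FIG. 6; `events` is an assumed list of
    (begin, finish) datetime pairs produced by a rule application.
    """
    return [(b, f) for (b, f) in events if b >= start and f <= end]

# Example: one detection around 10:30 am falls inside an 8 am-4 pm window.
day = datetime(2009, 2, 9)
window = (day.replace(hour=8), day.replace(hour=16))
events = [(day.replace(hour=10, minute=28), day.replace(hour=10, minute=31))]
print(len(timeline_marks(events, *window)))  # -> 1
```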
[0062] FIG. 7 shows a screenshot 700 of a display of a computing device 120 following the execution of a quick search, according to one exemplary embodiment. The quick search option 710 is one of two options for searching in FIG. 7. The second option is a rule search option 720, which will be discussed in greater detail in FIG. 8. A quick search may allow for a user to quickly search for videos or clips of videos that meet certain criteria. The criteria may include information provided in a location field 730, a target field 740, and a duration field 750. Searches may be done immediately upon receipt of the criteria. Searches may be done on live video and/or archived video.

[0063] In FIG. 7, the user has selected "Living room" for the location of the camera (or video source) in the location field 730, "people" for identified targets to look for in the target field 740, and "anytime" as the criterion for the timestamp of the video to be searched in the duration field 750. In other words, with this set of criteria, the user has asked for a quick search of videos that have been captured by the living room camera. The exemplary quick search in FIG. 7 is to identify all the triggered events in which people were in the living room at any time. By doing so, the quick search may narrow the video clips from a huge set to a much smaller subset, where the subset conforms to the user's query or search parameters.
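A quick search of this kind might simply combine the three criteria fields into one filter over the clip archive, as in the following sketch; the dictionary keys and the `matches_time` helper are assumptions made for illustration.

```python
def quick_search(clips, camera, target, time_frame, matches_time):
    """Filter archived clips down to those matching the three quick-search fields.

    `clips` is an assumed list of dicts with "camera", "targets", and "start"
    keys; `matches_time` decides whether a timestamp satisfies `time_frame`.
    """
    return [c for c in clips
            if c["camera"] == camera
            and target in c["targets"]
            and matches_time(c["start"], time_frame)]

# e.g., quick_search(archive, "Living room", "People", "anytime",
#                    matches_time=lambda ts, tf: tf == "anytime")
```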
[0064] Search results may filter existing video to display to the user only the relevant content. In the case of quick searches, the relevant content may be that content which matches or fits the criteria selected by the user. In the case of rule searches (which will be discussed at length in conjunction with FIG. 8), the relevant content may be that content which matches or fits the rule defined and selected by the user. The technology may use object recognition and metadata associated with video clips in order to conduct a search and generate a search result.
[0065] In FIG. 7, the quick search has provided a search result of only three video clips. The three video clips may be listed in chronological order, with a thumbnail of a frame showing the identified target and a bounding box. Each of the three video clips includes a text description of "People - Living room." The text description may have been generated from information related to the identified objects and/or metadata associated with the video clips.
[0066] In FIG. 7, one of the three video clips 760 is highlighted and selected by the user. Once a video clip is selected, a larger image 765 of the video clip 760 is provided to the user on the display of the computing device 120. The larger image 765 may include a bounding box 775 of the identified target that matched the executed search criteria or rule. Videos may start playing at the frame where the identified target matched the executed search. The larger image 765 may also include a title 770, such as "Living room," to indicate the setting or location of the camera or video source.
[0067] Controls for videos 780 may be provided to the user. The user may be able to play back, rewind, fast forward, or skip through a video using the appropriate video controls 780. The user may also select the speed at which the user wishes to view the video using a playback speed control 785. Also, a timeline control 790 that shows all the instances of a current search over a given time period may be displayed to the user. In FIG. 7, the exemplary timeline control 790 is a timeline that stretches from 8 am to 6 pm, and it shows each instance of a search result that matches the quick search criteria 730, 740, and 750. When a user highlights or otherwise selects a video clip from the results of a quick search, a corresponding vertical line that represents the time interval of the video clip in relation to the timeline may be also highlighted.
[0068] Turning to FIG. 8, a screenshot 800 of a display of a computing device 120 following the execution of a rule search is shown. The rule search option 720 has been selected by the user in the example of FIG. 8. A rule search is a search based on a user-defined rule. A rule may include a target and a trigger. By virtue of the fact that targets and triggers can be defined by users, rules and portions of rules are user-extensible. Further information regarding rules may be found in the U.S. Patent
Application Serial No. filed on February 9, 2009, titled "Systems and
Methods for Video Monitoring," which is hereby incorporated by reference.
[0069] A rule may be saved by a user. In FIG. 8, three rules have been saved by the user. Those rules are called "Approaching the door," "Climbing over the fence into the garden" and "Loitering by the fence." Saved rules may be displayed in a rule list 810. One of the saved rules may be selected, along with a definition of a time frame through the duration field 750, to execute a rule search. In the example provided in FIG. 8, the rule "Climbing over the fence into the garden" has been selected by the user and the time frame is "anytime." Thus, the exemplary rule search in FIG. 8 is for the technology to search any videos that show an object climbing over the fence into the garden at any time.
[0070] As earlier described, rules may be modified or edited by a user. A user may edit a rule by selecting a rule and hitting the "Edit" button 820. Thus, a user may change any portion of a rule using the "Edit" button. For instance, a user may select a rule and then the user may be presented with the rule as it currently stands in the rule editor 400 (FIG. 4). The user may edit the rule by changing the flow logic of a rule or by modifying the targets, triggers, and/or responses of the existing rule. A new rule may be created as well, using the rule editor 400, and then the user may save the rule, thereby adding the new rule to the rule list 810.
[0071] Rules may be uploaded and downloaded by a user to the Internet, such that rules can be shared amongst users of this technology. For example, a first user may create a sprinkler rule to turn on the sprinkler system when a person jumps a fence and enters a region. The first user may then upload his sprinkler rule onto the Internet, such that a second user can download the first user's sprinkler rule. The second user may then use the first user's sprinkler rule in its entirety, or the second user may modify the first user's sprinkler rule to add that if a pet jumps the fence and enters the region, then the sprinkler will also activate. The second user may then upload the modified sprinkler rule onto the Internet, such that the first user and any third party may download the modified sprinkler rule.
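Rule sharing of the kind described could, for example, rely on serializing a rule to a portable format such as JSON before upload and deserializing (and optionally extending) it after download. The dictionary layout below is an assumption for illustration, not the disclosed rule format.

```python
import json

def export_rule(rule_dict):
    """Serialize a rule to JSON text so it can be uploaded and shared."""
    return json.dumps(rule_dict, indent=2)

def import_rule(json_text, extra_targets=()):
    """Load a shared rule and optionally extend its targets (e.g., add pets)."""
    rule = json.loads(json_text)
    rule["targets"] = list(rule.get("targets", [])) + list(extra_targets)
    return rule

# Sketch of the sprinkler example: the first user shares the rule, and the
# second user extends it so that pets also trigger the sprinkler.
sprinkler = {"name": "Sprinkler", "targets": ["person"],
             "trigger": "jumps fence into region", "response": "turn on sprinkler"}
shared = export_rule(sprinkler)
modified = import_rule(shared, extra_targets=["pet"])
```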
[0072] Also, rules may be defined for archival searches. In other words, videos may be archived using a database or an optional video storage module (not shown) in the system 300 (FIG. 3). Rules may be selected for execution and application on those archived videos. Based on historical learning, after archived videos have been recorded, a user may also execute a new rule search on the archived videos. The user may define a new rule, the user may use another user's rules that have been shared, or the user may download a new rule from the Internet. The optional video storage module (not shown) in the system 300 may be referenced to perform a subsequent analysis or application of rules.
[0073] Turning now to FIG. 9, as previously discussed, the technology includes a pop-up alert 900. The pop-up alert 900 may be for display on the display of the computing device 120. The pop-up alert 900 includes a thumbnail of a frame from a video clip. In the exemplary pop-up alert 900, text may be presented advising the user that a person was seen entering the garden via the side camera, based on object recognition, historical learning, and metadata associated with the video clip. The pop-up alert 900 may be a result of a rule application where the user has requested the system to inform the user when persons are seen entering the garden via the side camera. The pop-up alert 900 may include an invitation for the user to view the relevant video clip provided by the side camera. This pop-up alert 900 may also include a timestamp, which may also be provided by metadata associated with the video clip.
[0074] The technology mentioned herein is not limited to video. External data sources, such as web-based data sources, can be utilized in the system 100 of FIG. 1. Such external data sources may be used either in conjunction with or in place of the one or more video sources 130 in the system 100 of FIG. 1. For instance, the technology encompasses embodiments that include data from the Internet, such as a news feed, and a rule may be defined with such an external data source as its source. The system 100 of FIG. 1 allows for such a rule and its response to be defined by a user and then followed by the system 100. Preferably, a rule includes a target and a trigger. However, in some embodiments, a rule may include a target, a trigger, a response, and any combination thereof.
[0075] While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.

Claims

1. A method for providing an analysis, the method comprising: identifying a target by a computing device, the target being displayed from a video through a display of the computing device; receiving a query related to the identified target via a user input to the computing device; generating a search result based on the video, the search result comprising information related to the identified target; and displaying the search result through the display of the computing device.
2. The method of claim 1, wherein generating the search result based on the video further comprises filtering the video based on the query.
3. The method of claim 1, wherein generating the search result further comprises providing a clip of the video with a text description of the information related to the identified target.
4. The method of claim 3, wherein the information related to the identified target includes metadata associated with the clip of the video.
5. The method of claim 3, wherein generating the search result further comprises providing a thumbnail of the clip of the video.
6. The method of claim 5, wherein the thumbnail includes a bounding box surrounding the identified target.
7. The method of claim 5, wherein the thumbnail includes a frame of the clip of the video, the frame being where the identified target matches the query.
8. The method of claim 1, wherein the query further comprises a user-defined rule.
9. The method of claim 1, wherein at least one of the query and the search result is stored on a computer readable storage medium.
10. The method of claim 3, wherein the query comprises an instruction to provide the clip of the video based on a specified time period.
11. The method of claim 3, wherein the query comprises an instruction to provide the clip of the video from a video source.
12. The method of claim 11, wherein the video source comprises one of an IP camera, a web camera, a security camera, a video camera, a video recorder, and any combination thereof.
13. The method of claim 3, wherein the query comprises an instruction to provide the clip of the video regarding the identified target, the identified target comprising a person, a vehicle or a pet.
14. The method of claim 3, wherein the query comprises an instruction to provide the clip of the video showing an identified target within a region.
15. The method of claim 1, wherein the target comprises one of a recognized object, a motion sequence, a state, and any combination thereof.
16. The method of claim 1, wherein identifying the target from the video further comprises receiving a selection of a predefined object.
17. The method of claim 1, wherein identifying the target from the video further comprises recognizing an object based on a pattern.
18. The method of claim 17, wherein the recognized object is at least one of a person, a pet and a vehicle.
19. The method of claim 1, wherein the video comprises one of a video feed, a video scene, a captured video, a video clip, a video recording, and any combination thereof.
20. The method of claim 1, further comprising receiving a selection of at least one delivery option for the search result.
21. The method of claim 20, wherein the delivery option comprises an electronic mail message delivery, a text message delivery, a multimedia message delivery, a forwarding of a web link delivery option, an option to upload the search result onto a website, and any combination thereof.
22. The method of claim 20, further comprising delivering the search result based on the delivery option selected.
23. The method of claim 3, wherein generating the search result further comprises providing a timeline showing triggered events that occur within a specified time period, as shown in the clip of the video.
24. The method of claim 3, wherein displaying the search result further comprises providing a playback of the clip of the video.
25. The method of claim 3, further comprising providing an alert for display on the display of the computing device.
26. A system for providing an analysis, the system comprising: a target identification module configured for identifying a target from a video supplied to a computing device; an interface module in communication with the target identification module, the interface module configured for receiving a query related to the identified target via a user input to the computing device; a search result module in communication with the interface module, the search result module configured for generating a search result based on the video, the search result comprising information related to the identified target; and a display module in communication with the search result module, the display module configured for displaying the search result through the display of the computing device.
27. The system of claim 26, wherein the search result module is configured to filter the video based on the query.
28. The system of claim 26, wherein the search result module is configured to provide the clip of the video with a text description of the information related to the identified target.
29. The system of claim 28, wherein the information related to the identified target includes metadata associated with the clip of the video.
30. The system of claim 28, wherein the search result module is configured to provide a thumbnail of the clip of the video.
31. A system for generating a search result based on an analysis, the system comprising: a processor; a computer readable storage medium having instructions for execution by the processor which causes the processor to generate a search result; wherein the processor is coupled to the computer readable storage medium, the processor executing the instructions on the computer readable storage medium to: identify a target from a video supplied to a computing device; receive a query related to the identified target; and generate the search result based on the video, the search result comprising information related to the identified target.
32. The system of claim 31, further comprising a display for displaying the search result.
33. The system of claim 31, wherein the computer readable storage medium further includes the instruction to provide a clip of the video with a text description of the information related to the identified target.
PCT/US2009/000841 2009-02-09 2009-02-09 Systems and methods for video analysis WO2010090622A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2009/000841 WO2010090622A1 (en) 2009-02-09 2009-02-09 Systems and methods for video analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2009/000841 WO2010090622A1 (en) 2009-02-09 2009-02-09 Systems and methods for video analysis

Publications (1)

Publication Number Publication Date
WO2010090622A1 true WO2010090622A1 (en) 2010-08-12

Family

ID=42542311

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/000841 WO2010090622A1 (en) 2009-02-09 2009-02-09 Systems and methods for video analysis

Country Status (1)

Country Link
WO (1) WO2010090622A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115243101A (en) * 2022-06-20 2022-10-25 上海众源网络有限公司 Video dynamic and static rate identification method and device, electronic equipment and storage medium


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6774917B1 (en) * 1999-03-11 2004-08-10 Fuji Xerox Co., Ltd. Methods and apparatuses for interactive similarity searching, retrieval, and browsing of video
US20070033170A1 (en) * 2000-07-24 2007-02-08 Sanghoon Sull Method For Searching For Relevant Multimedia Content
US20030028889A1 (en) * 2001-08-03 2003-02-06 Mccoskey John S. Video and digital multimedia aggregator
WO2004043029A2 (en) * 2002-11-08 2004-05-21 Aliope Limited Multimedia management
WO2006042142A2 (en) * 2004-10-07 2006-04-20 Bernard Widrow Cognitive memory and auto-associative neural network based pattern recognition and searching
WO2007053627A1 (en) * 2005-10-31 2007-05-10 Microsoft Corporation Media sharing and authoring on the web
US20070255755A1 (en) * 2006-05-01 2007-11-01 Yahoo! Inc. Video search engine using joint categorization of video clips and queries based on multiple modalities

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"International Multimedia Conference archive, Proceedings of the 13th annual ACM international conference on Multimedia [online], 2005", article HUA ET AL.: "Personal Media Sharing and Authoring on the Web.", pages: 375 - 378 *
"Proceedings of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques [online], 2002", article CASARES ET AL.: "Simplifying Video Editing Using Metadata.' Designing Interactive Systems archive", pages: 157 - 166 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115243101A (en) * 2022-06-20 2022-10-25 上海众源网络有限公司 Video dynamic and static rate identification method and device, electronic equipment and storage medium
CN115243101B (en) * 2022-06-20 2024-04-12 上海众源网络有限公司 Video dynamic and static ratio identification method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20100205203A1 (en) Systems and methods for video analysis
US20100201815A1 (en) Systems and methods for video monitoring
US11656748B2 (en) Machine learning in video classification with playback highlighting
US9588640B1 (en) User interface for video summaries
AU2015222869B2 (en) System and method for performing spatio-temporal analysis of sporting events
EP0719046B1 (en) Method and apparatus for video data management
US10299017B2 (en) Video searching for filtered and tagged motion
US9805567B2 (en) Temporal video streaming and summaries
US10552482B2 (en) Electronic system and method for marking highlights in a multimedia file and manipulating the multimedia file using the highlights
US10192588B2 (en) Method, device, and computer-readable medium for tagging an object in a video
WO2000010075A1 (en) Multi-perspective viewer for content-based interactivity
US20170076156A1 (en) Automatically determining camera location and determining type of scene
US11874871B2 (en) Detecting content in a real-time video stream recorded by a detection unit
US20210279470A1 (en) Detecting content in a real-time video stream using machine-learning classifiers
WO2017046704A1 (en) User interface for video summaries
WO2010090621A1 (en) Systems and methods for video monitoring
WO2010090622A1 (en) Systems and methods for video analysis
US11972099B2 (en) Machine learning in video classification with playback highlighting
WO2018201195A1 (en) Devices, systems and methodologies configured to enable generation, capture, processing, and/or management of digital media data
Timothy et al. Show me where the action is!
CN117743634A (en) Object retrieval method, system and equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09839784

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09839784

Country of ref document: EP

Kind code of ref document: A1