WO2017046704A1 - User interface for video summaries - Google Patents

User interface for video summaries

Info

Publication number
WO2017046704A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
events
camera
motion
time
Prior art date
Application number
PCT/IB2016/055456
Other languages
French (fr)
Inventor
Vincent Borel
Aaron Standridge
Fabian Nater
Helmut Grabner
Original Assignee
Logitech Europe S.A.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/853,965 external-priority patent/US9313556B1/en
Priority claimed from US14/853,980 external-priority patent/US20170076156A1/en
Priority claimed from US14/853,943 external-priority patent/US9805567B2/en
Priority claimed from US14/853,989 external-priority patent/US10299017B2/en
Application filed by Logitech Europe S.A. filed Critical Logitech Europe S.A.
Priority to CN201680066486.6A priority Critical patent/CN108351965B/en
Priority to DE112016004160.8T priority patent/DE112016004160T5/en
Publication of WO2017046704A1 publication Critical patent/WO2017046704A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations

Definitions

  • the invention generally relates to improvements in methods of automatic video editing, and more specifically to methods used in automatically creating summaries based on webcam video content, as determined by image analysis.
  • Devices such as video cameras and microphones are often used for monitoring an area or a room.
  • Existing video editing and monitoring systems typically record events when motion is detected, and provide alerts to a user over the Internet. The user can then view just the stored portions of the monitored area when motion was detected.
  • a summary can, for example, provide a series of still images from each video, to give the user a sense of whether the motion is worth viewing. For example, the user can see if a person is in the scene, or if the motion appears to have been a drape moving, a bird, etc.
  • Magisto Pub. No. 20150015735 describes capturing images, as opposed to editing, based on various factors, and detecting important objects and deciding whether to take a video or snapshot based on importance (e.g., whether someone is smiling). BriefCam has patents that describe detecting an amount of activity, or objects, moving in an image, and overlaying different object movements on the same image, as a mosaic. See, e.g., Pub. 2009-0219300 (refers to different sampling rates on the image acquisition side) and Pub. 2010-0092037 (refers to "adaptive fast-forward").
  • Pub. No. 20150189402 describes creating a video summary of just detected important events in a video, such as shots in a soccer match. See also Pub. No. 20050160457, which describes detecting baseball hits visually and from excited announcer sound.
  • Pub. No. 20100315497 is an example of systems capturing the images based on face recognition, with a target face profile.
  • Object Video Pub. No. 20070002141 describes a video-based human verification system that processes video to verify a human presence, a non-human presence, and/or motion. See also Wells Fargo Alarm Services Pat. No.
  • Examples include vehicles, animals, plant growth (e.g., a system that detects when it is time to trim hedges), falling objects (e.g., a system that detects when a recyclable can is dropped into a garbage chute), and microscopic entities (e.g., a system that detects when a microbe has permeated a cell wall)."
  • Pub. No. 20120308077 describes determining a location of an image by comparing it to images from tagged locations on a social networking site.
  • Pub. No. 201 0285842 describes determining a location for a vehicle navigation system by using landmark recognition, such as a sign, or a bridge, tunnel, tower, pole, building, or other structure.
  • Object Video Pub. No. 2008-0100704 describes object recognition for a variety of purposes. It describes detecting certain types of movement (climbing fence, move in wrong direction), monitoring assets (e.g., for removal from a museum, or, for example: detecting if a single person takes a suspiciously large number of a given item in a retail store), detecting if a person slips and falls, detecting if a vehicle parks in a no parking area, etc.
  • Pub. No. 2005-0168574 describes "passback" [e.g., entering through airport exit] detection. There is automatic learning a normal direction of motion in the video monitored area, which may be learned as a function of time, and be different for different time periods. "The analysis system 3 may then automatically change the passback direction based on the time of day, the day of the week, and/or relative time (e.g., beginning of a sporting event, and ending of sporting event). The learned passback directions and times may be displayed for the user, who may verify and/or modify them.”
  • Logitech Pat. 6995794 describes image processing split between a camera and host (color processing and scaling moved to the host).
  • Intel Pat. 6,803,945 describes motion detection processing in a webcam to upload only "interesting" pictures, in particular a threshold amount of motion (threshold number of pixels changing).
  • Yahoo! Pub. No. 20140355907 is an example of examining image and video content to identify features to tag for subsequent searching.
  • the objects recognized include faces (facial recognition), facial features (smile, frown, etc.), other objects (e.g., cars, bicycles, a group of individuals), and scenes (beach, mountain). See paragraphs 0067-0076. See also Disney Enterprises Pub. No. 20100082585, paragraph 0034.
  • a remote video camera intermittently transmits video clips, or video events, where motion is detected to a remote server.
  • the remote server provides video summaries to an application on a user device, such as a smartphone.
  • the User Interface provides a live stream from the webcam, with markers on the side indicating the stored, detected important events (such as by using a series of bubbles indicating how long ago an event occurred).
  • the indicators are marked to indicate the relative importance, such as with color coding.
  • the time-lapse summary is displayed, along with a time-of-day indication.
  • the user can select to have a time-lapse display of all the events in sequence, using a more condensed time lapse, with less important events having less time or being left out.
  • the UI upon the application being launched, provides a video summary of content since the last launch of the application.
  • the user can scroll through the video at a hyper-lapse speed, and then select a portion for a normal time lapse, or normal time view.
  • a video camera selectively streams to a remote server. Still images or short video events are intermittently transmitted when there is no significant motion detected. When significant motion is detected, video is streamed to the remote server.
  • the images and video can be higher resolution than the bandwidth used, by locally buffering the images and video, and transmitting it at a lower frame rate that extends to when there is no live streaming. This provides a time-delayed stream, but with more resolution at lower bandwidth.
  • Embodiments of the present invention are directed to automatically editing videos from a remote camera using artificial intelligence to focus on important events.
  • multiple videos/images over a period of time (e.g., a day) are condensed into a short summary video (e.g., 30 seconds).
  • Image recognition techniques are used to identify important events (e.g., the presence of people), for which a time lapse video is generated, while less important events and lack of activity are provided with a much greater time interval for the time-lapse. This creates a weighted video summary with different time-lapse speeds that focuses on important events.
  • the characteristics of events are logged into an event log, and this event log is used to generate the summary.
  • Each event may be assigned a contextual tag such that events may be summarized easily.
  • image recognition is used to determine the type of location where the camera is mounted, such as indoors or outdoors, in a conference room or in a dining room.
  • a filter for selecting the types of events for a summary has parameters varied depending on the type of location. For example, an indoor location may tag events where humans are detected, and ignore animals (pets).
  • An outdoor location can have the parameters set to detect both human and animal movement.
  • Determining the type of scene in one embodiment involves determining the relevance of detected events, in particular motion. At a basic level, it involves the elimination of minimal motion, or non-significant motion (curtains moving, a fan moving, shadows gradually moving with the sun during the day, etc.). At a higher level, it involves grouping "meaningful" things together, for scenes such as breakfast, kids having a pillow fight, etc.
  • Some primary cues for determining when a scene, or activity, begins and ends include the amount of time after movement stops (indicating the end of a scene), continuous movement for a long period of time (indicating part of the same scene), new motion in a different place (indicating a new scene), and a change in the number of objects, or a person leaving, or a new person entering.
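  • As a hedged illustration of how these scene-boundary cues could be combined, the following Python sketch assumes motion observations carrying a timestamp, a coarse motion region and an object count; the field names and the 30-second quiet gap are illustrative only, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MotionSample:
    """One motion observation (all field names hypothetical)."""
    timestamp: float   # seconds since start of recording
    has_motion: bool
    region: str        # coarse location of motion, e.g. "left", "center"
    num_objects: int   # number of moving objects detected

def segment_scenes(samples: List[MotionSample], quiet_gap: float = 30.0):
    """Group samples into scenes: a long gap with no motion ends a scene,
    while motion in a new place or a change in the number of objects
    (someone entering or leaving) starts a new scene."""
    scenes, current = [], []
    last_motion_time = None
    for s in samples:
        if not s.has_motion:
            # End the scene once movement has stopped for `quiet_gap` seconds.
            if current and last_motion_time is not None and \
                    s.timestamp - last_motion_time > quiet_gap:
                scenes.append(current)
                current = []
            continue
        if current:
            prev = current[-1]
            if s.region != prev.region or s.num_objects != prev.num_objects:
                scenes.append(current)   # new motion elsewhere, or person count changed
                current = []
        current.append(s)
        last_motion_time = s.timestamp
    if current:
        scenes.append(current)
    return scenes
```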
  • D. VIDEO SEARCHING FOR FILTERED AND TAGGED MOTION
  • captured video summaries are tagged with metadata so the videos can be easily searched.
  • the videos are classified into different scenes, depending on the type of action in the video, so searching can be based on the type of scene.
  • tags are provided for moving objects or people.
  • the type of object that is moving is tagged (car, ball, person, pet, etc.).
  • Video search results are ranked based on the weighting of the video events or video summaries.
  • the video event weighting provides a score for a video event based on weights assigned to tags for the event.
  • high weights are assigned to a time duration tag that is a long time, a motion tag indicating a lot of motion, or centered motion, a person tag based on a close relationship to the user, etc.
  • the video summary weighting focuses on important events, with multiple videos/images over a period of time condensed into a short summary video. This creates a weighted video summary with different time-lapse speeds that focuses on important events.
  • a processor in a camera does the initial filtering of video, at least based on the presence of significant motion.
  • the creation of video events and summaries is done by a server from video transmitted by the camera over the Internet.
  • a smart phone, with a downloaded application provides the display and user interface for the searching, which is done in cooperation with the server.
  • the search results provide videos that don't have tags matching the search terms, but are proximate in time.
  • a search for "birthday” may return video summaries or video events that don't include birthday, but include the birthday boy on the same day.
  • other tags in the videos forming the search results may be used to provide similar video events.
  • a search for "pool parties” may return, below the main search results, other videos with people in the pool parties found.
  • FIG. 1 is a block diagram of a camera used in an embodiment of the invention.
  • FIG. 2 is a block diagram of a cloud-based system used in an embodiment of the invention.
  • FIG. 3 is a flowchart illustrating the basic steps performed in the camera and the server according to an embodiment of the invention.
  • FIG. 4 is a diagram illustrating the transition to different user interface display camera views according to an embodiment of the invention.
  • FIG. 5 is a diagram illustrating the transition to different user interface display menus according to an embodiment of the invention.
  • FIG. 6 is a diagram illustrating a split user interface display for multiple webcams according to an embodiment of the invention.
  • FIG. 1 is a block diagram of a camera used in an embodiment of the invention.
  • a camera 100 has an image sensor 102 which provides images to a memory 104 under control of microprocessor 106, operating under a program in a program memory 107.
  • a microphone 110 is provided to detect sounds, and a speaker 112 is provided to allow a remote user to talk through the camera.
  • a transceiver 108 provides a wireless connection to the Internet, either directly or through a Local Area Network or router.
  • a battery 114 provides power to the camera.
  • FIG. 2 is a block diagram of a cloud-based system used in an embodiment of the invention.
  • Camera 100 connects wirelessly through the Internet 202 to a remote server 204.
  • Server 204 communicates wirelessly with a smart phone 206, or other user computing device.
  • Camera 100 can also connect locally to smart phone 206, or to a local computer 208.
  • the local computer can do some of the image processing, such as advanced motion detection and object recognition and tagging, and can return the processed video and tags to camera 100 for subsequent transmission to server 204, or local computer 208 could directly transmit to server 204, such as when camera 100 is in a low power, battery mode.
  • FIG. 3 is a flowchart illustrating the basic steps performed in the camera and the server according to an embodiment of the invention.
  • the steps above dotted line 300 are performed in the camera 100, while the steps below the dotted line are performed in the server 204.
  • the camera periodically captures a short video (e.g., 4 seconds) or a still image, such as every 8 minutes (302).
  • the captured short video is buffered and tagged.
  • Such camera tags include at least the time and date and the lack of motion.
  • the camera is programmed to detect motion (step 304) from image analysis. If the amount of motion, such as the number of pixels changing, is less than a predetermined amount (306), the video of the motion is discarded (308). If the amount of motion is greater than the threshold, it is determined whether the motion lasts for more than a predetermined amount of time (310). If the motion time is less than the predetermined time, it is discarded (308). If the motion lasts for more than the predetermined time, it is sent to a buffer and tagged with metadata (314). Such camera metadata tags include the time and date, the length of the video, and the amount of motion.
  • more advanced motion detection and object recognition can be done on the camera (315), or in a local computer.
  • the combined video events are then streamed wirelessly to the remote server (312).
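  • As a rough sketch of the camera-side flow above (steps 304-314: motion threshold, duration threshold, then discard or buffer-and-tag), the following Python treats a frame as a flat list of pixel intensities; the thresholds, frame rate and tag names are assumptions for illustration, not values from the patent.

```python
import time

MOTION_PIXEL_THRESHOLD = 5000   # hypothetical threshold on changed pixels (step 306)
MIN_MOTION_SECONDS = 2.0        # hypothetical minimum motion duration (step 310)

def changed_pixels(frame, prev_frame, per_pixel_delta=25):
    """Count pixels whose intensity changed by more than per_pixel_delta."""
    return sum(1 for a, b in zip(frame, prev_frame) if abs(a - b) > per_pixel_delta)

def process_motion_event(frames, frame_period=1 / 15):
    """Return a buffered, tagged event if the clip passes both thresholds,
    otherwise None (the clip is discarded, step 308)."""
    motion_frames, peak_motion = 0, 0
    for prev, cur in zip(frames, frames[1:]):
        delta = changed_pixels(cur, prev)
        if delta > MOTION_PIXEL_THRESHOLD:
            motion_frames += 1
            peak_motion = max(peak_motion, delta)
    if peak_motion <= MOTION_PIXEL_THRESHOLD:
        return None                              # not enough motion
    if motion_frames * frame_period < MIN_MOTION_SECONDS:
        return None                              # motion too brief
    return {
        "frames": frames,
        "tags": {                                # camera metadata tags (step 314)
            "timestamp": time.time(),
            "length_s": len(frames) * frame_period,
            "amount_of_motion": peak_motion,
        },
    }
```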
  • the images and video can be higher resolution than the bandwidth used for streaming.
  • By locally buffering the images and video it can be streamed with a delay, and transmitted at a lower frame rate.
  • These can be buffered, and streamed over 20 minutes, for example. This provides a time-delayed stream, but with more resolution at lower bandwidth.
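  • A worked example of this trade-off, with hypothetical numbers: a short high-resolution clip that would need tens of megabits per second to stream live needs only a small fraction of that when buffered and spread over 20 minutes.

```python
def average_bitrate(clip_bytes: int, spread_seconds: float) -> float:
    """Average upload bitrate (bits per second) if a buffered clip is
    transmitted over `spread_seconds` rather than in real time."""
    return clip_bytes * 8 / spread_seconds

# Hypothetical 4-second clip of about 12 MB:
live = average_bitrate(12_000_000, spread_seconds=4)           # ~24 Mbit/s live
delayed = average_bitrate(12_000_000, spread_seconds=20 * 60)  # ~80 kbit/s over 20 min
print(f"live: {live / 1e6:.0f} Mbit/s, delayed: {delayed / 1e3:.0f} kbit/s")
```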
  • the remote server tags the received still images as having no motion.
  • the remote server filters (316) the received video.
  • the filtering is designed to eliminate video motion that is not of interest. For example, algorithms process the video to determine the type of motion. If the motion is a curtain moving, a moving shadow of a tree on a window, a fan in the room, etc., it can be filtered out and discarded.
  • a location detector 318 can be used to process the image to determine the type of location of the camera. In particular, is it inside or outside, is it in a dining room or a conference room, etc. Artificial intelligence can be applied to determine the location. For example, instead of a complex object recognition approach, a holistic review of the image is done. The image is provided to a neural network or other learning application. The application also has access to a database of stored images tagged as particular locations. For example, a wide variety of stored images of kitchens, dining rooms and bedrooms are provided. Those images are compared to the captured video or image, and a match is done to determine the location. Alternately, a user interface can allow a user to tag the type of location. The user interface can provide the user with the presumed location, which the user can correct, if necessary, or further tag (e.g., daughter's bedroom).
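  • A minimal sketch of the holistic comparison described above, assuming whole-image feature vectors (for example a GIST-style descriptor or a neural-network feature layer, computed elsewhere) and a small reference database of vectors already tagged with location labels; the nearest-neighbour rule and cosine similarity are illustrative choices, not the patent's method.

```python
import math
from typing import Dict, List, Tuple

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify_location(frame_features: List[float],
                      reference_db: Dict[str, List[List[float]]]) -> Tuple[str, float]:
    """Compare the captured frame's feature vector against stored vectors
    tagged as particular locations (e.g. "kitchen", "bedroom") and return
    the best-matching label with its similarity score."""
    best_label, best_score = "unknown", -1.0
    for label, vectors in reference_db.items():
        for ref in vectors:
            score = cosine_similarity(frame_features, ref)
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score

# The guessed label can then be offered to the user for confirmation or
# further tagging (e.g., "daughter's bedroom").
```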
  • One example of a holistic image review process is set forth in "Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope" by Oliva and Torralba.
  • the location may be a bedroom, while the scene is a sleeping baby.
  • the user is prompted to label the scene (e.g., as sleeping baby).
  • there can be automatic detection of the scene using a neural network or similar application with comparisons to images of particular scenes, and also comparisons to previously stored images and videos labelled by the user.
  • various cues are used in one embodiment to determine the type of scene. For example, for a "sleeping baby," the video may be matched to a baby in bed scene from examination of the video.
  • a birthday party can be detected holistically using different cues, including the comparison to birthday party images, motion indicating many individuals, singing (e.g., the song "Happy Birthday"), etc.
  • previous scenes for a user are stored, and used for the comparison. For example, a previous scene may be for "breakfast," after the user has been prompted to confirm.
  • the filtering parameters can be provided to filtering block 316.
  • the location/scene would set some priorities about what is expected and what, in that particular situation, is more relevant/interesting to the user. What is interesting in one scene might not be interesting in another scene. For example, if the location is a living room, there would be suppression of constant motion at a particular spot, which quite likely might be due to a TV or a fan. For an outdoor location, much more motion is expected due to wind or other weather conditions.
  • the parameters of the video processing (e.g., thresholds) are adapted in order to suppress such motions (moving leaves, etc.).
  • regular motion patterns in an outdoor setting are suppressed in one embodiment (e.g., cars passing by on the street).
  • If the setting is a conference room and the scene is a meeting, spotting small motion is relevant to show people sitting together and discussing, but not moving much.
  • a different filtering is provided, to capture small movements of the baby, and not filter them out. For example, it is desirable to confirm that the baby is breathing or moving slightly.
  • the program determines if a human or animal is present (320).
  • the particular human can be identified using facial recognition (322).
  • the user can tag various individuals to initialize this process. Certain animals can be identified the same way, such as by the user providing a photo of the family pet, or tagging the pet in a video captured.
  • Video that passes through the filtering, and has a human or animal identified is then tagged (324) with context data.
  • the tag, or metadata includes the identity of the persons or animals, the time of day, the duration of the video, etc. In one embodiment, there is extraction of other meta-data which is helpful for further learning and personalization.
  • Examples include the "colorfulness,” the amount of motion, the direction/ position where motion appears, the internal state of the camera (e.g. if it is in night vision mode), the number of objects, etc. Most of this data is not accessible by the user. However, this (anonymous) data provides a foundation for gathering user-feedback and personalization.
  • supervised personalization is provided (user directed, or with user input). This personalization is done using various user input devices, such as sliders and switches or buttons in the application, as well as user feedback.
  • Unsupervised personalization is provided in another embodiment, where the application determines how to personalize for a particular user without user input (which may be supplemented with actual user input and/or corrections).
  • Examples of unsupervised personalization include using statistics of the scene and implicit user feedback. The use of cues to determine if there is a sleeping baby, as discussed above, is an example of unsupervised personalization.
  • Various types of user feedback can be used to assist or improve the process. For example, the user can be prompted to confirm that a "sleeping baby" has been correctly identified, and if not, the user can input a correct description. That description is then used to update the data for future characterizations.
  • a summary of a day or other period of time (e.g., since the last application launch) is then generated (326) using the still images and video.
  • the summary is then condensed (328) to fit into a short time clip, such as 30 seconds. This condensing can reduce the number of still images used (such as where there is a long sequence without motion), and can also reduce, or fast forward the video at different rates, depending on the determined importance.
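  • One way the condensing step 328 could allocate a fixed clip length across events of different importance is sketched below; the proportional-share rule, the 30-second budget and the one-second floor are assumptions for illustration.

```python
from typing import List, Tuple

def allocate_summary_time(events: List[Tuple[str, float]],
                          total_seconds: float = 30.0,
                          min_seconds: float = 1.0) -> List[Tuple[str, float]]:
    """Split the summary length across events in proportion to their weights,
    dropping the lowest-weighted events while any share would fall below
    `min_seconds`. `events` is a list of (event_id, weight) pairs."""
    events = sorted(events, key=lambda e: e[1], reverse=True)
    while events:
        total_weight = sum(w for _, w in events)
        shares = [(eid, total_seconds * w / total_weight) for eid, w in events]
        if shares[-1][1] >= min_seconds:
            return shares
        events = events[:-1]      # drop the least important event and retry
    return []

# Example: eight motion events scored 1-9 condensed into a 30-second day brief.
print(allocate_summary_time([(f"event{i}", w) for i, w in
                             enumerate([9, 8, 7, 5, 4, 3, 2, 1])]))
```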
  • FIG. 4 is a diagram illustrating the transition to different user interface display camera views according to an embodiment of the invention.
  • a display 402 provides a live video stream (at a lower resolution than the time delayed summaries).
  • a signal is relayed through the server to the webcam to start the webcam streaming images. This provides the live view shown.
  • Certain data is overlaid on the display at position 404. In the example shown, that data is an indication of the location or other label given to the webcam (living room), an indication that it is a live streaming view (live), and a clock indicating the current time.
  • a view 408 which includes a series 410 of bubble indicators for stored video scenes.
  • View 408 also provides a series of icons 412.
  • Icon 414 is for sharing the video summary with others
  • icon 416 is for storing the video to a gallery
  • icon 418 is for activating a speaker to talk to whomever is in the room with the webcam, like a walkie-talkie push to talk function.
  • the series of bubble icons 410 includes a larger bubble 420 indicating "live view." Bubble 420 corresponds to what is currently being displayed, and is enlarged to show which view is selected.
  • Icons 422 and 424 indicate videos captured for important motion detection events, with the numbers in the bubbles indicating how long ago the video was captured (e.g., 2 minutes and 37 minutes in the example shown). Alternately, the bubbles can have a timestamp.
  • the color of bubbles 422 and 424 indicates the determined importance of the event captured. If the user were to select, for example, bubble 422, that bubble would be locked in and increase in size, while moving to the middle of the series.
  • Bubble 426 is a "day brief," which will display the condensed summary of the day, from step 328 in FIG. 3.
  • images or icons can provide more information about the scene indicated by a bubble, such as an image of a dog or cat to indicate a scene involving the family pet, or a picture or name tag of a person or persons in the scene.
  • Display 440 shows the "day brief" bubble 426 after being selected (with the play icon eliminated). The video is then played, with a pause icon 442 provided. A timeline 444 is provided to show progress through the day brief.
  • FIG. 5 is a diagram illustrating the transition to different user interface display menus according to an embodiment of the invention.
  • a display 502 is activated by swiping to the right from the left side of the screen. This pulls up 3 menu icons 504, 506 and 508.
  • Tapping icon 504 brings up device menu screen 510.
  • Tapping icon 506 brings up notifications screen 512.
  • On display 510 are a variety of icons for controlling the device (webcam). Icon 516 is used to turn the webcam on/off. Icon 518 is used to add or remove webcams. On display 512, icon 520 allows activation of push notifications to the smart phone, such as with a text message or simply providing a notification for an email. Icon 522 provides for email notification. Display 514 provides different account options, such as changing the password, and upgrading to cloud (obtaining cloud storage and other advanced features).
  • FIG. 6 is a diagram illustrating a split user interface display for multiple webcams according to an embodiment of the invention.
  • Display 602 is the main, large display showing the living room webcam.
  • Display 604 shows a play room webcam and display 606 shows a study webcam.
  • the display of FIG. 6 is the default display provided when the application is launched.
  • a primary display provides streaming video, while the other displays provide a still image. Alternately, all displays can provide streaming video.
  • the primary display can be the first camera connected, or a camera designated by the user.
  • the UI upon the application being launched, provides a video summary of content since the last launch of the application.
  • the user can scroll through the video at a hyper-lapse speed, and then select a portion for a normal time lapse, or normal time view.
  • the user can also switch to real-time live streaming, at a lower resolution than the time-delayed summaries.
  • the summaries are continually updated and weighted. For example, a summary may contain 8 events with motion after 4 hours. When additional events are detected, they may be weighted higher, and some of the original 8 events may be eliminated to make room for the higher weighted events.
  • some of the original, lower-weighted events may be given a smaller portion of the summary, such as 2 seconds instead of 5 seconds.
  • the user can access a more detailed summary, or a second tier summary of events left out, or a longer summary of lower-weighted events.
  • Scene intuition is determining the relevance of detected events, in particular motion. At a basic level, it involves the elimination of minimal motion, or non-significant motion (curtains moving, a fan moving, shadows gradually moving with the sun during the day, etc.). At a higher level, as discussed in more detail in examples below, it involves determining the camera location from objects detected (indoor or outdoor, kitchen or conference room). An activity can be detected from people or pets detected. A new scene may be tagged if a new person enters or someone leaves, or alternately if an entirely different group of people is detected. Different detected events can be assigned different event bubbles in the UI example above.
  • the assignment of video to different summaries, represented by the bubbles involves grouping "meaningful" things together. For example, different activities have different lengths. Eating breakfast might be a rather long one, while entering a room might be short. In one embodiment, the application captures interesting moments which people would like to remember/ save/ share (e.g. kids having a pillow fight, etc.).
  • Primary cues for determining when a scene, or activity, begins and ends include the amount of time after movement stops (indicating the end of a scene), continuous movement for a long period of time (indicating part of the same scene), new motion in a different place (indicating a new scene), and a change in the number of objects, or a person leaving, or a new person entering.
  • the videos can be easily searched.
  • searching can be based on the type of scene.
  • the searching can also be based on time, duration of clips, people in the video, particular objects detected, particular camera location, etc.
  • the application generates default search options based on matching detected content with possible search terms. Those possible search terms can be input by the user, or can be obtained by interaction with other applications.
  • the user may have tagged the names of family members, friends or work associates in a social media or other application, with images corresponding to the tags.
  • the present application can then compare those tagged images to faces in the videos to determine if there is a match, and apply the known name.
  • the default search terms would then include, for example, all the people tagged in the videos for the time period being searched.
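  • A minimal sketch of how default search terms could be assembled from the stored tags, assuming each video event carries a hypothetical tag dictionary and that the names imported from a social media profile are available as a simple set; the structures are illustrative only.

```python
from typing import Dict, Iterable, List, Set

def default_search_terms(video_tags: Iterable[Dict[str, List[str]]],
                         known_people: Set[str]) -> List[str]:
    """Collect every person tag that matches a name the user has already
    tagged elsewhere, plus scene and object tags, for the searched period."""
    terms = set()
    for tags in video_tags:                 # one tag dict per video event
        terms.update(p for p in tags.get("people", []) if p in known_people)
        terms.update(tags.get("scene", []))
        terms.update(tags.get("objects", []))
    return sorted(terms)

events = [
    {"people": ["Mitch"], "scene": ["birthday"], "objects": ["cake"]},
    {"people": ["unknown"], "scene": ["breakfast"], "objects": ["dog"]},
]
print(default_search_terms(events, known_people={"Mitch", "Anna"}))
```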
  • tags are provided with later searching in mind.
  • Tags are provided for the typical things a user would likely want to search for.
  • One example is taking the names of people and pets.
  • Another example is tagging moving objects or people.
  • the type of object that is moving is tagged (car, ball, person, pet, etc.).
  • object detection is used for moving objects.
  • Other tags include the age of people, the mood (happy - smiles, laughing detected, or sad - frowns, furrowed brows detected).
  • video search results are ranked based on the weighting of the video summaries, as discussed below and elsewhere in this application. Where multiple search terms are used, the results with the highest weighting on the first search term are presented first in one embodiment. In another embodiment, the first term weighting is used to prioritize the results within groups of videos falling within a highest weighting range, a second highest weighting range, etc.
  • video search results also include events related to the searched term. For example, a search for "Mitch Birthday” will return video events tagged with both "Mitch” and "Birthday.” In addition, below those search results, other video events on the same date, tagged "Mitch,” but not tagged “Birthday,” would also be shown.
  • the "Birthday” tag may be applied to video clips including a birthday cake, presents, and guests. But other video events the same day may be of interest to the user, showing Mitch doing other things on his birthday.
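  • The "Mitch Birthday" behaviour above can be sketched as a two-tier search: primary results carry every search term as a tag, and related results share at least one term and fall on the same date as a primary hit. The event dictionaries and field names below are assumptions for illustration.

```python
import datetime
from typing import Dict, List, Tuple

def search_events(events: List[Dict], terms: List[str]) -> Tuple[List[Dict], List[Dict]]:
    """Return (primary, related) results for a tag search."""
    wanted = {t.lower() for t in terms}
    primary = [e for e in events if wanted <= {t.lower() for t in e["tags"]}]
    primary_dates = {e["timestamp"].date() for e in primary}
    related = [e for e in events
               if e not in primary
               and wanted & {t.lower() for t in e["tags"]}
               and e["timestamp"].date() in primary_dates]
    return primary, related

day = datetime.datetime(2016, 9, 12, 14, 0)
events = [
    {"tags": ["Mitch", "Birthday"], "timestamp": day},
    {"tags": ["Mitch", "pool"], "timestamp": day + datetime.timedelta(hours=3)},
]
print(search_events(events, ["Mitch", "Birthday"]))
```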
  • video and images can be captured at high resolution, buffered, and then streamed over a longer period of time. This is possible since there is not constant live streaming, but only streaming of periodic no motion clips, and intermittent motion clips. For example, images can be captured at 2-3 megabytes, but then streamed at a bandwidth that would handle 500 kilobits live streaming.
  • the image data is stored in the camera memory, transcoded and transmitted.
  • When the video summaries are subsequently viewed by the user, they can be streamed at high bandwidth, since they are only short summaries. Alternately, they can also be buffered in the user's smart phone, in a reverse process, with an additional time delay. Alternately, the video can be delivered at low resolution, followed by high resolution to provide more detail where the user slows down the time lapse to view in normal time, or to view individual images.
  • a webcam provides a coarse filtering and basic processing of video, which is transmitted to the "cloud" (a remote server over the Internet) for further processing and storing of the time-lapse video sequences. More processing can be done on the local camera to avoid cloud processing, while taking advantage of larger cloud storage capability.
  • a user can access the stored video, and also activate a live stream from the webcam, using an application on a smartphone.
  • the local camera detects not only motion, but the direction of the motion (e.g., left to right, into room or out of room). The origin of the motion can also be determined locally (from the door, window, chair, etc.)
  • the local camera, or a local computer or other device in communication with the camera, such as over a LAN can do some processing. For example, shape recognition and object or facial recognition and comparison to already tagged images in other use applications (e.g., Facebook) could be done locally. In one embodiment, all of the processing may be done locally, with access provided through the cloud (Internet).
  • the processing that is done on the camera is the processing that requires the higher resolution, denser images. This includes motion detection and some types of filtering (such as determining which images to perform motion detection on). Other functions, such as location detection, can be done on lower resolution images and video that are sent to the cloud.
  • the camera can be plugged into line power, either directly or through a stand or another device, or it can operate on battery power.
  • the camera has a high power (line power) mode, and a low power (battery) mode.
  • power is conserved through a combination of techniques.
  • the number of frames analyzed for motion is reduced, such as every 5th frame instead of a normal every 3rd frame.
  • only basic motion detection is performed in the camera, with more complex motion recognition and object detection done by a processor in the remote server, or a local computer.
  • the camera is put into a sleep mode when there is no motion, and is woken periodically (e.g., every 8 minutes) to capture a short video or image.
  • Those videos/images can be stored locally, and only transmitted when there is also motion video to transmit, at some longer period of time, or upon request, such as upon application launch.
  • In sleep mode, everything is turned off except the parts of the processor needed for a timer and for waking up the processor.
  • the camera is woken from sleep mode periodically, and the image sensor and memory are activated.
  • the transmitter and other circuitry not needed to capture and process an image remains asleep.
  • An image or video event is detected.
  • the image or video event is compared to a last recorded image, or video event. If there is no significant motion, the camera is returned to the sleep mode.
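  • The battery-mode behaviour just described can be sketched as a simple wake-capture-compare loop; the 8-minute interval mirrors the example above, while the four callables stand in for hypothetical camera-firmware hooks and are not part of the patent.

```python
import time

WAKE_INTERVAL_S = 8 * 60      # wake every 8 minutes (example value from above)

def low_power_loop(capture_image, significant_motion, transmit, should_stop):
    """Sleep with everything off except a timer, wake periodically, capture a
    still or short clip, and only power the radio when the new capture differs
    significantly from the last one; otherwise buffer it and go back to sleep."""
    last_image = None
    pending = []                           # stills held until motion video is sent
    while not should_stop():
        time.sleep(WAKE_INTERVAL_S)        # stands in for the hardware sleep timer
        image = capture_image()            # sensor and memory powered; radio still off
        if last_image is not None and significant_motion(image, last_image):
            transmit(pending + [image])    # motion: wake the transmitter, send backlog
            pending = []
        else:
            pending.append(image)          # no motion: store locally, stay asleep
        last_image = image
    return pending
```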
  • tags are included for each frame of data.
  • tags may be applied to a group of frames, or some tags may be for each frame, with other tags for a group of frames.
  • minimum tags include a time stamp and indication of motion present, along with the amount of motion. Additional tags include:
  • Type of motion (e.g., walking, running, cooking, playing, etc.).
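  • One possible data structure for the per-frame and per-group tags listed above is sketched below; the field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FrameTags:
    """Minimum per-frame tags plus optional additional tags."""
    timestamp: float                   # time stamp (required)
    motion_present: bool               # indication of motion (required)
    amount_of_motion: int = 0          # e.g., number of changed pixels
    motion_type: Optional[str] = None  # e.g., "walking", "cooking", "playing"
    people: List[str] = field(default_factory=list)
    objects: List[str] = field(default_factory=list)

@dataclass
class EventTags:
    """Tags that apply to a whole group of frames (one video event)."""
    scene: Optional[str] = None        # e.g., "breakfast", "meeting"
    location: Optional[str] = None     # e.g., "kitchen", "conference room"
    frames: List[FrameTags] = field(default_factory=list)
```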
  • the product comprises at least one camera with at least a microphone, and an application that can be downloaded to a smart phone or other device.
  • the application executes a series of steps. It prompts the user to enter a variety of information, including name, email, etc.
  • the application will automatically, or after a user prompt, access user data and other applications to build a profile for use in object, people and event detection.
  • a user's social media applications may be accessed to obtain tagged images identifying the user's family, friends, etc. That data can be uploaded to the cloud, or provided to the processor on the camera or another local processing device for use in examining videos.
  • the user's calendar application may be accessed to determine planned meetings, locations and participants to match with a camera location, where applicable.
  • the summaries or live streams can be shared with others using a variety of methods.
  • applications such as Periscope or Meercat can be used to share a stream, or set a time when video summaries will be viewable.
  • a video event can also be shared on social networking and other sites, or by email, instant message, etc.
  • When the sharing icon is selected, the user is presented with options regarding what method of sharing to use and also with whom to share. For example, a list of people identified in the video summary is presented for possible sharing.
  • the camera can be part of an episode capture device which includes other sensors, such as a microphone.
  • the camera in certain embodiments can monitor any type of event, interaction, or change in an environment that can be detected by a sensor and subsequently recorded (including, but not limited to, by an image recording device), whether in the form of an image, an audio file, a video file, a data file or other data storage mechanism. Monitored events include, but are not limited to: motion, date and time, geographic location, and audio. Suitable sensors include a motion sensor (including the combination of a motion sensor with an algorithm capable of identifying certain types of motion), a proximity sensor, temperature sensor, capacitive sensor, inductive sensor, magnet, microphone, optical sensor, antenna, Near Field Communication, a magnetometer, a GPS receiver and other sensors.
  • the cameras can be digital cameras, digital video cameras, cameras within smartphones, tablet computers, laptops or other mobile devices, webcams, and similar.
  • the present invention offers the ability to add tags with contextual relevance to a stream of data representing an event that has occurred.
  • a camera is set up to observe a kitchen from 6 AM to 6 PM. Events occur within the scene viewed by the camera, such as a family eating breakfast.
  • the recorded content is analyzed for context.
  • the camera analyses the data based on audio excerpts of the noise of plates being used, determining that it is placed in a kitchen and there is a meal taking place. Selecting audio data is merely one example of how this may be achieved, but other techniques will be apparent to the skilled person for performing this task. Further, the analysis may be performed within the camera, in another locally connected device, or remotely (such as in the cloud).
  • a contextual tag is then allocated to data recorded at the time the noise of plates is detected. For example, this may occur at 7:15 AM, and the camera further recognises that the people present within the scene are family members, using facial recognition techniques. This creates the opportunity to add a further contextual tag based on the additional information due to the identification of the family members, but also based on the time information, which is utilised to form a timestamp. Timestamp information may be used in correlation with the additional sensed information to distinguish an event from other events with similar actions, e.g. to identify the event as "breakfast" in contrast to "lunch" or "dinner".
  • contextual tags allow the creation of a fully customisable summary.
  • the summary may be based upon predetermined criteria or upon user preferences.
  • the scene is therefore monitored over an extended period of time, analysed and contextual tags and timestamps applied appropriately.
  • the contextual tags and timestamps enable the generation of a more specific summary focused on a particular context within the scene, or the context of a particular event.
  • a summary comprising a short video sequence, or a summary comprising a summary of relevant information to the event "breakfast", such as who was in attendance, how long did breakfast last and so on.
  • the information relevant to the event can also be displayed as text information overlaying the presented video sequence.
  • a summary comprising details of the same event occurring regularly within a scene, such as a summary of breakfasts occurring over the previous seven days.
  • the present invention therefore offers a completely flexible manner of producing a summary based upon the assignment of contextual tags to events occurring within a scene, which may be fully selectable and determined by a user, or determined dynamically by an episode capture device, or a combination of both. This is described further in a series of non-limiting examples below.
  • a video data recording device such as a camera, able to communicate with a communication network such as the internet, a local area network (LAN), or cellular network for transmitting data, is placed in a conference room.
  • the camera observes the scene, that is, monitors all events occurring within the room within an episode, such as 24 hours, and records the scene using video capture for processing.
  • the episode therefore contains periods of activity (people entering and using a room) and inactivity (the room is empty).
  • This video capture forms the initial phase of the method of producing a summary in accordance with an exemplary embodiment of the present invention.
  • the data obtained during the video capture is sent to be processed to create an event log. This may be done either at the episode capture device, in this example, at the camera, or may be done remotely over a communications network such as the internet (at a remote server, in the Cloud) or at a processor in communication with the device, such as over a local area network (LAN).
  • the processing may be done live, that is during the video capture stage, or subsequently, once the video capture stage is complete, or at an offset, for example, 30 minutes post-video capture.
  • the sensory information may comprise data relating to the output of visual or non-visual sensors.
  • An event may be detected and/or identified by any of these sensors, for example, an optical beam motion detector detects the movement of a person through the door of the conference room. In this situation, the event is generated by an object, the person, and the presence of a person is identified in the room.
  • the episode capture device may also determine the presence of static items in the room, such as chairs, which information is fed into the event log when required.
  • Visual sensory information obtained from the visual sensors is logged. This may include:
  • Determining the identification of an object using a recognition technology for example, facial recognition methods.
  • Non-visual sensory information obtained from the non-visual sensors is logged. This may include:
  • the sensory information is used to create contextual tags, that when applied to the data allow a user to create meaningful summaries.
  • the contextual tag indicates the context of the event, and may be specific context or more general context. For example, the tag may be "at least one person present", or “more than one person present", or “more than one person present and that there is interaction between the people", or "a meeting is in progress". In the present example the contextual tag indicates that a particular event is a meeting.
  • the timestamp data may be applied separately to the event, or may be part of the contextual tag, or the contextual tag may in fact be the timestamp data.
  • a contextual tag indicating the start of a meeting is assigned.
  • the camera assigns a contextual tag indicating that the room is being used for a private call. If the camera is connected to a communications network over which a presentation in the meeting room is accessed, the camera may assign contextual tags representing the start of a meeting, the end of a meeting, a break occurring within a meeting, or specific parts of a presentation. In this way the contextual tags can be generated using information directly available via the camera (such as observing the video scene), but may also use information available via other sensors/systems (i.e. information related to use of a projector).
  • a summary is created with at least a subset of the events based upon the contextual tags.
  • the summary performs the function of a report to a conference room organiser showing the use of the facilities.
  • the summary report could take various forms.
  • the summary report may be a text based report, a video summary, or a text report with "clickable" thumbnails of significant events.
  • the conference room organiser may search the summary by time stamp data or contextual tag.
  • events observed in a scene may be matched to stored or input data in order to produce a more meaningful summary.
  • the episode capture device may be furnished with identity information about frequent occupants of the room, such that it can identify specific room occupants.
  • Contextual tags may be added in order to identify specific room occupants in a summary.
  • the stored or input data identifies an object, which may be a person, and the stored or input data may be used to choose and assign a contextual tag identifying the person. This enables a user to determine if only authorised people such as employees enter the conference room, or whether it is used frequently by non-employees, such as customers or clients.
  • if the stored or input data matching step identifies a person, it may be desirable to use characteristic identification techniques, such as facial recognition techniques. This may then be used to determine the subset of events included in the summary, matching events observed in the scene to the stored or input data to create matched events based upon the contextual tags, such that the subset of events contains the matched events.
  • the facial recognition example outlined above is a special case of where an event is triggered by an object.
  • the episode capture device identifies the object within the scene (the person), and identifies a characteristic of the object (the name of the person), and both the identity of the object (that it is a person) and the characteristic (the name of the person) are included in the summary. This may be the case for other objects, such as identifying a burning candle in a room - initially the candle is identified and then that it is burning is inferred from its temperature.
  • Object monitoring: In another example, a camera may be used to monitor a room for theft. The contents, or objects, in the room may be logged. Settings may be configured such that events are only triggered if an object is removed from the scene or the position of the object changes. Thus people could enter or exit the scene without triggering an event, as long as the objects are not removed or moved.
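  • A minimal sketch of this object-monitoring mode, assuming per-frame object detections with normalised (x, y) positions and a logged baseline of where each object belongs; the tolerance value and data layout are illustrative.

```python
from typing import Dict, List, Tuple

Position = Tuple[float, float]

def object_events(frames_objects: Dict[int, Dict[str, Position]],
                  logged_objects: Dict[str, Position],
                  tolerance: float = 0.05) -> List[Tuple[int, str, str]]:
    """Trigger an event only when a logged object disappears or moves by more
    than `tolerance`; people entering or leaving trigger nothing by themselves."""
    events = []
    for frame_idx, detected in frames_objects.items():
        for obj_id, home in logged_objects.items():
            if obj_id not in detected:
                events.append((frame_idx, obj_id, "removed"))
            else:
                dx = detected[obj_id][0] - home[0]
                dy = detected[obj_id][1] - home[1]
                if (dx * dx + dy * dy) ** 0.5 > tolerance:
                    events.append((frame_idx, obj_id, "moved"))
    return events

logged = {"vase": (0.30, 0.55), "laptop": (0.70, 0.40)}
frames = {10: {"vase": (0.30, 0.55), "laptop": (0.70, 0.40)},
          20: {"vase": (0.62, 0.50)}}
print(object_events(frames, logged))   # vase moved and laptop removed in frame 20
```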
  • the episode capture device is preferably configured to connect to a data network, such that it may interact and/or communicate with other devices, such as smartphones and tablet computers. Processing to create the event log and the summary may take place at the episode capture device or remotely. Sensors, which may be provided within the episode capture device, within external devices, worn on a person, or provided within a scene, may be programmed either to monitor events, monitor a scene, or to trigger events. For example, a camera may be configured to interact with a movement sensor within a smartphone to record that a meeting attendee entered the scene at a walking pace and left the scene at a running pace.
  • the camera may record that a smartphone belonging to a particular user enters the region of a local area network (WiFi) that denotes the periphery of a scene, and therefore has entered the scene.
  • a camera is used as the episode capture device, and audio data is used to enhance the video data obtained.
  • other sensors may be used to capture events, such as, but not limited to, a motion sensor, including the combination of a motion sensor with an algorithm capable of identifying certain types of motion, proximity sensor, temperature sensor, capacitive sensor, inductive sensor, magnet, microphone, optical sensor, antenna, Near Field Communication, and similar devices.
  • An episode capture device is therefore a device that is capable of recording an event, and the data obtained may be used appropriately to create a summary.
  • Typical episode capture devices include image capture devices (cameras, in the visible, infrared or ultraviolet spectra) that may be digital (including CCD and CMOS devices).
  • Such devices are provided with visual and non-visual sensors either integral with the episode capture device (an accelerometer in a mobile phone having a camera) or separate to but in communication and connection with the episode capture device, so as to be in effect functionally integrated.
  • the sensor may detect that the temperature of a room increases at 6 AM, and decreases at 8 PM. It identifies these points as dawn and dusk, and applies contextual tags appropriately to each point.
  • Episode capture devices may be used separately or together to enhance a summary.
  • a shop monitors stock using a system of magnetic tags, which trigger an alarm when passed through an induction loop. It would be possible to combine a first episode capture device, such as a camera, and a second episode capture device, such as an induction sensor system, and to assign contextual tags at certain events. An item bearing a tag may be taken through the induction sensor, thus triggering an alarm. At this point a contextual tag may be assigned to the video feed obtained from the camera system and a summary generated accordingly.
  • the format of the summary may be adapted to include any event information that is of interest to a user.
  • the summary may include details of attendees including their identity, still images, audio recordings, information on types of events, and details of use that flag some kind of warning, such as where the device is unable to determine the identity of a person, or is unable to associate an event with an approved use of the room.
  • Contextual tags added to the data captured by the episode capture device enable the summary to be as detailed or as concise as desired.
  • the user may select from various pre-programmed options, or provide various criteria matching the contextual tags on which the summary may be based.
  • This may include type of event, frequency of event, length of video sequence, date and time, geographic location, audio content, as examples, although many other criteria are possible.
  • Storing criteria or inputting criteria to the image capture device, either directly or remotely, to form stored or input criteria and generating the summary using the stored or input criteria allows the user complete freedom of use.
  • the user may build a bespoke summary format or choose from a pre-programmed selection.
  • the summary may be generated by the episode capture device, a device in which the camera is positioned or using a remote system.
  • the summary may take various formats, depending on user preference.
  • One format is to show a video feed of all events and periods of inactivity at a changeable speed, such as time-lapse or hyperlapse.
  • Another is to combine a subset of certain events into a single video feed, for example, where these events are chosen by a user, as above, or where the events are chosen using stored or input data to create matched events. It is possible to delete or remove unimportant events based upon user criteria. For example, a user may specify that only meetings where there are 4 or more people present must be included in the summary.
  • the episode capture device records all of the events during the episode, and then selects only those corresponding to a meeting with 4 or more people present, effectively discarding all other events recorded.
  • Weighting: One further possibility is prioritising events using a weighting or other prioritisation method, such as a binary selection scheme.
  • a weighting is applied to an event, such that the subset of events in the summary is determined by the weighting.
  • the weighting itself is determined by a characteristic of an event, for example, the number of people in a meeting room, the identity of pets rather than persons, the temperature of an object. In the above example this is illustrated by considering that the meeting room has a maximum capacity of 6, and that an organiser is interested in finding out whether the room is being used to its maximum capacity.
  • One way of doing this is to assign a weighting to each event where fewer than 6 people attend a meeting; for example, an event where one person uses the room has a weighting of 5, two people using the room has a weighting of 4, and so on. Initially the user may select a summary based upon events having a weighting of 5 or less.
  • the weighting determines the prioritisation of the events within the subset.
  • events may be listed in order of the highest weighting first.
  • a weighting scale of 0-1, or 1-10 is used for each element weighted.
  • the presence of significant motion is used as a filter before anything is weighted. After that filter is passed, the weights are simply added together for each video event or image. For example, the presence of a lot of motion may contribute a weighting of 8 on a scale of 1-10.
  • the presence of people tagged as important by the user may add a weight of 7 for each such person present.
  • the presence of other people may provide a weight factor of 4 each.
  • the duration of significant motion may add a weight of 1 for each minute, up to a total of 10 minutes.
  • the weighting is as follows for a 10-minute video event (note that individual parts of the clip may have different weights):
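  • The original weighting example is not reproduced here, but the following sketch shows how the example weights just described (8 for a lot of motion, 7 per important person, 4 per other person, plus 1 per minute of motion up to 10) could be totalled for a single event; the motion threshold and exact figures are illustrative only.

```python
def event_weight(amount_of_motion: float,
                 important_people: int,
                 other_people: int,
                 duration_min: float,
                 motion_threshold: float = 0.7) -> int:
    """Sum the example weights for one video event."""
    score = 0
    if amount_of_motion >= motion_threshold:   # "a lot of motion"
        score += 8
    score += 7 * important_people              # people tagged as important
    score += 4 * other_people                  # other people present
    score += min(int(duration_min), 10)        # +1 per minute, capped at 10
    return score

# A 10-minute event with heavy motion, one tagged family member and one other
# person would score 8 + 7 + 4 + 10 = 29 under these assumptions.
print(event_weight(0.9, important_people=1, other_people=1, duration_min=10))
```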
  • events that are considered for summarization are within a specified period of time (e.g., from midnight until now, or during the last 2 hours, etc.) and contain significant motion (after the filtering step).
  • a summary rather than being a specified period of time, can be defined by a number of events, a percentage of events recorded, all events above a certain score, etc.
  • event scoring is based on the following cues:
  • Event duration: lower score for very short events.
  • Motion location and size: higher score for motion that is in the center and has a larger extent.
  • Motion anomaly: a model of past motion detected is created. A new motion observation gets a higher score if it is abnormal given the previous content. This can also be seen as a notion of 'surprise.'
  • Number of objects: higher score if more objects are moving in the event.
  • Detections: some detected concepts lead to higher scores, such as a detected person, a detected face, regions of skin color, etc.
  • Image quality: contrast, sharpness of the image or distribution of colors.
  • scores are combined using a weighted average. Other methods for combinations are also possible.
  • scores and weights are adapted or added / omitted based on the user's general preferences or user specifications for one summary.
  • the weights don't include the time of day when the event appears. This is handled in the second step:
  • 'filler' is added for long periods of no activity (for example, in a living room where a person is at work all day, and the only motion is present in the morning and the evening). That is, the playback speeds are adjusted, as already discussed above. A time lapse with 1 frame every 6 min is used for no activity periods, whereas a 'hyper lapse' style video is played for motion events (e.g., speeding up normal speed by a factor of 8). Of course, other particular time periods and speeds can be used.
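  • The two playback styles above can be sketched as a simple plan: inactive stretches become a time lapse showing one frame every 6 minutes, while motion events play as a hyper-lapse at 8x normal speed; the segment representation is an assumption for illustration.

```python
from typing import List, Tuple

def playback_plan(segments: List[Tuple[str, float]],
                  idle_frame_interval_s: float = 6 * 60,
                  hyperlapse_factor: float = 8.0) -> List[Tuple[str, float]]:
    """Given (kind, duration_s) segments with kind in {"idle", "motion"},
    return what to show: a frame count for idle time lapses, and a playback
    duration in seconds for hyper-lapsed motion events."""
    plan = []
    for kind, duration in segments:
        if kind == "idle":
            frames = max(1, int(duration // idle_frame_interval_s))
            plan.append(("time_lapse_frames", frames))
        else:
            plan.append(("hyper_lapse_seconds", duration / hyperlapse_factor))
    return plan

# A quiet 8-hour workday followed by a 4-minute motion event:
print(playback_plan([("idle", 8 * 3600), ("motion", 240)]))
# -> [('time_lapse_frames', 80), ('hyper_lapse_seconds', 30.0)]
```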
  • the episode capture device may make use of cloud data storage to create or enhance a summary; data relating to events and a scene may be stored within the episode capture device or within a cloud data storage facility. Data may then be downloaded from the cloud data storage as and when desired in creating a summary, such that at least one step in the method outlined above occurs using this data. This enables even devices with small memory capacity to be configured to create a summary, since at least one step outlined in the method above may take place remote from the episode capture device.
  • the ability to store and access large amounts of data relating to events and a scene also enables the creation of enhanced summaries.
  • a detailed summary may be considered as comprising many layers of information, summarising video data, audio data, geographic data and so on.
  • This layered approach allows a user to zoom into certain areas of interest.
  • a conference organiser receives a summary of a day's conference. This includes details of all participants, copies of presentations and handouts, all movement and geographical information as well as video and audio data of the events during the conference or of various conferences which took place in the respective conference room monitored by the event capture device.
  • the organiser is told that a certain event, such as a presentation, happened at a particular time.
  • the organiser can zoom into the summary at various times, and chooses to zoom into the event.
  • the detail within the summary allows the organiser to review and select a particular event, and to choose to have video data of the event streamed to a device to view.
  • This may be a device that the organiser chooses to view the summary on or another device.
  • the organiser may choose to view the summary on a smartphone.
  • the organiser prefers to use a tablet computer. Once the zoom into the summary is chosen using the smartphone, the organiser is able to stream video content of the event to the tablet computer.
  • the layering approach also facilitates an automatic edit of the summary depending on the amount of data a user can receive. For example, if a user is accessing the summary using a smartphone connected to a cellular data network, a short version of the summary containing only highlights with hyperlinks to further content is transmitted, since, for example, if the cellular data network is a 3G network, data transfer is relatively slow and the user may prefer not to receive and download a high volume of data.
  • summary information in text form, for example the occurrence of a certain event or the appearance of a certain person, may be transmitted to a mobile device of a user, in the form of a short message (such as SMS, MMS or text) and/or making use of push functionality for notification.
  • the type of information provided to the user in this manner may be determined by a user or sent according to pre-determined criteria. However, if a user is accessing the summary via a local area network (Wi-Fi) or other data connection, a more detailed summary may be transmitted.
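A minimal sketch of how the connection-dependent edit might be selected; the tier names and the mapping below are assumptions used only to illustrate the layered-summary idea.

```python
def summary_detail_for_connection(connection_type):
    """Pick how much of the layered summary to transmit, based on the connection.

    Illustrative tiers: cellular connections get a short highlights version with
    hyperlinks (or a push/SMS text notification); Wi-Fi or wired connections get
    the more detailed summary.
    """
    if connection_type in ("3g", "cellular"):
        return {"video": "highlights_only", "links": True, "text_notification": True}
    if connection_type in ("wifi", "lan"):
        return {"video": "full_summary", "links": True, "text_notification": False}
    return {"video": "text_only", "links": True, "text_notification": True}

print(summary_detail_for_connection("3g"))
```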
  • the episode capture device may be pre-programmed with information specific to the room in which it is located. Alternatively a user may notify the camera of its location once it has been placed within a room.
  • Suitable episode capture devices include digital cameras, digital video cameras, cameras within smartphones, tablet computers, laptops or other mobile devices, webcams, and similar. Such cameras should be adapted to communicate data via a network to a client computer, software program, an app on a mobile device or, in general, to a suitable storage device, wherein such storage devices may include additional processing capacities for subsequent image processing. Cameras may be dedicated devices or multipurpose, that is, with no fixed designation with regard to monitoring a scene for events.
  • the episode capture device comprises a processor able to access a software module configured to perform the method outlined above.
  • the software module is based on the determination of certain criteria, either predefined or selectable by a user, for the identification of certain events. Subsequently, for example, upon selection by the user, a summary comprising a summarising video sequence is created based on selected criteria, such as a certain event, optionally in combination with another constraint, for example, the maximum length of the summarising video sequence or a predetermined data volume. This results in a parameter-dependent automated video analysis method, in which significantly less video data has to be evaluated to determine if an event has occurred within a scene.
  • a method of providing a video summary from a camera comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant; recording in a memory of the camera a periodic image of at least one frame during periods of inactivity having no more than insignificant motion; identifying events from periods of activity having significant detected motion and creating event tags; recording in a memory of the camera video from the identified events and the event tags; and intermittently transmitting the images and video in the memory to a remote computing device using a transmitter in the camera.
  • the method of claim 1 wherein the periodic image during periods of inactivity comprises a video of between 1-10 seconds.
  • the method of claim 1 further comprising determining, by one of the processor in the camera and the remote computing device, the end of an event and the start of a new event based on the amount of time after movement stops. [0100] 5. The method of claim 1 further comprising determining, by one of the processor in the camera and the remote computing device, the end of an event and the start of a new event based on new motion in a different place.
  • the method of claim 1 further comprising determining, by one of the processor in the camera and the remote computing device, one of the end of an event and the start of a new event based on a change in one of the number of moving objects in the video and the number of people in the video.
  • the method of claim 1 further comprising creating, with the remote computing device, a summary video from multiple video events provided by the camera, comprising: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion, and allocating more time, at a slower time lapse, to videos with significant motion.
  • the method of claim 7 further comprising weighting the video events according to importance, and providing one of a slower time lapse and more time to higher weighted video events deemed more important.
  • [0106] 11. The method of claim 7 further comprising weighting the video events based on: an amount of inactivity before the video event; the duration of motion in the video event; the proximity of the motion in the video event to the center of the video event; the amount of difference between the motion in the video event and motion from previous video events; and the number of objects moving in the video event.
  • a method of providing a video summary from a camera comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant; recording in a memory of the camera a periodic image of at least one frame during periods of inactivity having no more than insignificant motion; identifying events from periods of activity having significant detected motion and creating event tags; recording in a memory of the camera video from the identified events and the event tags; intermittently transmitting the images and video in the memory to a remote computing device using a transmitter in the camera; creating, with the remote computing device, a summary video from multiple video events provided by the camera, comprising: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion, and allocating more time, at a slower time lapse, to videos with significant motion; providing the video events with contextual tags; and weighting the video events based on at least one of the number of people detected, the
  • a system for providing a video summary comprising: a camera having a processor configured to analyze pixels in video captured by the camera to detect motion in a video; the processor being configured to determine whether the motion is significant; a memory of the camera configured to record a periodic image of at least one frame during periods of inactivity having no more than insignificant motion; the processor being configured to identify events from periods of activity having significant detected motion and create event tags; the processor being further configured to record in the memory of the camera video from the identified events and the event tags; and a transmitter configured to intermittently transmit the images and video in the memory to a remote computing device.
  • the system of claim 13 further comprising one of the processor in the camera and the remote computing device being configured to determine the end of an event and the start of a new event based on new motion in a different place in the video.
  • the system of claim 13 further comprising one of the processor in the camera and the remote computing device being configured to determine one of the end of an event and the start of a new event based on a change in one of the number of moving objects in the video and the number of people in the video.
  • the remote computing device is further configured to create a summary video from multiple video events provided by the camera, comprising: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion; and allocating more time, at a slower time lapse, to videos with significant motion. [0115] 20.
  • a method for determining the location of a camera comprising: capturing images at a camera at a location; transmitting the images from the camera to a remote server; holistically comparing images from the camera, at the server, to multiple stored images, from a database coupled to the server, corresponding to known locations; determining which stored images provide a best match; and determining a type of location of the camera from tags associated with the images providing a best match.
  • the method of claim 2 further comprising wherein the camera is determined to be in an indoor location, determining the type of room, wherein the type of room includes at least one of a conference room, a dining room, a kitchen, a living room, a bedroom, an office and a hallway.
  • the method of claim 1 further comprising: detecting substantial motion in the video above a threshold amount of motion; detecting at least one of an object and a person in the substantial motion in the video; holistically comparing images from the substantial motion to stored images corresponding to known different events; determining which stored images provide a best match; and determining a type of event from tags associated with the images providing a best match; and tagging the video with the type of event.
  • the method of claim 5 further comprising: detecting sounds from a microphone in the camera; comparing detected sounds with a stored database of sounds; determining at least one best match of sounds; comparing a tag associated with the best match of sounds to the tags associated with the images; and determining a type of event based on tags from the images and the sound.
  • a method for determining a type of event in video from a camera comprising: detecting substantial motion in the video above a threshold amount of motion; detecting at least one of an object and a person in the substantial motion in the video; holistically comparing images from the substantial motion to stored images corresponding to different events; determining which stored images provide a best match; and determining a type of event from tags associated with the images providing a best match; and tagging the video with the type of event.
  • the method of claim 7 further comprising: determining a type of location of the camera by: holistically comparing images from the camera to multiple stored images corresponding to known locations; determining which stored images provide a best match; and determining a type of location of the camera from tags associated with the images providing a best match; and using the type of location in determining the type of event.
  • a system for determining the location of a camera comprising: a camera configured to capture images at a location; a transmitter in the camera for transmitting the images from the camera to a remote server; a server configured to holistically compare images from the camera to multiple stored images corresponding to known locations; a database, coupled to the server, for storing the multiple stored images; the server being configured to determine which stored images provide a best match; and the server being configured to determine a type of location of the camera from tags associated with the images providing a best match.
  • [0126] 11. The system of claim 10 further comprising wherein the camera is determined to be in an indoor location, the server being configured to determine the type of room; wherein the type of room includes at least one of a conference room, a dining room, a kitchen, a living room, a bedroom, an office and a hallway.
  • the system of claim 9 further comprising: the server being configured to filter out a type of motion, the type of motion being dependent upon the determined type of location of the camera.
  • the system of claim 9 further comprising: the camera being configured to detect substantial motion in the video above a threshold amount of motion; the server being configured to detect at least one of an object and a person in the substantial motion in the video; the server being configured to holistically compare images from the substantial motion to stored images corresponding to known different events; the server being configured to determine which stored images provide a best match; the server being configured to determine a type of event from tags associated with the images providing a best match; and the server being configured to tag the video with the type of event. [0129] 14.
  • the system of claim 13 further comprising: a microphone in the camera for detecting sounds; the server being configured to compare detected sounds with a stored database of sounds; the server being configured to determine at least one best match of sounds; the server being configured to compare a tag associated with the best match of sounds to the tags associated with the images; and the server being configured to determine a type of event based on tags from the images and the sound.
  • the system of claim 14 further comprising: the server being configured to prompt a user to confirm the location and type of event. [0131] 16. The system of claim 14 further comprising: the server being configured to compare images and sounds to scenes previously recorded and stored for a particular user.
  • a method of searching video from a camera comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant, and filtering out video without significant motion; transmitting the video in the memory to a remote computing device using a transmitter in the camera; organizing the video into separate video events; creating, with the remote computing device, a plurality of summary videos from multiple video events provided by the camera; tagging each summary video with a plurality of tags corresponding to the events in the video summary; in response to search terms entered by a user, matching the search terms to the tags; and displaying indicators of video summaries with a best match to the search terms, ranked in order of best match.
  • creating a summary video comprises: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion, and allocating more time, at a slower time lapse, to videos with significant motion.
  • search terms include at least one of time, duration of video, people in the video, objects in the video and camera location.
  • the method of claim 1 further comprising: providing, with the search results, indications of videos without tags corresponding to the search terms, but that are proximate in time to videos with the tags.
  • the method of claim 1 further comprising: providing, with the search results, indications of videos without tags corresponding to the search terms, but with other tags that correspond to non-searched tags in the videos in the search results.
  • a method of searching video from a camera comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant, and filtering out video without significant motion; transmitting the video in the memory to a remote computing device using a transmitter in the camera; organizing the video into separate video events; tagging each video event with a plurality of tags corresponding to at least two of time, duration of video, people in the video, objects in the video and camera location; weighting each video event based on the significance of the tags; in response to search terms entered by a user, matching the search terms to the tags; and displaying indicators of video events with a best match to the search terms, ranked in order of best match and the weighting of the video events.
  • the method of claim 7 further comprising: creating, with the remote computing device, a plurality of summary videos from multiple video events provided by the camera; tagging each summary video with a plurality of tags corresponding to the events in the video summary; weighting each video summary based on the significance of the tags; in response to search terms entered by a user, matching the search terms to the tags; and displaying indicators of video summaries with a best match to the search terms, ranked in order of best match and the weighting of the video events.
  • the method of claim 7 further comprising: providing, with the search results, indications of videos without tags corresponding to the search terms, but that are one of proximate in time to videos with the tags and have other tags that correspond to non-searched tags in the videos in the search results.
  • a system for searching video from a camera comprising: a processor in the camera configured to detect motion; the processor further configured to determine whether the motion is significant, and filtering out video without significant motion; a memory in the camera for storing the video; a transmitter in the camera configured to transmit the video in the memory; a remote computing device configured to receive the transmitted video; the remote computing device being configured to organize the video into separate video events; the remote computing device being configured to tag each video event with a plurality of tags corresponding to at least two of time, duration of video, people in the video, objects in the video and camera location; the remote computing device being configured to weight each video event based on the significance of the tags; the remote computing device being configured to, in response to search terms entered by a user, match the search terms to the tags; and the remote computing device being configured to display indicators of video events with a best match to the search terms, ranked in order of best match and the weighting of the video events.
  • the system of claim 10 further comprising: the remote computing device being configured to create a plurality of summary videos from multiple video events provided by the camera; the remote computing device being configured to tag each summary video with a plurality of tags corresponding to the events in the video summary; the remote computing device being configured to weight each video summary based on the significance of the tags; the remote computing device being configured to, in response to search terms entered by a user, match the search terms to the tags; and the remote computing device being configured to display indicators of video summaries with a best match to the search terms, ranked in order of best match and the weighting of the video events.
  • the remote computing device is a smart phone, configured to communicate with the camera using a server over the Internet.
  • the remote computing device is further configured to create a summary video by: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion, and allocating more time, at a slower time lapse, to videos with significant motion.
  • search terms include at least one of time, duration of video, people in the video, objects in the video and camera location.
  • the system of claim 10 further comprising: the remote computing device being further configured to provide, with the search results, indications of videos without tags corresponding to the search terms, but that are proximate in time to videos with the tags.
  • the remote computing device is further configured to provide, with the search results, indications of videos without tags corresponding to the search terms, but with other tags that correspond to non-searched tags in the videos in the search results.

Abstract

In one embodiment of the present invention, a remote video camera intermittently transmits video clips, or video events, where motion is detected to a remote server. The remote server provides video summaries to an application on a user device, such as a smartphone. In one embodiment, the User Interface (UI) provides a live stream from the webcam, with markers on the side indicating the stored, detected important events (such as by using a series of bubbles indicating how long ago an event occurred).

Description

USER INTERFACE FOR VIDEO SUMMARIES
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] The present application is a PCT application of and claims priority to U.S. Application No. 14/853,965, entitled "User Interface for Video Summaries," filed September 14, 2015, issued as U.S. Patent No. 9,313,556 on April 12, 2016; U.S. Patent Application No. 14/853,943, entitled "Temporal Video Streaming and Summaries," filed September 14, 2015; U.S. Patent Application No. 14/853,980, entitled "Automatically Determining Camera Location and Determining Type of Scene," filed September 14, 2015; and U.S. Patent Application No. 14/853,989, entitled "Video Searching for Filtered and Tagged Motion," filed September 14, 2015, which are hereby incorporated by reference in their entirety.
BACKGROUND OF THE INVENTION
[0002 ] The invention generally relates to improvements in methods of automatic video editing, and more specifically to methods used in automatically creating summaries based on webcam video content, as determined by image analysis.
[0003] Devices such as video cameras and microphones are often used for monitoring an area or a room. Existing video editing and monitoring systems typically record events when motion is detected, and provide alerts to a user over the Internet. The user can then view just the stored portions of the monitored area when motion was detected. A summary can, for example, provide a series of still images from each video, to give the user a sense of whether the motion is worth viewing. For example, the user can see if a person is in the scene, or if the motion appears to have been a drape moving, a bird, etc.
[0004] Magisto Pub. No. 20150015735 describes capturing images, as opposed to editing, based on various factors, and detecting important objects and deciding whether to take a video or snapshot based on importance (e.g., whether someone is smiling). BriefCam has patents that describe detecting an amount of activity, or objects, moving in an image, and overlaying different object movements on the same image, as a mosaic. See, e.g., Pub. 2009-0219300 (refers to different sampling rates on the image acquisition side) and Pub. 2010-0092037 (refers to "adaptive fast-forward"). Pub. No. 20150189402 describes creating a video summary of just detected important events in a video, such as shots in a soccer match. See also Pub. No. 20050160457, which describes detecting baseball hits visually and from excited announcer sound.
[0005] Pub. No. 20100315497 is an example of systems capturing the images based on face recognition, with a target face profile. Object Video Pub. No. 20070002141 describes a video-based human verification system that processes video to verify a human presence, a non-human presence, and/or motion. See also Wells Fargo Alarm Services Pat. No.
6,069,655, Pub. No. 2004-0027242 also describes detecting humans, and other objects. "Examples include vehicles, animals, plant growth (e.g., a system that detects when it is time to trim hedges), falling objects (e.g., a system that detects when a recyclable can is dropped into a garbage chute), and microscopic entities (e.g., a system that detects when a microbe has permeated a cell wall)."
[0006] Pub. No. 20120308077 describes determining a location of an image by comparing it to images from tagged locations on a social networking site. Pub. No. 201 0285842 describes determining a location for a vehicle navigation system by using landmark recognition, such as a sign, or a bridge, tunnel, tower, pole, building, or other structure.
[0007] Sony Pub. No. 2008-0018737 describes filtering images based on
appearance/disappearance of an object, an object passing a boundary line, a number of objects exceeding a capacity, an object loitering longer than a predetermined time, etc.
[0008] Object Video Pub. No. 2008-0100704 describes object recognition for a variety of purposes. It describes detecting certain types of movement (climbing fence, move in wrong direction), monitoring assets (e.g., for removal from a museum, or, for example: detecting if a single person takes a suspiciously large number of a given item in a retail store), detecting if a person slips and falls, detecting if a vehicle parks in a no parking area, etc.
[0009] Pub. No. 2005-0168574 describes "passback" [e.g., entering through airport exit] detection. There is automatic learning of a normal direction of motion in the video monitored area, which may be learned as a function of time, and be different for different time periods. "The analysis system 3 may then automatically change the passback direction based on the time of day, the day of the week, and/or relative time (e.g., beginning of a sporting event, and ending of sporting event). The learned passback directions and times may be displayed for the user, who may verify and/or modify them." [0010] Logitech Pat. 6995794 describes image processing split between a camera and host (color processing and scaling moved to the host). Intel Pat. 6,803,945 describes motion detection processing in a webcam to upload only "interesting" pictures, in particular a threshold amount of motion (threshold number of pixels changing).
[0011] Yahoo! Pub. No. 20140355907 is an example of examining image and video content to identify features to tag for subsequent searching. Examples of objects recognized include facial recognition, facial features (smile, frown, etc.), object recognition (e.g., cars, bicycles, group of individuals), and scene recognition (beach, mountain). See paragraphs 0067-0076. See also Disney Enterprises Pub. No. 20100082585, paragraph 0034.
BRIEF SUMMARY OF THE INVENTION
[0012] In one embodiment of the present invention, a remote video camera intermittently transmits video clips, or video events, where motion is detected to a remote server. The remote server provides video summaries to an application on a user device, such as a smartphone.
(A) USER INTERFACE FOR VIDEO SUMMARIES
[0013] In one embodiment, the User Interface (UI) provides a live stream from the webcam, with markers on the side indicating the stored, detected important events (such as by using a series of bubbles indicating how long ago an event occurred). The indicators are marked to indicate the relative importance, such as with color coding. Upon selection of an indicator by the user, the time-lapse summary is displayed, along with a time of day indication. Alternately, the user can select to have a time-lapse display of all the events in sequence, using a more condensed time lapse, with less important events having less time or being left out.
[0014] In another embodiment, the UI, upon the application being launched, provides a video summary of content since the last launch of the application. The user can scroll through the video at a hyper-lapse speed, and then select a portion for a normal time lapse, or normal time view.
(B) TEMPORAL VIDEO STREAMING AND SUMMARIES
[0015] In one embodiment of the present invention, a video camera selectively streams to a remote server. Still images or short video events are intermittently transmitted when there is no significant motion detected. When significant motion is detected, video is streamed to the remote server. The images and video can be higher resolution than the bandwidth used, by locally buffering the images and video, and transmitting it at a lower frame rate that extends to when there is no live streaming. This provides a time-delayed stream, but with more resolution at lower bandwidth.
[0016] Embodiments of the present invention are directed to automatically editing videos from a remote camera using artificial intelligence to focus on important events. In one embodiment, multiple videos/images over a period of time (e.g., a day) are condensed into a short summary video (e.g., 30 seconds). Image recognition techniques are used to identify important events (e.g., the presence of people), for which a time lapse video is generated, while less important events and lack of activity are provided with a much greater time interval for the time-lapse. This creates a weighted video summary with different time-lapse speeds that focuses on important events. The characteristics of events are logged into an event log, and this event log is used to generate the summary. Each event may be assigned a contextual tag such that events may be summarized easily.
(C) AUTOMATICALLY DETERMINING CAMERA LOCATION AND
DETERMINING TYPE OF SCENE
[0017] In one embodiment, image recognition is used to determine the type of location where the camera is mounted, such as indoors or outdoors, in a conference room or in a dining room. A filter for selecting the types of events for a summary has parameters varied depending on the type of location. For example, an indoor location may tag events where humans are detected, and ignore animals (pets). An outdoor location can have the parameters set to detect both human and animal movement.
[0018] Determining the type of scene in one embodiment involves determining the relevance of detected events, in particular motion. At a basic level, it involves the elimination of minimal motion, or non-significant motion (curtains moving, a fan moving, shadows gradually moving with the sun during the day, etc.). At a higher level, it involves grouping "meaningful" things together, for scenes such as breakfast, kids having a pillow fight, etc. Some primary cues for determining when a scene, or activity, begins and ends include the amount of time after movement stops (indicating the end of a scene), continuous movement for a long period of time (indicating part of the same scene), new motion in a different place (indicating a new scene), and a change in the number of objects, or a person leaving, or a new person entering.
(D) VIDEO SEARCHING FOR FILTERED AND TAGGED MOTION
[0019] In one embodiment, captured video summaries are tagged with metadata so the videos can be easily searched. The videos are classified into different scenes, depending on the type of action in the video, so searching can be based on the type of scene. In one embodiment, tags are provided for moving objects or people. The type of object that is moving is tagged (car, ball, person, pet, etc.). Video search results are ranked based on the weighting of the video events or video summaries. The video event weighting provides a score for a video event based on weights assigned to tags for the event. For example, high weights are assigned to a time duration tag that is a long time, a motion tag indicating a lot of motion, or centered motion, a person tag based on a close relationship to the user, etc. The video summary weighting focuses on important events, with multiple videos/images over a period of time condensed into a short summary video. This creates a weighted video summary with different time-lapse speeds that focuses on important events.
[0020 ] In one embodiment, a processor in a camera does the initial filtering of video, at least based on the presence of significant motion. The creation of video events and summaries is done by a server from video transmitted by the camera over the Internet. A smart phone, with a downloaded application, provides the display and user interface for the searching, which is done in cooperation with the server.
[0021] In one embodiment, the search results provide videos that don't have tags matching the search terms, but are proximate in time. For example, a search for "birthday" may return video summaries or video events that don't include birthday, but include the birthday boy on the same day. Alternately, other tags in the videos forming the search results may be used to provide similar video events. For example, a search for "pool parties" may return, below the main search results, other videos with people in the pool parties found.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] FIG. 1 is a block diagram of a camera used in an embodiment of the invention.
[0023] FIG. 2 is a block diagram of a cloud-based system used in an embodiment of the invention.
[0024] FIG. 3 is a flowchart illustrating the basic steps performed in the camera and the server according to an embodiment of the invention.
[0025] FIG. 4 is a diagram illustrating the transition to different user interface display camera views according to an embodiment of the invention.
[0026] FIG. 5 is a diagram illustrating the transition to different user interface display menus according to an embodiment of the invention.
[0027] FIG. 6 is a diagram illustrating a split user interface display for multiple webcams according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
Camera diagram.
[0028] FIG. 1 is a block diagram of a camera used in an embodiment of the invention. A camera 100 has an image sensor 102 which provides images to a memory 104 under control of microprocessor 106, operating under a program in a program memory 107. A microphone 110 is provided to detect sounds, and a speaker 112 is provided to allow remote communication. A transceiver 108 provides a wireless connection to the Internet, either directly or through a Local Area Network or router. A battery 114 provides power to the camera.
System diagram.
[0029] FIG. 2 is a block diagram of a cloud-based system used in an embodiment of the invention. Camera 100 connects wirelessly through the Internet 202 to a remote server 204. Server 204 communicates wirelessly with a smart phone 206, or other user computing device. Camera 100 can also connect locally to smart phone 206, or to a local computer 208. The local computer can do some of the image processing, such as advanced motion detection and object recognition and tagging, and can return the processed video and tags to camera 100 for subsequent transmission to server 204, or local computer 208 could directly transmit to server 204, such as when camera 100 is in a low power, battery mode.
Flowchart of operation.
[0030] FIG. 3 is a flowchart illustrating the basic steps performed in the camera and the server according to an embodiment of the invention. The steps above dotted line 300 are performed in the camera 100, while the steps below the dotted line are performed in the server 204. When there is no significant motion detected, the camera periodically captures a short video (e.g., 4 seconds) or a still image, such as every 8 minutes (302). The captured short video is buffered and tagged. Such camera tags include at least the time and date and the lack of motion.
[0031] The camera is programmed to detect motion (step 304) from image analysis. If the amount of motion, such as the number of pixels changing, is less than a predetermined amount (306), the video of the motion is discarded (308). If the amount of motion is greater than the threshold, it is determined whether the motion lasts for more than a predetermined amount of time (310). If the motion time is less than the predetermined time, it is discarded (308). If the motion lasts for more than the predetermined time, it is sent to a buffer and tagged with metadata (314). Such camera metadata tags include the time and date, the length of the video, and the amount of motion.
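A minimal sketch of this two-threshold test (a pixel-change threshold, then persistence over time); the threshold values and helper names are placeholders, not the camera's actual firmware logic, and the step numbers in the comments refer to the flowchart described above.

```python
def changed_pixel_count(prev_frame, frame, pixel_delta=25):
    """Count pixels whose grayscale value changed by more than pixel_delta.

    Frames are plain 2-D lists here; real firmware would work on sensor frames
    and likely denoise the difference first.
    """
    return sum(
        1
        for prev_row, row in zip(prev_frame, frame)
        for a, b in zip(prev_row, row)
        if abs(a - b) > pixel_delta
    )

def is_significant_motion(frame_pairs, min_pixels=500, min_seconds=2.0, fps=10.0):
    """True if enough pixels change (306) for a long enough stretch of frames (310)."""
    consecutive = 0
    for prev_frame, frame in frame_pairs:
        if changed_pixel_count(prev_frame, frame) >= min_pixels:
            consecutive += 1
            if consecutive / fps >= min_seconds:
                return True   # keep: buffer and tag the video (314)
        else:
            consecutive = 0
    return False              # discard: too little motion, or motion too brief (308)
```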
[0032] In one embodiment, more advanced motion detection and object recognition can be done on the camera (315), or in a local computer. The combined video events are then streamed wirelessly to the remote server (312). The images and video can be higher resolution than the bandwidth used for streaming. By locally buffering the images and video, it can be streamed with a delay, and transmitted at a lower frame rate. Thus, for example, there may be 15 video events of no motion, that are 4 seconds each, and a 5 minute motion video. These can be buffered, and streamed over 20 minutes, for example. This provides a time-delayed stream, but with more resolution at lower bandwidth.
[0033] The remote server tags the received still images as having no motion. The remote server filters (316) the received video. The filtering is designed to eliminate video motion that is not of interest. For example, algorithms process the video to determine the type of motion. If the motion is a curtain moving, a moving shadow of a tree on a window, a fan in the room, etc., it can be filtered out and discarded.
[0034] A location detector 318 can be used to process the image to determine the type of location of the camera. In particular, is it inside or outside, is it in a dining room or a conference room, etc. Artificial intelligence can be applied to determine the location. For example, instead of a complex object recognition approach, a holistic review of the image is done. The image is provided to a neural network or other learning application. The application also has access to a database of stored images tagged as particular locations. For example, a wide variety of stored images of kitchens, dining rooms and bedrooms are provided. Those images are compared to the captured video or image, and a match is done to determine the location. Alternately, a user interface can allow a user to tag the type of location. The user interface can provide the user with the presumed location, which the user can correct, if necessary, or further tag (e.g., daughter's bedroom). One example of a holistic image review process is set forth in "Modeling the shape of the scene: a holistic
representation of the spatial envelope," Aude Oliva, Antonio Torralba, International Journal of Computer Vision, Vol. 42(3): 145-175, 2001.
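A minimal nearest-neighbour sketch of the holistic matching step. The descriptors stand in for a holistic (GIST-like) representation such as the one cited above, and the example descriptors, tags, and function names are invented for illustration only.

```python
import math

def classify_location(query_descriptor, labeled_examples):
    """Return the location tag of the nearest stored example descriptor.

    labeled_examples: list of (descriptor, tag) pairs derived from a database of
    images tagged as kitchens, bedrooms, conference rooms, and so on.
    """
    def distance(descriptor):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query_descriptor, descriptor)))
    _, best_tag = min(labeled_examples, key=lambda example: distance(example[0]))
    return best_tag

# Toy 3-dimensional descriptors standing in for a holistic scene representation.
examples = [([0.9, 0.1, 0.3], "kitchen"), ([0.2, 0.8, 0.5], "conference room")]
print(classify_location([0.85, 0.15, 0.4], examples))  # "kitchen"
```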
[0035] In addition to determining a location, a more specific determination of a "scene" is done. For example, the location may be a bedroom, while the scene is a sleeping baby. In one embodiment, the user is prompted to label the scene (e.g., as sleeping baby). Alternately, there can be automatic detection of the scene using a neural network or similar application, with comparisons to images of particular scenes, and also comparisons to previously stored images and videos labelled by the user. In addition, various cues are used in one embodiment to determine the type of scene. For example, for a "sleeping baby," the video may be matched to a baby in bed scene from examination of the video. This is combined with other cues, such as the time of day indicating night time, the camera being in night mode, a microphone detecting sounds associated with sleeping, etc. Similarly, a birthday party can be detected holistically using different cues, including the comparison to birthday party images, motion indicating many individuals, singing (e.g., the song "Happy Birthday"), etc. In one embodiment, previous scenes for a user are stored, and used for the comparison. For example, a previous scene may be for "breakfast," after having the user prompted to confirm. By using similar scenes from the same location for the same user, the accuracy of
identification can be improved over time.
[0036] Once the location type is determined, the filtering parameters can be provided to filtering block 316. In general, the location/scene would set some priorities about what is expected and what, in that particular situation, is more relevant/interesting to the user. What is interesting in one scene might not be interesting in another scene. For example, if the location is a living room, there would be suppression of constant motion at a particular spot which quite likely might be due to a TV or a fan. For an outdoor location, much more motion is expected due to wind or other weather conditions. Hence the parameters of the video processing (e.g., thresholds) are adapted in order to suppress such motions (moving leaves, etc.). Also, regular motion patterns in an outdoor setting are suppressed in one embodiment (e.g., cars passing by on the street). In contrast, if the setting is a conference room and the scene is a meeting, spotting small motion is relevant to show people sitting together and discussing, but not moving much. In another example, where the scene is a sleeping baby, a different filtering is provided, to capture small movements of the baby, and not filter them out. For example, it is desirable to confirm that the baby is breathing or moving slightly.
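The location-dependent parameters might be organised as simple presets, as in the sketch below; the preset names and threshold values are assumptions used only to illustrate the idea of adapting the filter to the detected location or scene.

```python
# Illustrative per-location filtering presets; the names and values are assumptions.
FILTER_PRESETS = {
    "living_room":  {"min_changed_pixels": 800, "suppress_static_spot_motion": True},
    "outdoor":      {"min_changed_pixels": 2000, "suppress_regular_patterns": True},
    "conference":   {"min_changed_pixels": 200, "suppress_static_spot_motion": False},
    "baby_bedroom": {"min_changed_pixels": 50, "suppress_static_spot_motion": False},
}

def filter_parameters(location_type):
    """Return motion-filter parameters for a detected location, with a generic default."""
    return FILTER_PRESETS.get(location_type, {"min_changed_pixels": 500})

print(filter_parameters("living_room"))
```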
[0037] Once extraneous motion is eliminated, the program determines if a human or animal is present (320). The particular human can be identified using facial recognition (322). The user can tag various individuals to initialize this process. Certain animals can be identified the same way, such as by the user providing a photo of the family pet, or tagging the pet in a video captured.
[0038] Video that passes through the filtering, and has a human or animal identified, is then tagged (324) with context data. The tag, or metadata, includes the identity of the persons or animals, the time of day, the duration of the video, etc. In one embodiment, there is extraction of other meta-data which is helpful for further learning and personalization.
Examples include the "colorfulness," the amount of motion, the direction/ position where motion appears, the internal state of the camera (e.g. if it is in night vision mode), the number of objects, etc. Most of this data is not accessible by the user. However, this (anonymous) data provides a foundation for gathering user-feedback and personalization.
[0039] In one embodiment, supervised personalization is provided (user directed, or with user input). This personalization is done using various user input devices, such as sliders and switches or buttons in the application, as well as user feedback. Unsupervised
personalization is provided in another embodiment, where the application determines how to personalize for a particular user without user input (which is supplemented with actual user input, and/or corrections). Examples of unsupervised personalization include using statistics of the scene and implicit user feedback. The use of cues to determine if there is a sleeping baby, as discussed above, is an example of unsupervised personalization.
[0040] Various types of user feedback can be used to assist or improve the process. For example, the user can be prompted to confirm that a "sleeping baby" has been correctly identified, and if not, the user can input a correct description. That description is then used to update the data for future characterizations.
[0041] A summary of a day or other period of time (e.g., since the last application launch) is then generated (326) using the still images and video. The summary is then condensed (328) to fit into a short time clip, such as 30 seconds. This condensing can reduce the number of still images used (such as where there is a long sequence without motion), and can also reduce, or fast forward the video at different rates, depending on the determined importance.
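A minimal sketch of the condensing step, allocating a fixed 30 second budget across events in proportion to their determined importance. The proportional-allocation rule and the example weights are assumptions, not the specification's exact condensing algorithm.

```python
def allocate_summary_time(events, total_seconds=30.0, min_seconds=1.0):
    """Split a fixed-length summary among events in proportion to their weights.

    events: list of (event_id, weight) pairs. Low-weight events end up near
    min_seconds (a faster time lapse); high-weight events get more of the
    budget (a slower time lapse). A real condenser might drop events entirely.
    """
    if not events:
        return {}
    total_weight = sum(weight for _, weight in events) or 1.0
    allocation = {event_id: max(min_seconds, total_seconds * weight / total_weight)
                  for event_id, weight in events}
    scale = total_seconds / sum(allocation.values())  # re-fit if minimums overshoot
    return {event_id: round(seconds * scale, 2) for event_id, seconds in allocation.items()}

print(allocate_summary_time([("breakfast", 20), ("cat_on_sofa", 3), ("mail_delivery", 1)]))
```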
User Interface with day summary; bubble icons.
[0042] FIG. 4 is a diagram illustrating the transition to different user interface display camera views according to an embodiment of the invention. A display 402 provides a live video stream (at a lower resolution than the time delayed summaries). In one embodiment, when the user activates the application on the smart phone or other user computing device, a signal is relayed through the server to the webcam to start the webcam streaming images. This provides the live view shown. Certain data is overlaid on the display at position 404. In the example shown, that data is an indication of the location or other label given to the webcam (living room), an indication that it is a live streaming view (live), and a clock indicating the current time.
[0043] When the user taps on the screen (406), the display transitions to a view 408 which includes a series 410 of bubble indicators for stored video scenes. View 408 also provides a series of icons 412. Icon 414 is for sharing the video summary with others, icon 416 is for storing the video to a gallery, and icon 418 is for activating a speaker to talk to whomever is in the room with the webcam, like a walkie-talkie push to talk function.
[0044] The series of bubble icons 410 includes a larger bubble 420 indicating "live view." Icon 420 corresponds to what is currently being displayed, and is enlarged to show which view is selected. Icons 422 and 424 indicate videos captured for important motion detection events, with the numbers in the bubbles indicating how long ago the video was captured (e.g., 2 minutes and 37 minutes in the example shown). Alternately, the bubbles can have a timestamp. The color of bubbles 422 and 424 indicates the determined importance of the event captured. If the user were to select, for example, bubble 422, that bubble would be locked in and increase in size, while moving to the middle of the series. A still image from that event would be displayed as the user is scrolling through the bubbles, and the video starts to play once the event is locked in, or the user activates a play button. Bubble 426 is a "day brief" which will display the condensed summary of the day, from step 328 in FIG. 3. In one embodiment, images or icons can provide more information about the scene indicated by a bubble, such as an image of a dog or cat to indicate a scene involving the family pet, or a picture or name tag of a person or persons in the scene.
[0045] When the user swipes the timeline (428) on display 408, the series of bubbles moves as indicated in view 430. As shown, the bubbles have moved downward, with the 37 minute bubble 424 about to disappear, and a 1 hr. bubble 432 currently enlarged. A semicircle 434 indicates the actual view being displayed is the live view. Alternately, as each bubble becomes enlarged, upon reaching the middle of the side of the screen, a still image from that video is displayed. Thus, a still image from the motion 1 hour ago would be displayed for button 432. When the user releases his/her finger, the video for that event 1 hour ago would begin to play. In one embodiment, certain tags could be displayed along with the still, preview image. For example, the names of persons in the event, as determined by facial recognition, could be displayed. Additionally, the event could be categorized based on time and object recognition (e.g., breakfast), or interaction with a calendar (e.g., client X meeting).
[0046] Display 440 shows the "day brief" bubble 426 after being selected (with the play icon eliminated). The video is then played, with a pause icon 442 provided. A timeline 444 is provided to show progress through the day brief.
GUI menus.
[0047] FIG. 5 is a diagram illustrating the transition to different user interface display menus according to an embodiment of the invention. A display 502 is activated by swiping to the right from the left side of the screen. This pulls up 3 menu icons 504, 506 and 508. Tapping icon 504 brings up device menu screen 510. Tapping icon 506 brings up notifications menu 512. Tapping icon 508 brings up account menu 514.
[0048] On display 510 are a variety of icons for controlling the device (webcam). Icon 516 is used to turn the webcam on/off. Icon 518 is used to add or remove webcams. On display 512, icon 520 allows activation of pushing notifications to the smart phone, such as with a text message or simply providing a notification for an email. Icon 522 provides for email notification. Display 514 provides different account options, such as changing the password, and upgrade to cloud (obtaining cloud storage and other advanced features).
Multiple cameras, split view display
[0049] FIG. 6 is a diagram illustrating a split user interface display for multiple webcams according to an embodiment of the invention. Display 602 is the main, large display showing the living room webcam. Display 604 shows a play room webcam and display 606 shows a study webcam. In one embodiment, the display of FIG. 6 is the default display provided when the application is launched. In one embodiment, a primary display provides streaming video, while the other displays provide a still image. Alternately, all displays can provide streaming video. The primary display can be the first camera connected, or a camera designated by the user.
User Interface with initial launch of summary since last activity
[0050] In another embodiment, the UI, upon the application being launched, provides a video summary of content since the last launch of the application. The user can scroll through the video at a hyper-lapse speed, and then select a portion for a normal time lapse, or normal time view. The user can also switch to real-time live streaming, at a lower resolution than the time-delayed summaries. The summaries are continually updated and weighted. For example, a summary may contain 8 events with motion after 4 hours. When additional events are detected, they may be weighted higher, and some of the original 8 events may be eliminated to make room for the higher weighted events. Alternately, some of the original, lower- weighted events may be given a smaller portion of the summary, such as 2 seconds instead of 5 seconds. In one embodiment, the user can access a more detailed summary, or a second tier summary of events left out, or a longer summary of lower- weighted events.
Scene Intuition.
[0051] Scene intuition is determining the relevance of detected events, in particular motion. At a basic level, it involves the elimination of minimal motion, or non-significant motion (curtains moving, a fan moving, shadows gradually moving with the sun during the day, etc.). At a higher level, as discussed in more detail in examples below, it involves determining the camera location from objects detected (indoor or outdoor, kitchen or conference room). An activity can be detected from people or pets detected. A new scene may be tagged if a new person enters or someone leaves, or alternately if an entirely different group of people is detected. Different detected events can be assigned different event bubbles in the UI example above.
[0052] The assignment of video to different summaries, represented by the bubbles, involves grouping "meaningful" things together. For example, different activities have different lengths. Eating breakfast might be a rather long one, while entering a room might be short. In one embodiment, the application captures interesting moments which people would like to remember/save/share (e.g. kids having a pillow fight, etc.). Primary cues for determining when a scene, or activity, begins and ends include the amount of time after movement stops (indicating the end of a scene), continuous movement for a long period of time (indicating part of the same scene), new motion in a different place (indicating a new scene), and a change in the number of objects, or a person leaving, or a new person entering.
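A minimal sketch of how these cues could be combined into a scene-boundary decision; the argument names and the 120 second idle threshold are illustrative assumptions.

```python
def is_new_scene(seconds_since_motion, same_location, people_now, people_before,
                 idle_threshold=120):
    """Decide whether the current activity should be treated as a new scene."""
    if seconds_since_motion >= idle_threshold:
        return True    # movement stopped long enough: the previous scene has ended
    if not same_location:
        return True    # new motion in a different place indicates a new scene
    if people_now != people_before:
        return True    # someone entered or left, or the number of objects changed
    return False       # continuous movement: still part of the same scene

print(is_new_scene(seconds_since_motion=10, same_location=True,
                   people_now=2, people_before=2))  # False
```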
Search.
[0053] By providing tags, or metadata, the videos can be easily searched. By classifying videos into different scenes, searching can be based on the type of scene. The searching can also be based on time, duration of clips, people in the video, particular objects detected, particular camera location, etc. In one embodiment, the application generates default search options based on matching detected content with possible search terms. Those possible search terms can be input by the user, or can be obtained by interaction with other
applications and data of the user. For example, the user may have tagged the names of family members, friends or work associates in a social media or other application, with images corresponding to the tags. The present application can then compare those tagged images to faces in the videos to determine if there is a match, and apply the known name. The default search terms would then include, for example, all the people tagged in the videos for the time period being searched.
In one embodiment, tags are provided with later searching in mind. Tags are provided for the typical things a user would likely want to search for. One example is taking the names of people and pets. Another example is tagging moving objects or people. The type of object that is moving is tagged (car, ball, person, pet, etc.). In one embodiment, while a holistic approach is used rather than object detection for determining a scene, object detection is used for moving objects. Other tags include the age of people, the mood (happy - smiles, laughing detected, or sad - frowns, furrowed brows detected).
[0055] In one embodiment, video search results are ranked based on the weighting of the video summaries, as discussed below and elsewhere in this application. Where multiple search terms are used, the results with the highest weighting on the first search term are presented first in one embodiment. In another embodiment, the first term weighting is used to prioritize the results within groups of videos falling within a highest weighting range, a second highest weighting range, etc.
[0056] In one embodiment, video search results also include events related to the searched term. For example, a search for "Mitch Birthday" will return video events tagged with both "Mitch" and "Birthday." In addition, below those search results, other video events on the same date, tagged "Mitch," but not tagged "Birthday," would also be shown. The "Birthday" tag may be applied to video clips including a birthday cake, presents, and guests. But other video events the same day may be of interest to the user, showing Mitch doing other things on his birthday.
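A minimal sketch of ranking search results by tag matches, with the event weighting as a tie-breaker; the data layout and field names are assumptions for illustration, and the description above also allows grouping by weighting ranges instead.

```python
def rank_search_results(videos, search_terms):
    """Rank video events by number of matched search terms, then by event weight.

    videos: list of dicts with 'id', 'tags' (set of strings) and 'weight'.
    """
    terms = {term.lower() for term in search_terms}
    def sort_key(video):
        matches = len(terms & {tag.lower() for tag in video["tags"]})
        return (matches, video["weight"])
    return [video["id"] for video in sorted(videos, key=sort_key, reverse=True)]

videos = [
    {"id": "clip1", "tags": {"Mitch", "Birthday"}, "weight": 9},
    {"id": "clip2", "tags": {"Mitch"}, "weight": 7},
    {"id": "clip3", "tags": {"pool"}, "weight": 8},
]
print(rank_search_results(videos, ["mitch", "birthday"]))  # clip1, clip2, clip3
```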
Temporal (time delayed) streaming.
[0057] As described above, video and images can be captured at high resolution, buffered, and then streamed over a longer period of time. This is possible since there is not constant live streaming, but only streaming of periodic no motion clips, and intermittent motion clips. For example, images can be captured at 2-3 megabytes, but then streamed at a bandwidth that would handle 500 kilobits live streaming. In one embodiment, the image data is stored in the camera memory, transcoded and transmitted.
[0058] When the video summaries are subsequently viewed by the user, they can be streamed at high bandwidth, since they are only short summaries. Alternately, they can also be buffered in the user's smart phone, in a reverse process, with an additional time delay. Alternately, the video can be delivered at low resolution, followed by high resolution to provide more detail where the user slows down the time lapse to view in normal time, or to view individual images.
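The bandwidth arithmetic behind this time-delayed streaming can be sketched as follows; the clip sizes in the example are hypothetical, chosen only to echo the 15 no-motion clips plus one motion clip mentioned earlier.

```python
def upload_minutes(payload_megabytes, link_kilobits_per_second=500):
    """Minutes needed to trickle buffered clips over a modest uplink."""
    payload_kilobits = payload_megabytes * 8 * 1024
    return payload_kilobits / link_kilobits_per_second / 60

# Fifteen 4-second no-motion clips (~2 MB each) plus one 5 minute motion clip (~40 MB):
# roughly the "streamed over 20 minutes" example given earlier.
print(round(upload_minutes(15 * 2 + 40), 1), "minutes")
```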
Split of processing between local camera and remote server
[0059] In one embodiment, a webcam provides a coarse filtering and basic processing of video, which is transmitted to the "cloud" (a remote server over the Internet) for further processing and storing of the time-lapse video sequences. More processing can be done on the local camera to avoid cloud processing, while taking advantage of larger cloud storage capability. A user can access the stored video, and also activate a live stream from the webcam, using an application on a smartphone.
[0060] In one embodiment, the local camera detects not only motion, but the direction of the motion (e.g., left to right, into room or out of room). The origin of the motion can also be determined locally (from the door, window, chair, etc.). In addition, the local camera, or a local computer or other device in communication with the camera, such as over a LAN, can do some processing. For example, shape recognition and object or facial recognition and comparison to already tagged images in other user applications (e.g., Facebook) could be done locally. In one embodiment, all of the processing may be done locally, with access provided through the cloud (Internet).
[0061] In one embodiment, the processing that is done on the camera is the processing that requires the higher resolution, denser images. This includes motion detection and some types of filtering (such as determining which images to perform motion detection on). Other functions, such as location detection, can be done on lower resolution images and video that are sent to the cloud.
Low power, battery mode.
[0062] In one embodiment, the camera can be plugged into line power, either directly or through a stand or another device, or it can operate on battery power. Thus, the camera has a high power (line power) mode, and a low power (battery) mode. In the battery mode, power is conserved through a combination of techniques. The number of frames analyzed for motion is reduced, such as every 5th frame instead of the normal every 3rd frame. Also, only basic motion detection is performed in the camera, with more complex motion recognition and object detection done by a processor in the remote server, or a local computer. The camera is put into a sleep mode when there is no motion, and is woken periodically (e.g., every 8 minutes) to capture a short video or image. Those videos/images can be stored locally, and only transmitted when there is also motion video to transmit, at some longer period of time, or upon request, such as upon application launch. In one embodiment, in sleep mode everything is turned off except the parts of the processor needed for a timer and waking up the processor. The camera is woken from sleep mode periodically, and the image sensor and memory are activated. The transmitter and other circuitry not needed to capture and process an image remain asleep. An image or video event is detected. The image or video event is compared to a last recorded image or video event. If there is no significant motion, the camera is returned to the sleep mode.
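A minimal sketch of the battery-mode duty cycle just described. The simulated camera class, the random frame values and the motion threshold are placeholders standing in for actual firmware behavior; only the wake period is taken from the text.

```python
import random

WAKE_PERIOD_S = 8 * 60   # wake roughly every 8 minutes (value from the text)

class SimulatedCamera:
    """Stand-in for the camera firmware; real hardware calls are assumed."""
    def sleep(self, seconds):
        pass                              # timer-only sleep (simulated as a no-op)
    def capture_frame(self):
        return random.random()            # image sensor and memory woken; transmitter stays off
    def record_and_store_clip(self):
        print("motion: short clip stored locally for later transmission")

def significant_motion(frame, last_frame, threshold=0.5):
    """Placeholder comparison of the new frame against the last recorded one."""
    return last_frame is not None and abs(frame - last_frame) > threshold

def battery_mode_cycle(camera, cycles=5):
    last_frame = None
    for _ in range(cycles):
        camera.sleep(WAKE_PERIOD_S)
        frame = camera.capture_frame()
        if significant_motion(frame, last_frame):
            camera.record_and_store_clip()
        last_frame = frame                # otherwise, return directly to sleep

battery_mode_cycle(SimulatedCamera())
```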
Tags.
[0063] In one embodiment, tags are included for each frame of data. Alternately, tags may be applied to a group of frames, or some tags may be for each frame, with other tags for a group of frames. As described above, minimum tags include a time stamp and an indication of motion present, along with the amount of motion. Additional tags, illustrated in the sketch following this list, include:
Object identification
Person identification
Camera location
Speed of motion
Direction of motion
Location of motion (e.g., a person entering the room)
Type of motion (e.g., walking, running, cooking, playing, etc.).
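A minimal sketch of how the per-frame and per-event tags listed above might be carried alongside the video data; the class and field names are illustrative assumptions, not part of the described format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class FrameTags:
    timestamp: datetime                        # minimum tag: time stamp
    motion_present: bool                       # minimum tag: indication of motion
    motion_amount: float = 0.0                 # amount of motion
    motion_speed: Optional[float] = None
    motion_direction: Optional[str] = None     # e.g. "left-to-right"
    motion_location: Optional[str] = None      # e.g. "doorway"

@dataclass
class EventTags:
    camera_location: Optional[str] = None      # e.g. "kitchen"
    objects: List[str] = field(default_factory=list)
    people: List[str] = field(default_factory=list)
    motion_type: Optional[str] = None          # e.g. "walking", "cooking", "playing"

frame = FrameTags(datetime.now(), motion_present=True, motion_amount=0.8,
                  motion_direction="left-to-right", motion_location="doorway")
event = EventTags(camera_location="kitchen", people=["Mitch"], motion_type="cooking")
```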
Initialization.
[0064] In one embodiment, the product comprises at least one camera with at least a microphone, and an application that can be downloaded to a smart phone or other device. Upon the initial launch, the application executes a series of steps. It prompts the user to enter a variety of information, including name, email, etc.
[0065] The application will automatically, or after a user prompt, access user data and other applications to build a profile for use in object, people and event detection. For example, a user's social media applications may be accessed to obtain tagged images identifying the user's family, friends, etc. That data can be uploaded to the cloud, or provided to the processor on the camera or another local processing device for use in examining videos. Also, the user's calendar application may be accessed to determine planned meetings, locations and participants to match with a camera location, where applicable.
Sharing.
[0066] In one embodiment, the summaries or live streams can be shared with others using a variety of methods. For example, applications such as Periscope or Meercat can be used to share a stream, or set a time when video summaries will be viewable. A video event can also be shared on social networking and other sites, or by email, instant message, etc. In one embodiment, when the sharing icon is selected, the user is presented with options regarding what method of sharing to use and also with whom to share. For example, a list of people identified in the video summary is presented for possible sharing.
Sensor variations.
[0067] The camera can be part of an episode capture device which includes other sensors, such as a microphone. The camera in certain embodiments can monitor any type of event, interaction or change in an environment that can be detected by a sensor and subsequently recorded, whether in the form of an image, an audio file, a video file, a data file or another data storage mechanism, including but not limited to motion, date and time, geographic location, and audio. Suitable sensors include a motion sensor (including the combination of a motion sensor with an algorithm capable of identifying certain types of motion), a proximity sensor, a temperature sensor, a capacitive sensor, an inductive sensor, a magnet, a microphone, an optical sensor, an antenna, Near Field Communication, a magnetometer, a GPS receiver and other sensors. The cameras can be digital cameras, digital video cameras, cameras within smartphones, tablet computers, laptops or other mobile devices, webcams, and similar.
Breakfast example.
[0068] The present invention offers the ability to add tags with contextual relevance to a stream of data representing an event that has occurred. One example is where a camera is set up to observe a kitchen from 6 AM to 6 PM. Events occur within the scene viewed by the camera, such as a family eating breakfast. The recorded content is analyzed for context. For example, the camera analyses the data based on audio excerpts of the noise of plates being used, determining that it is placed in a kitchen and that a meal is taking place. Selecting audio data is merely one example of how this may be achieved, and other techniques for performing this task will be apparent to the skilled person. Further, the analysis may be performed within the camera, in another locally connected device, or remotely (such as in the cloud). A contextual tag is then allocated to data recorded at the time the noise of plates is detected. For example, this may occur at 7:15 AM, and the camera further recognises, using facial recognition techniques, that the people present within the scene are family members. This creates the opportunity to add a further contextual tag based on the additional information from the identification of the family members, and also based on the time information, which is utilised to form a timestamp. Timestamp information may be used in correlation with the additional sensed information to distinguish an event from other events with similar actions, e.g. to identify the event as "breakfast" in contrast to "lunch" or
"dinner". Using such contextual tags allows the creation of a fully customi sable summary. The summary may be based upon predetermined criteria or upon user preferences. The scene is therefore monitored over an extended period of time, analysed and contextual tags and timestamps applied appropriately.
[0069] When an event or a portion of the summary is selected by a user, the contextual tags and timestamps enable the generation of a more specific summary focused on a particular context within the scene, or the context of a particular event. Taking the breakfast example, it is possible to select a summary comprising a short video sequence, or a summary comprising information relevant to the event "breakfast", such as who was in attendance, how long breakfast lasted, and so on. The information relevant to the event can also be displayed as text information overlaying the presented video sequence. Another possibility is a summary comprising details of the same event occurring regularly within a scene, such as a summary of breakfasts occurring over the previous seven days. The present invention therefore offers a completely flexible manner of producing a summary based upon the assignment of contextual tags to events occurring within a scene, which may be fully selectable and determined by a user, or determined dynamically by an episode capture device, or a combination of both. This is described further in a series of non-limiting examples below.
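A minimal sketch, under the assumption that events already carry contextual tags, timestamps, durations and attendee lists as described above (the dictionary fields and sample values are illustrative), of pulling a "breakfasts over the previous seven days" summary:

```python
from datetime import datetime, timedelta

events = [
    {"tag": "breakfast", "start": datetime(2015, 9, 8, 7, 15), "duration_min": 25,
     "attendees": ["Mum", "Dad", "Mitch"]},
    {"tag": "breakfast", "start": datetime(2015, 9, 9, 7, 5), "duration_min": 30,
     "attendees": ["Mum", "Mitch"]},
    {"tag": "dinner",    "start": datetime(2015, 9, 9, 19, 0), "duration_min": 45,
     "attendees": ["Mum", "Dad", "Mitch"]},
]

def weekly_summary(events, tag, now):
    """Text summary of one recurring tagged event over the previous seven days."""
    window = now - timedelta(days=7)
    for e in events:
        if e["tag"] == tag and e["start"] >= window:
            yield (f"{e['start']:%a %H:%M}: {tag} lasted {e['duration_min']} min, "
                   f"attended by {', '.join(e['attendees'])}")

for line in weekly_summary(events, "breakfast", datetime(2015, 9, 10)):
    print(line)
```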
Conference Room example.
[0070] A video data recording device, such as a camera, able to communicate with a communication network such as the internet, a local area network (LAN), or cellular network for transmitting data, is placed in a conference room. Initially the camera observes the scene, that is, monitors all events occurring within the room within an episode, such as 24 hours, and records the scene using video capture for processing. The episode therefore contains periods of activity (people entering and using a room) and inactivity (the room is empty). During the episode it may be possible to observe groups of people entering, using and exiting the room, using the room for various purposes, such as meetings or telephone conferences. This video capture forms the initial phase of the method of producing a summary in accordance with an exemplary embodiment of the present invention.
[0071] The data obtained during the video capture is sent to be processed to create an event log. This may be done either at the episode capture device, in this example, at the camera, or may be done remotely over a communications network such as the internet (at a remote server, in the Cloud) or at a processor in communication with the device, such as over a local area network (LAN). The processing may be done live, that is during the video capture stage, or subsequently, once the video capture stage is complete, or at an offset, for example, 30 minutes post-video capture.
[0072] Once events are identified an event log can be created. The sensory information may comprise data relating to the output of visual or non-visual sensors. An event may be detected and/or identified by any of these sensors, for example, an optical beam motion detector detects the movement of a person through the door of the conference room. In this situation, the event is generated by an object, the person, and the presence of a person is identified in the room. The episode capture device may also determine the presence of static items in the room, such as chairs, which information is fed into the event log when required.
[0073] Visual sensory information obtained from the visual sensors is logged. This may include:
Determining whether motion occurs, what type of motion occurs, how much motion occurs, and the direction and speed of any motion;
Determining whether there are any objects present, and the number of objects present;
Determining the classification of any objects, including person, pet, or inanimate object such as a chair; and
Determining the identification of an object using a recognition technology, for example, facial recognition methods.
[0074] Non-visual sensory information obtained from the non-visual sensors is logged. This may include:
Logging the position of any objects using GPS (global positioning system) coordinates, geo-fencing or other positioning mechanism;
Logging audio data in any applicable format;
Logging temperature; and
Logging acceleration, direction and height above sea level (altitude).
[0075] The sensory information is used to create contextual tags that, when applied to the data, allow a user to create meaningful summaries. The contextual tag indicates the context of the event, and may be specific context or more general context. For example, the tag may be "at least one person present", or "more than one person present", or "more than one person present and there is interaction between the people", or "a meeting is in progress". In the present example the contextual tag indicates that a particular event is a meeting. The timestamp data may be applied separately to the event, or may be part of the contextual tag, or the contextual tag may in fact be the timestamp data. When a group of people enter the room, a contextual tag indicating the start of a meeting is assigned. If a single person enters the room and uses the telephone, the camera assigns a contextual tag indicating that the room is being used for a private call. If the camera is connected to a communications network over which a presentation in the meeting room is accessed, the camera may assign contextual tags representing the start of a meeting, the end of a meeting, a break occurring within a meeting, or specific parts of a presentation. In this way the contextual tags can be generated using information directly available via the camera (such as observing the video scene), but may also use information available via other sensors/systems (i.e. information related to use of a projector).
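A minimal sketch of the rule-based tag assignment just described; the function and its inputs assume the people count, phone use and presentation state have already been extracted by the visual and non-visual processing steps, which is an assumption for the sake of the example.

```python
def assign_contextual_tag(people_count, phone_in_use=False, presentation_active=False):
    """Map simple observations of the conference-room scene to a contextual tag."""
    if presentation_active:
        return "meeting: presentation in progress"
    if people_count > 1:
        return "meeting started"
    if people_count == 1 and phone_in_use:
        return "room used for private call"
    if people_count >= 1:
        return "at least one person present"
    return "room vacant"

print(assign_contextual_tag(people_count=4))                     # "meeting started"
print(assign_contextual_tag(people_count=1, phone_in_use=True))  # "room used for private call"
```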
[0076] A summary is created with at least a subset of the events based upon the contextual tags. In the present example, the summary performs the function of a report to a conference room organiser showing the use of the facilities. The summary report could take various forms. For example, the summary report may be a text based report, a video summary, or a text report with "clickable" thumbnails of significant events. The conference room organiser may search the summary by time stamp data or contextual tag. By providing information regarding a subset of events to a user a summary allows the user to monitor the episode and the scene effectively. Note that it may also be desirable to include periods of inactivity in the episode summary. For example, a facilities manager may find information about how frequently conference rooms are vacant to be useful. In another example, a healthcare worker may use the summary report to understand the activity (or lack of activity) of a patient.
[0077] As part of the summary of events, events observed in a scene may be matched to stored or input data in order to produce a more meaningful summary. The episode capture device may be furnished with identity information about frequent occupants of the room, such that it can identify specific room occupants. Contextual tags may be added in order to identify specific room occupants in a summary. The stored or input data identifies an object, which may be a person, and the stored or input data may be used to choose and assign a contextual tag identifying the person. This enables a user to determine if only authorised people such as employees enter the conference room, or whether it is used frequently by non-employees, such as customers or clients. As part of the identification process, if the stored or input data matching step identifies a person, it may be desirable to use characteristic identification techniques, such as facial recognition techniques. This may then be used to determine the subset of events included in the summary, matching events observed in the scene to the stored or input data to create matched events based upon the contextual tags, such that the subset of events contains the matched events.
Other examples.
[0078] The facial recognition example outlined above is a special case of where an event is triggered by an object. In this situation, the episode capture device identifies the object within the scene (the person), and identifies a characteristic of the object (the name of the person), and both the identity of the object (that it is a person) and the characteristic (the name of the person) are included in the summary. This may be the case for other objects, such as identifying a burning candle in a room: initially the candle is identified, and then that it is burning is inferred from its temperature.
[0079] Object monitoring. In another example a camera may be used to monitor a room for theft. The contents, or objects, in the room may be logged. Settings may be configured such that events are only triggered if an object is removed from the scene or the position of the object changes. Thus people could enter or exit the scene without triggering an event, as long as the objects are not removed or moved.
[0080] Interaction with smartphone. The episode capture device is preferably configured to connect to a data network, such that it may interact and/or communicate with other devices, such as smartphones and tablet computers. Processing to create the event log and the summary may take place at the episode capture device or remotely. Sensors may be provided within the episode capture device, within external devices, worn on a person, or provided within a scene, and may be programmed either to monitor events, monitor a scene or to trigger events. For example, a camera may be configured to interact with a movement sensor within a smartphone to record that a meeting attendee entered the scene at a walking pace and left the scene at a running pace. Further, the camera may record that a smartphone belonging to a particular user enters the region of a local area network (WiFi) that denotes the periphery of a scene, and therefore has entered the scene. In the above example, a camera is used as the episode capture device, and audio data is used to enhance the video data obtained. However, other sensors may be used to capture events, such as, but not limited to, a motion sensor (including the combination of a motion sensor with an algorithm capable of identifying certain types of motion), a proximity sensor, a temperature sensor, a capacitive sensor, an inductive sensor, a magnet, a microphone, an optical sensor, an antenna, Near Field Communication and similar devices.
[0081] Other sensors. An episode capture device is therefore a device that is capable of recording an event, and the data obtained may be used appropriately to create a summary. Typical episode capture devices include image capture devices (cameras, in the visible, infrared or ultraviolet spectra) that may be digital (including CCD and CMOS devices). Such devices are provided with visual and non-visual sensors either integral with the episode capture device (an accelerometer in a mobile phone having a camera) or separate from but in communication and connection with the episode capture device, so as to be in effect functionally integrated. In the case of a temperature sensor, the sensor may detect that the temperature of a room increases at 6 AM, and decreases at 8 PM. It identifies these points as dawn and dusk, and applies contextual tags appropriately to each point. Episode capture devices may be used separately or together to enhance a summary. Consider the situation where a shop monitors stock using magnetic tags, which trigger an alarm when passed through an induction loop. It would be possible to combine a first episode capture device, such as a camera, and a second episode capture device, such as an induction sensor system, and to assign contextual tags at certain events. An item bearing a tag may be taken through the induction sensor, thus triggering an alarm. At this point a contextual tag may be assigned to the video feed obtained from the camera system and a summary generated accordingly.
[0082] User criteria for events. The format of the summary may be adapted to include any event information that is of interest to a user. In the case of a summary indicating the use of a conference room, the summary may include details of attendees including their identity, still images, audio recordings, information on types of events, and details of use that flag some kind of warning, for example where the device is unable to determine the identity of a person, or is unable to associate an event with an approved use of the room. Contextual tags added to the data captured by the episode capture device enable the summary to be as detailed or as concise as desired. The user may select from various pre-programmed options, or provide various criteria matching the contextual tags on which the summary may be based. This may include type of event, frequency of event, length of video sequence, date and time, geographic location, and audio content, as examples, although many other criteria are possible. Storing criteria or inputting criteria to the image capture device, either directly or remotely, to form stored or input criteria and generating the summary using the stored or input criteria allows the user complete freedom of use. The user may build a bespoke summary format or choose from a pre-programmed selection. The summary may be generated by the episode capture device, a device in which the camera is positioned or using a remote system.
[0083] Summary formats. The summary may take various formats, depending on user preference. One format is to show a video feed of all events and periods of inactivity at a changeable speed, such as time-lapse or hyperlapse. Another is to combine a subset of certain events into a single video feed, for example, where these events are chosen by a user, as above, or where the events are chosen using stored or input data to create matched events. It is possible to delete or remove unimportant events based upon user criteria. For example, a user may specify that only meetings where there are 4 or more people present must be included in the summary. The episode capture device records all of the events during the episode, and then selects only those corresponding to a meeting with 4 or more people present, effectively discarding all other events recorded.
[0084] Weighting. One further possibility is prioritising events using a weighting or other prioritisation method, such as a binary selection scheme. Using a weighting method, a weighting is applied to an event, such that the subset of events in the summary is determined by the weighting. The weighting itself is determined by a characteristic of an event, for example, the number of people in a meeting room, the identity of pets rather than persons, the temperature of an object. In the above example this is illustrated by considering that the meeting room has a maximum capacity of 6, and that an organiser is interested in finding out whether the room is being used to its maximum capacity. One way of doing this is to assign a weighting to each event where fewer than 6 people attend a meeting; for example, an event where one person uses the room has a weighting of 5, two people using the room has a weighting of 4, and so on. Initially the user may select a summary based upon events having a weighting of 5 or less.
[0085] However, the user may wish to prioritise entries within the summary. In this situation the weighting determines the prioritisation of the events within the subset. In the meeting room example, events may be listed in order of the highest weighting first. In one embodiment, a weighting scale of 0-1, or 1-10, is used for each element weighted. The presence of significant motion is used as a filter before anything is weighted. After that filter is passed, the weights are simply added together for each video event or image. For example, the presence of a lot of motion may contribute a weighting of 8 on a scale of 1-10. The presence of people tagged as important by the user may add a weight of 7 for each such person present. The presence of other people may provide a weight factor of 4 each. The duration of significant motion may add a weight of 1 for each minute, up to a total of 10 minutes. Thus, in one example, the weighting is as follows for a 10 minute video event (note that individual parts of the clip may have different weights):
Two unknown people (4 points each) = 8 points
One important person = 7 points
Significant motion = 8 points
Duration of motion is five minutes = 5 points
Total = 28 point weighting
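The arithmetic above can be reproduced with a simple additive score; the individual point values are the example values from the text, not fixed constants, and the function name is an illustrative assumption.

```python
def event_weight(unknown_people, important_people, significant_motion, motion_minutes):
    """Additive weighting applied after the significant-motion filter has passed."""
    score = 0
    score += 4 * unknown_people                 # 4 points per unknown person
    score += 7 * important_people               # 7 points per user-tagged important person
    score += 8 if significant_motion else 0     # a lot of motion contributes 8 on a 1-10 scale
    score += min(motion_minutes, 10)            # 1 point per minute of motion, capped at 10
    return score

# The worked example: two unknown people, one important person,
# significant motion, five minutes of motion -> 28 points.
print(event_weight(unknown_people=2, important_people=1,
                   significant_motion=True, motion_minutes=5))   # 28
```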
[0086] In one embodiment, events that are considered for summarization are within a specified period of time (e.g., from midnight until now, or during the last 2 hours, etc.) and contain significant motion (after the filtering step). Alternately, a summary, rather than being defined by a specified period of time, can be defined by a number of events, a percentage of events recorded, all events above a certain score, etc.
[0087] In one embodiment, event scoring is based on the following cues:
1. Event scoring:
a. Gap before event: an event gets a higher score if nothing was happening before that event for a long period.
b. Event duration: lower score for very short events.
c. Motion location and size: higher score for motion that is in the center and has a larger extent.
d. Motion anomaly: a model of past detected motion is created; a new motion observation gets a higher score if it is abnormal given the previous content. This can also be seen as a notion of 'surprise.'
e. Number of objects: higher score if more objects are moving in the event.
f. Detections: some detected concepts lead to higher scores, such as a detected person, a detected face, regions of skin color, etc.
g. Image quality: contrast, sharpness of the image, or distribution of colors.
In one embodiment, scores are combined using a weighted average. Other methods of combination are also possible. In an alternate embodiment, scores and weights are adapted, added or omitted based on the user's general preferences or user specifications for one summary.
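A minimal sketch of combining the per-cue scores with a weighted average; the cue names follow the list above, while the weight values and the 0-1 score normalisation are placeholders that could be adapted to user preferences as noted.

```python
# Per-cue weights (assumed values); cue scores are normalised to 0-1 for illustration.
cue_weights = {
    "gap_before_event":     1.0,
    "event_duration":       1.0,
    "motion_location_size": 1.5,
    "motion_anomaly":       2.0,   # 'surprise' relative to a model of past motion
    "num_objects":          1.0,
    "detections":           2.0,   # person, face, skin-color regions, etc.
    "image_quality":        0.5,
}

def combined_score(cue_scores, weights=cue_weights):
    """Weighted average of the individual cue scores that are present."""
    total_w = sum(weights[c] for c in cue_scores)
    return sum(weights[c] * s for c, s in cue_scores.items()) / total_w

print(combined_score({"gap_before_event": 0.8, "motion_anomaly": 0.9,
                      "detections": 1.0, "event_duration": 0.4}))   # ~0.83
```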
[0088] In one embodiment, the weights do not include the time of day at which the event appears. This is handled in the second step:
2. Select events using a greedy approach (pseudocode): while totalSummaryDuration < targetDuration do:
A. select the highest weighted event and add it to the summary
B. reweight all the other events according to their temporal distance with respect to the selected event.
[0089] This ensures that two events which happen one right after the other are not both chosen, but rather that events are selected that are diverse across the full time range. In one embodiment, some heuristics are added to ensure a reasonably regular distribution over time. The reweighting factor depends on the total summary time range: e.g., reweighting is different for a 1 hour period than for a 24 hour period.
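A runnable sketch of the greedy selection pseudocode above. The reweighting function is one plausible choice (events close in time to an already selected event are penalised in proportion to the total time range), not the specific factor used in practice; the event fields are assumptions.

```python
def greedy_select(events, target_duration, time_range_s):
    """events: list of dicts with 'start' (s), 'duration' (s) and 'weight'."""
    remaining = [dict(e) for e in events]     # work on copies so weights can be adjusted
    summary, total = [], 0.0
    while remaining and total < target_duration:
        best = max(remaining, key=lambda e: e["weight"])   # A. highest weighted event
        remaining.remove(best)
        summary.append(best)
        total += best["duration"]
        for e in remaining:                   # B. penalise events near the one just chosen
            gap = abs(e["start"] - best["start"])
            e["weight"] *= min(1.0, gap / (0.1 * time_range_s))
    return sorted(summary, key=lambda e: e["start"])

events = [{"start": 60 * m, "duration": 20, "weight": w}
          for m, w in [(10, 9), (12, 8), (300, 7), (600, 6), (900, 5)]]
for e in greedy_select(events, target_duration=60, time_range_s=24 * 3600):
    print(e["start"], round(e["weight"], 2))
```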
[0090] In one embodiment, for long periods of no activity (for example, in a living room where a person is at work all day, and the only motion is present in the morning and the evening), 'filler' is added. That is, the playback speeds are adjusted, as already discussed above. A time lapse with 1 frame every 6 minutes is used for no-activity periods, whereas a 'hyperlapse' style video is played for motion events (e.g., speeding up normal speed by a factor of 8). Of course, other particular time periods and speeds can be used.
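A minimal sketch mapping segment type to playback treatment, using the example figures from the text (one frame every six minutes for inactivity, roughly 8x speed-up for motion); the segment dictionary layout is an assumption.

```python
def playback_plan(segment):
    """Return (mode, parameter) for a segment dict with a 'has_motion' flag."""
    if segment["has_motion"]:
        return ("hyperlapse_speedup", 8)               # motion events at ~8x normal speed
    return ("time_lapse_frame_interval_min", 6)        # one frame per 6 minutes of inactivity

day = [{"has_motion": False, "label": "work hours"},
       {"has_motion": True,  "label": "morning"},
       {"has_motion": True,  "label": "evening"}]
for seg in day:
    print(seg["label"], playback_plan(seg))
```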
[0091] Cloud storage. The episode capture device may make use of cloud data storage to create or enhance a summary; data relating to the scene and events may be stored within the episode capture device or within a cloud data storage facility. Data may then be downloaded from the cloud data storage as and when desired in creating a summary, such that at least one step in the method outlined above occurs using this data. This enables even devices with small memory capacity to be configured to create a summary, since at least one step outlined in the method above may take place remote from the episode capture device. The ability to store and access large amounts of data relating to events and a scene also enables the creation of enhanced summaries.
[0092] Enhanced summaries. A detailed summary may be considered as comprising many layers of information, summarising video data, audio data, geographic data and so on. This layered approach allows a user to zoom into certain areas of interest. For example, in the conference room scenario above, a conference organiser receives a summary of a day's conference. This includes details of all participants, copies of presentations and handouts, all movement and geographical information as well as video and audio data of the events during the conference or of various conferences which took place in the respective conference room monitored by the event capture device. The organiser is told that a certain event, such as a presentation, happened at a particular time. The organiser can zoom into the summary at various times, and chooses to zoom into the event. The detail within the summary allows the organiser to review and select a particular event, and to choose to have video data of the event streamed to a device to view. This may be a device that the organiser chooses to view the summary on or another device. For example, the organiser may choose to view the summary on a smartphone. However, in order to view video data the organiser prefers to use a tablet computer. Once the zoom into the summary is chosen using the smartphone, the organiser is able to stream video content of the event to the tablet computer.
[0093] The layering approach also facilitates an automatic edit of the summary depending on the amount of data a user can receive. For example, if a user is accessing the summary using a smartphone connected to a cellular data network, a short version of the summary containing only highlights with hyperlinks to further content is transmitted, since, for example, if the cellular data network is a 3G network, data transfer is relatively slow and the user may prefer not to receive and download a high volume of data. Furthermore, summary information in text form, for example the occurrence of a certain event or appearance of a certain person, may be transmitted to a mobile device of a user in the form of a short message (such as SMS, MMS or text) and/or making use of push functionality for notification. The type of information provided to the user in this manner may be determined by a user or sent according to pre-determined criteria. However, if a user is accessing the summary via a local area network (Wi-Fi) or other data connection, a more detailed summary may be transmitted. The episode capture device may be pre-programmed with information specific to the room in which it is located. Alternatively a user may notify the camera of its location once it has been placed within a room.
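A minimal sketch of the connection-dependent edit described above; the connection labels, thresholds and returned fields are assumptions standing in for the actual delivery logic.

```python
def summary_for_connection(connection):
    """Choose how much of the layered summary to push to the viewing device."""
    if connection in ("3g", "cellular"):
        return {"form": "highlights with hyperlinks", "include_video": False,
                "notify": "push/SMS text of key events"}
    if connection in ("wifi", "lan"):
        return {"form": "detailed summary", "include_video": True, "notify": None}
    return {"form": "text only", "include_video": False, "notify": None}

print(summary_for_connection("3g"))
print(summary_for_connection("wifi"))
```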
[0094] Alternate embodiments. The present invention is not limited to the exemplary embodiment described above. It is possible to utilise the invention in a wide variety of applications, for example, home security, surveillance, monitoring (such as a baby monitor or pet monitor), room or facility usage (such as designated equipment or apparatus), indeed any situation where it is required to be able to monitor a scene remotely to determine the occurrence of events. Suitable episode capture devices include digital cameras, digital video cameras, cameras within smartphones, tablet computers, laptops or other mobile devices, webcams, and similar. Such cameras should be adapted to communicate data via a network to a client computer, software program, an app on a mobile device or, in general, to a suitable storage device, wherein such storage devices may include additional processing capacities for subsequent image processing. Cameras may be dedicated devices or multipurpose, that is, with no fixed designation with regard to monitoring a scene for events.
[0095] In general, the episode capture device comprises a processor able to access a software module configured to perform the method outlined above. In an exemplary embodiment the software module is based on the determination of certain criteria, either predefined or selectable by a user, for the identification of certain events. Subsequently, for example upon selection by the user, a summary is created based on selected criteria, such as a certain event, optionally in combination with another constraint, for example, the maximum length of the summarising video sequence or a predetermined data volume. This results in a parameter-dependent automated video analysis method, in which significantly less video data has to be evaluated to determine if an event has occurred within a scene.
The below summarizes the features of the various embodiments:
[0096] 1. A method of providing a video summary from a camera, comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant; recording in a memory of the camera a periodic image of at least one frame during periods of inactivity having no more than insignificant motion; identifying events from periods of activity having significant detected motion and creating event tags; recording in a memory of the camera video from the identified events and the event tags; and intermittently transmitting the images and video in the memory to a remote computing device using a transmitter in the camera.
[0097] 2. The method of claim 1 wherein the periodic image during periods of inactivity comprises a video of between 1-10 seconds.
[0098] 3. The method of claim 1 further comprising capturing images at a high resolution, then transmitting the images over a longer time period than the real time video using a lower resolution bandwidth.
[0099] 4. The method of claim 1 further comprising determining, by one of the processor in the camera and the remote computing device, the end of an event and the start of a new event based on the amount of time after movement stops.
[0100] 5. The method of claim 1 further comprising determining, by one of the processor in the camera and the remote computing device, the end of an event and the start of a new event based on new motion in a different place.
[0101] 6. The method of claim 1 further comprising determining, by one of the processor in the camera and the remote computing device, one of the end of an event and the start of a new event based on a change in one of the number of moving objects in the video and the number of people in the video.
[0102] 7. The method of claim 1 further comprising creating, with the remote computing device, a summary video from multiple video events provided by the camera, comprising: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion; and allocating more time, at a slower time lapse, to videos with significant motion.
[0103] 8. The method of claim 7 further comprising weighting the video events according to importance, and providing one of a slower time lapse and more time to higher weighted video events deemed more important.
[0104] 9. The method of claim 8 wherein the video events have contextual tags, and the weighting is based on at least one of the number of people detected, the identity of people detected, the duration of the motion and the amount of the motion.
[0105] 10. The method of claim 7 further comprising providing additional detailed video events, at a time lapse speed less than the second time lapse speed, for portions of the summary video selected by a user.
[0106] 11. The method of claim 7 further comprising weighting the video events based on: an amount of inactivity before the video event; the duration of motion in the video event; the proximity of the motion in the video event to the center of the video event; the amount of difference between the motion in the video event and motion from previous video events; and the number of objects moving in the video event.
[0107] 12. A method of providing a video summary from a camera, comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant; recording in a memory of the camera a periodic image of at least one frame during periods of inactivity having no more than insignificant motion; identifying events from periods of activity having significant detected motion and creating event tags; recording in a memory of the camera video from the identified events and the event tags; intermittently transmitting the images and video in the memory to a remote computing device using a transmitter in the camera; creating, with the remote computing device, a summary video from multiple video events provided by the camera, comprising: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion, and allocating more time, at a slower time lapse, to videos with significant motion; providing the video events with contextual tags; and weighting the video events based on at least one of the number of people detected, the identity of people detected, the duration of the motion and the amount of the motion.
[0108] 13. A system for providing a video summary, comprising: a camera having a processor configured to analyze pixels in video captured by the camera to detect motion in a video; the processor being configured to determine whether the motion is significant; a memory of the camera configured to record a periodic image of at least one frame during periods of inactivity having no more than insignificant motion; the processor being configured to identify events from periods of activity having significant detected motion and create event tags; the processor being further configured to record in the memory of the camera video from the identified events and the event tags; and a transmitter configured to intermittently transmit the images and video in the memory to a remote computing device.
[0109] 14. The system of claim 13 wherein the periodic image during periods of inactivity comprises a video of between 1-10 seconds.
[0110] 15. The system of claim 13 wherein the processor is further configured to capture images at a high resolution, then transmit, using the transmitter, the images over a longer time period than the real time video using a lower resolution bandwidth.
[0111] 16. The system of claim 13 further comprising determining, by one of the processor in the camera and the remote computing device, the end of an event and the start of a new event based on the amount of time after movement stops.
[0112] 17. The system of claim 13 further comprising one of the processor in the camera and the remote computing device being configured to determine the end of an event and the start of a new event based on new motion in a different place in the video.
[0113] 18. The system of claim 13 further comprising one of the processor in the camera and the remote computing device being configured to determine one of the end of an event and the start of a new event based on a change in one of the number of moving objects in the video and the number of people in the video.
[0114] 19. The system of claim 13 wherein the remote computing device is further configured to create a summary video from multiple video events provided by the camera, comprising: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion; and allocating more time, at a slower time lapse, to videos with significant motion.
[0115] 20. The system of claim 19 wherein the remote computing device is further configured to weight the video events according to importance, and provide one of a slower time lapse and more time to higher weighted video events deemed more important.
(C) AUTOMATICALLY DETERMINING CAMERA LOCATION AND DETERMINING TYPE OF SCENE
[0116] 1. A method for determining the location of a camera, comprising: capturing images at a camera at a location; transmitting the images from the camera to a remote server; holistically comparing images from the camera, at the server, to multiple stored images, from a database coupled to the server, corresponding to known locations; determining which stored images provide a best match; and determining a type of location of the camera from tags associated with the images providing a best match.
[0117] 2. The method of claim 1, further comprising: determining whether the location is an indoor location or an outdoor location.
[0118] 3. The method of claim 2 further comprising, wherein the camera is determined to be in an indoor location, determining the type of room, wherein the type of room includes at least one of a conference room, a dining room, a kitchen, a living room, a bedroom, an office and a hallway.
[0119] 4. The method of claim 1 further comprising: filtering out a type of motion, the type of motion being dependent upon the determined type of location of the camera.
[0120] 5. The method of claim 1 further comprising: detecting substantial motion in the video above a threshold amount of motion; detecting at least one of an object and a person in the substantial motion in the video; holistically comparing images from the substantial motion to stored images corresponding to known different events; determining which stored images provide a best match; determining a type of event from tags associated with the images providing a best match; and tagging the video with the type of event.
[0121] 6. The method of claim 5 further comprising: detecting sounds from a microphone in the camera; comparing detected sounds with a stored database of sounds; determining at least one best match of sounds; comparing a tag associated with the best match of sounds to the tags associated with the images; and determining a type of event based on tags from the images and the sound.
[0122] 7. A method for determining a type of event in video from a camera comprising: detecting substantial motion in the video above a threshold amount of motion; detecting at least one of an object and a person in the substantial motion in the video; holistically comparing images from the substantial motion to stored images corresponding to different events; determining which stored images provide a best match; determining a type of event from tags associated with the images providing a best match; and tagging the video with the type of event.
[0123] 8. The method of claim 7 further comprising: determining a type of location of the camera by: holistically comparing images from the camera to multiple stored images corresponding to known locations; determining which stored images provide a best match; and determining a type of location of the camera from tags associated with the images providing a best match; and using the type of location in determining the type of event.
[0124] 9. A system for determining the location of a camera, comprising: a camera configured to capture images at a location; a transmitter in the camera for transmitting the images from the camera to a remote server; a server configured to holistically compare images from the camera to multiple stored images corresponding to known locations; a database, coupled to the server, for storing the multiple stored images; the server being configured to determine which stored images provide a best match; and the server being configured to determine a type of location of the camera from tags associated with the images providing a best match.
[0125] 10. The system of claim 9, further comprising: the server being configured to determine whether the location is an indoor location or an outdoor location.
[0126] 11. The system of claim 10 further comprising, wherein the camera is determined to be in an indoor location, the server being configured to determine the type of room; wherein the type of room includes at least one of a conference room, a dining room, a kitchen, a living room, a bedroom, an office and a hallway.
[0127] 12. The system of claim 9 further comprising: the server being configured to filter out a type of motion, the type of motion being dependent upon the determined type of location of the camera.
[0128] 13. The system of claim 9 further comprising: the camera being configured to detect substantial motion in the video above a threshold amount of motion; the server being configured to detect at least one of an object and a person in the substantial motion in the video; the server being configured to holistically compare images from the substantial motion to stored images corresponding to known different events; the server being configured to determine which stored images provide a best match; the server being configured to determine a type of event from tags associated with the images providing a best match; and the server being configured to tag the video with the type of event.
[0129] 14. The system of claim 13 further comprising: a microphone in the camera for detecting sounds; the server being configured to compare detected sounds with a stored database of sounds; the server being configured to determine at least one best match of sounds; the server being configured to compare a tag associated with the best match of sounds to the tags associated with the images; and the server being configured to determine a type of event based on tags from the images and the sound.
[0130] 15. The system of claim 14 further comprising: the server being configured to prompt a user to confirm the location and type of event.
[0131] 16. The system of claim 14 further comprising: the server being configured to compare images and sounds to scenes previously recorded and stored for a particular user.
(D) VIDEO SEARCHING FOR FILTERED AND TAGGED MOTION
[0132] 1. A method of searching video from a camera, comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant, and filtering out video without significant motion; transmitting the video in the memory to a remote computing device using a transmitter in the camera; organizing the video into separate video events; creating, with the remote computing device, a plurality of summary videos from multiple video events provided by the camera; tagging each summary video with a plurality of tags corresponding to the events in the video summary; in response to search terms entered by a user, matching the search terms to the tags; and displaying indicators of video summaries with a best match to the search terms, ranked in order of best match.
[0133] 2. The method of claim 1 wherein creating a summary video comprises: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion, and allocating more time, at a slower time lapse, to videos with significant motion.
[0134] 3. The method of claim 1 wherein the search terms include at least one of time, duration of video, people in the video, objects in the video and camera location.
[0135] 4. The method of claim 1 further comprising ranking video search results based on a weighting of the video summaries.
[0136] 5. The method of claim 1 further comprising: providing, with the search results, indications of videos without tags corresponding to the search terms, but that are proximate in time to videos with the tags.
[0137] 6. The method of claim 1 further comprising: providing, with the search results, indications of videos without tags corresponding to the search terms, but with other tags that correspond to non-searched tags in the videos in the search results.
[0138] 7. A method of searching video from a camera, comprising: detecting motion using a processor in the camera; determining, using the processor, whether the motion is significant, and filtering out video without significant motion; transmitting the video in the memory to a remote computing device using a transmitter in the camera; organizing the video into separate video events; tagging each video event with a plurality of tags corresponding to at least two of time, duration of video, people in the video, objects in the video and camera location; weighting each video event based on the significance of the tags; in response to search terms entered by a user, matching the search terms to the tags; and displaying indicators of video events with a best match to the search terms, ranked in order of best match and the weighting of the video events.
[0139] 8. The method of claim 7 further comprising: creating, with the remote computing device, a plurality of summary videos from multiple video events provided by the camera; tagging each summary video with a plurality of tags corresponding to the events in the video summary; weighting each video summary based on the significance of the tags; in response to search terms entered by a user, matching the search terms to the tags; and displaying indicators of video summaries with a best match to the search terms, ranked in order of best match and the weighting of the video events.
[0140] 9. The method of claim 7 further comprising: providing, with the search results, indications of videos without tags corresponding to the search terms, but that are one of proximate in time to videos with the tags and have other tags that correspond to non-searched tags in the videos in the search results.
[0141] 10. A system for searching video from a camera, comprising: a processor in the camera configured to detect motion; the processor further configured to determine whether the motion is significant, and to filter out video without significant motion; a memory in the camera for storing the video; a transmitter in the camera configured to transmit the video in the memory; a remote computing device configured to receive the transmitted video; the remote computing device being configured to organize the video into separate video events; the remote computing device being configured to tag each video event with a plurality of tags corresponding to at least two of time, duration of video, people in the video, objects in the video and camera location; the remote computing device being configured to weight each video event based on the significance of the tags; the remote computing device being configured to, in response to search terms entered by a user, match the search terms to the tags; and the remote computing device being configured to display indicators of video events with a best match to the search terms, ranked in order of best match and the weighting of the video events.
[0142] 11. The system of claim 10 further comprising: the remote computing device being configured to create a plurality of summary videos from multiple video events provided by the camera; the remote computing device being configured to tag each summary video with a plurality of tags corresponding to the events in the video summary; the remote computing device being configured to weight each video summary based on the significance of the tags; the remote computing device being configured to, in response to search terms entered by a user, match the search terms to the tags; and the remote computing device being configured to display indicators of video summaries with a best match to the search terms, ranked in order of best match and the weighting of the video events.
[0143] 12. The system of claim 10 wherein the remote computing device is a server.
[0144] 13. The system of claim 10 wherein the remote computing device is a smart phone, configured to communicate with the camera using a server over the Internet.
[0145] 14. The system of claim 10 wherein the remote computing device is further configured to create a summary video by: creating a time lapse video having significant motion video events and no significant motion images over a period of time; allocating less time, at a faster time lapse, to the images with no significant motion, and allocating more time, at a slower time lapse, to videos with significant motion.
[0146] 15. The system of claim 10 wherein the search terms include at least one of time, duration of video, people in the video, objects in the video and camera location.
[0147] 16. The system of claim 10 wherein the remote computing device is further configured to rank video search results based on a weighting of the video summaries.
[0148] 17. The system of claim 10 further comprising: the remote computing device being further configured to provide, with the search results, indications of videos without tags corresponding to the search terms, but that are proximate in time to videos with the tags.
[0149] 18. The system of claim 10 further comprising: the remote computing device is further configured to provide, with the search results, indications of videos without tags corresponding to the search terms, but with other tags that correspond to non-searched tags in the videos in the search results.
[0150] 19. The system of claim 10 wherein the remote computing device is the combination of a server and a smartphone.
[0151] These and other embodiments not departing from the spirit and scope of the present invention will be apparent from the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method for displaying video summaries to a user comprising:
upon launch of an application on a computing device having a display, providing one of the group of: a live video stream from a remote camera, a video event from the remote camera, a summary of video events from the remote camera, and an image from the remote camera;
providing by a processor in the computing device multiple indicators on the display indicating stored, detected important video events;
upon detection by the processor of selection on the display of an indicator by the user, providing a time-lapse summary of the selected event; and
providing a time of day indication on the display along with the selected event.
2. The method of claim 1 wherein the indicators are a series of bubbles, with each bubble including an indication of when an event occurred.
3. The method of claim 1 wherein the indicators further indicate the relative importance of the events with color coding.
4. The method of claim 1 wherein one of the indicators is for a time-lapse display of all the events in sequence in a designated time period, using a more condensed time lapse than the time lapse for individual video events;
wherein less important events have less time; and
applying a weighting to the events such that events with a higher weight are provided one of more time and a slower time lapse.
5. The method of claim 1 wherein the images provided upon launch include multiple images from multiple remote cameras.
6. The method of claim 1 further comprising:
scrolling through the indicators in response to a user swipe action on a display; enlarging a current indicator; and
providing a display of at least one image from the video event corresponding to the current indicator.
7. The method of claim 1 wherein one of the indicators is provided for a summary of video events, the summary consisting of video events for a day.
8. The method of claim 1 wherein one of the indicators is provided for a summary of video events, the summary consisting of video events since a last launch of an application for implementing the method of claim 1.
9. The method of claim 1 wherein a live video stream from a remote camera is provided upon launch, with the live video stream having a lower resolution than the time- lapse summary of the selected event.
10. A method for displaying video summaries to a user comprising: upon launch of an application on a computing device having a display, providing one of the group of: a live video stream from a remote camera, a video event from the remote camera, a summary of video events from the remote camera, and an image from the remote camera;
playing a summary of video events;
wherein the summary of video events comprises a series of video events from a remote camera over a designated period of time;
wherein the summary video is a time-lapse summary of intermittent video events where motion was detected; and
revising the playback speed of portions of the summary selected by the user.
11. A computing device having a display for displaying video summaries to a user comprising:
a processor configured, upon launch of an application on the computing device, to provide one of the group of: a live video stream from a remote camera, a video event from the remote camera, a summary of video events from the remote camera, and an image from the remote camera;
the processor being further configured to provide multiple indicators on the display indicating stored, detected important video events;
the processor being configured, upon detection of selection on the display of an indicator by the user, to provide a time-lapse summary of the selected event; and
the processor being configured to provide a time of day indication on the display along with the selected event.
12. The device of claim 11 wherein the indicators are a series of bubbles, with each bubble including an indication of how long ago an event occurred.
13. The device of claim 11 wherein the indicators further indicate the relative importance of the events with color coding.
14. The device of claim 11 wherein one of the indicators is for
a time-lapse display of all the events in sequence in a designated time period, using a more condensed time lapse than the time lapse for individual video events;
wherein less important events have less time; and
the processor is configured to apply a weighting to the events such that events with a higher weight are provided one of more time and a slower time lapse.
15. The device of claim 11 wherein the images provided upon launch include multiple images from multiple remote cameras.
16. The device of claim 11 further comprising:
the processor being configured to scroll through the indicators in response to a user swipe action on a display;
enlarge a current indicator; and
provide a display of at least one image from the video event corresponding to the current indicator.
17. The device of claim 11 wherein one of the indicators is provided for a summary of video events, the summary consisting of video events for a day.
18. The device of claim 11 wherein one of the indicators is provided for a summary of video events, the summary consisting of video events since a last launch of an application for implementing the method of claim 1.
19. The device of claim 11 wherein the processor is configured to provide a live video stream from a remote camera upon launch, with the live video stream having a lower resolution than the time-lapse summary of the selected event.
20. The device of claim 11 wherein the processor is configured to display video summaries to a user by playing a summary of video events; wherein the summary of video events comprises a series of video events from a remote camera over a designated period of time;
wherein the summary video is a time-lapse summary of intermittent video events where motion was detected; and
the processor being configured to revise the playback speed of portions of the summary selected by the user.
21. A method of providing a video summary from a camera, comprising: detecting motion using a processor in the camera;
determining, using the processor, whether the motion is significant;
recording in a memory of the camera a periodic image of at least one frame during periods of inactivity having no more than insignificant motion;
identifying events from periods of activity having significant detected motion and creating event tags;
recording in a memory of the camera video from the identified events and the event tags; and
intermittently transmitting the images and video in the memory to a remote computing device using a transmitter in the camera.
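Claim 21 describes the camera-side pipeline: detect motion, test whether it is significant, keep a periodic still image during quiet periods, record tagged video for significant events, and transmit intermittently. The loop below is a rough, hypothetical sketch of that control flow; the frame-difference motion test, the thresholds, and the storage/uploader objects are assumptions rather than the disclosed implementation.

```python
import time
import numpy as np

MOTION_THRESH = 12.0      # mean absolute pixel difference treated as "motion"
SIGNIFICANT_FRAMES = 15   # motion must persist this many frames to matter
STILL_INTERVAL_S = 300    # periodic still image during quiet periods
UPLOAD_INTERVAL_S = 900   # intermittent transmission to the remote device

def motion_score(prev, frame):
    """Crude motion measure: mean absolute difference between frames."""
    return float(np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16))))

def run_camera_loop(get_frame, storage, uploader):
    """get_frame() -> numpy array; storage and uploader are stand-ins for the
    camera's memory and transmitter (all names here are illustrative)."""
    prev = get_frame()
    motion_run, last_still, last_upload = 0, 0.0, 0.0
    event_frames = []
    while True:
        frame, now = get_frame(), time.time()
        if motion_score(prev, frame) > MOTION_THRESH:
            motion_run += 1
            event_frames.append(frame)
        else:
            if motion_run >= SIGNIFICANT_FRAMES:
                # Significant motion just ended: store the clip plus an event tag.
                storage.save_event(event_frames, tag={"t": now, "frames": motion_run})
            motion_run, event_frames = 0, []
            if now - last_still > STILL_INTERVAL_S:
                storage.save_still(frame)          # periodic image while quiet
                last_still = now
        if now - last_upload > UPLOAD_INTERVAL_S:
            uploader.send(storage.drain())          # intermittent transmission
            last_upload = now
        prev = frame
```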
22. A method for determining the location of a camera, comprising: capturing images at a camera at a location;
transmitting the images from the camera to a remote server;
holistically comparing images from the camera, at the server, to multiple stored images, from a database coupled to the server, corresponding to known locations;
determining which stored images provide a best match; and
determining a type of location of the camera from tags associated with the images providing a best match.
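The "holistic" image comparison of claim 22 is in the spirit of whole-image scene descriptors such as the one by Oliva and Torralba listed under Non-Patent Citations below. The sketch assumes descriptors have already been computed for the camera image and for a database of images tagged with known location types; it simply returns the majority tag among the closest matches. The names and the choice of Euclidean distance are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def best_location_type(query_desc, db_descs, db_tags, k=5):
    """query_desc: holistic descriptor of an uploaded image (1-D array).
    db_descs: N x D array of descriptors for images of known locations.
    db_tags: list of N location-type labels (e.g. 'kitchen', 'office').
    Returns the majority tag among the k closest database images."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)   # distance to each stored image
    nearest = np.argsort(dists)[:k]                         # k best matches
    votes = Counter(db_tags[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy usage with random 512-D descriptors (purely illustrative).
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 512))
tags = ["kitchen"] * 40 + ["office"] * 30 + ["living room"] * 30
print(best_location_type(db[3] + 0.01 * rng.normal(size=512), db, tags))
```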
23. A method of searching video from a camera, comprising:
detecting motion using a processor in the camera;
determining, using the processor, whether the motion is significant, and filtering out video without significant motion;
transmitting the video in the memory to a remote computing device using a transmitter in the camera;
organizing the video into separate video events; creating, with the remote computing device, a plurality of summary videos from multiple video events provided by the camera;
tagging each summary video with a plurality of tags corresponding to the events in the video summary;
in response to search terms entered by a user, matching the search terms to the tags; and
displaying indicators of video summaries with a best match to the search terms, ranked in order of best match.
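Claim 23 concludes with matching user search terms against the tags attached to each summary video and displaying the best matches in ranked order. A minimal sketch of that matching and ranking step, using plain term overlap as the score (the scoring rule and data layout are assumptions, not the claimed method):

```python
def rank_summaries(search_terms, summaries):
    """summaries: list of (summary_id, tags) pairs, where tags is a set of
    strings attached when the summary was generated (e.g. 'person', 'night').
    Returns summary ids ordered by how many search terms their tags match."""
    terms = {t.lower() for t in search_terms}
    scored = []
    for summary_id, tags in summaries:
        score = len(terms & {t.lower() for t in tags})
        if score:
            scored.append((score, summary_id))
    # Best match first; ties keep insertion (chronological) order.
    scored.sort(key=lambda s: -s[0])
    return [summary_id for _, summary_id in scored]

catalog = [("mon-summary", {"person", "front door", "day"}),
           ("tue-summary", {"pet", "kitchen", "night"}),
           ("wed-summary", {"person", "kitchen", "night"})]
print(rank_summaries(["person", "night"], catalog))   # ['wed-summary', ...]
```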
PCT/IB2016/055456 2015-09-14 2016-09-13 User interface for video summaries WO2017046704A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680066486.6A CN108351965B (en) 2015-09-14 2016-09-13 User interface for video summary
DE112016004160.8T DE112016004160T5 (en) 2015-09-14 2016-09-13 UI for video summaries

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US14/853,943 2015-09-14
US14/853,980 2015-09-14
US14/853,965 US9313556B1 (en) 2015-09-14 2015-09-14 User interface for video summaries
US14/853,965 2015-09-14
US14/853,980 US20170076156A1 (en) 2015-09-14 2015-09-14 Automatically determining camera location and determining type of scene
US14/853,943 US9805567B2 (en) 2015-09-14 2015-09-14 Temporal video streaming and summaries
US14/853,989 US10299017B2 (en) 2015-09-14 2015-09-14 Video searching for filtered and tagged motion
US14/853,989 2015-09-14

Publications (1)

Publication Number Publication Date
WO2017046704A1 (en) 2017-03-23

Family

ID=56985651

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2016/055456 WO2017046704A1 (en) 2015-09-14 2016-09-13 User interface for video summaries

Country Status (3)

Country Link
CN (1) CN108351965B (en)
DE (1) DE112016004160T5 (en)
WO (1) WO2017046704A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11861906B2 (en) * 2014-02-28 2024-01-02 Genius Sports Ss, Llc Data processing systems and methods for enhanced augmentation of interactive video content
CN114079820A (en) * 2020-08-19 2022-02-22 Ambarella International LP Interval shooting video generation centered on an event/object of interest input on a camera device by means of a neural network


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100546379C (en) * 2006-04-24 2009-09-30 Institute of Automation, Chinese Academy of Sciences Personalized customization method and device thereof based on the sports video of mobile device
US20080269924A1 (en) * 2007-04-30 2008-10-30 Huang Chen-Hsiu Method of summarizing sports video and apparatus thereof
US8204273B2 (en) * 2007-11-29 2012-06-19 Cernium Corporation Systems and methods for analysis of video content, event notification, and video content provision
US8839109B2 (en) * 2011-11-14 2014-09-16 Utc Fire And Security Americas Corporation, Inc. Digital video system with intelligent video selection timeline
US20140181668A1 (en) * 2012-12-20 2014-06-26 International Business Machines Corporation Visual summarization of video for quick understanding
KR102070924B1 (en) * 2014-01-20 2020-01-29 Hanwha Techwin Co., Ltd. Image Recoding System
US9313556B1 (en) 2015-09-14 2016-04-12 Logitech Europe S.A. User interface for video summaries

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6069655A (en) 1997-08-01 2000-05-30 Wells Fargo Alarm Services, Inc. Advanced video security system
US6995794B2 (en) 1999-06-30 2006-02-07 Logitech Europe S.A. Video camera with major functions implemented in host software
US20050160457A1 (en) 1999-09-13 2005-07-21 Microsoft Corporation Annotating programs for automatic summary generations
US6803945B1 (en) 1999-09-21 2004-10-12 Intel Corporation Motion detecting web camera system
US20080100704A1 (en) 2000-10-24 2008-05-01 Objectvideo, Inc. Video surveillance system employing video primitives
US20040027242A1 (en) 2001-10-09 2004-02-12 Venetianer Peter L. Video tripwire
US20110285842A1 (en) 2002-06-04 2011-11-24 General Electric Company Mobile device positioning system and method
US20050168574A1 (en) 2004-01-30 2005-08-04 Objectvideo, Inc. Video-based passback event detection
US20070002141A1 (en) 2005-04-19 2007-01-04 Objectvideo, Inc. Video-based human, non-human, and/or motion verification system and method
US20090219300A1 (en) 2005-11-15 2009-09-03 Yissum Research Development Company Of The Hebrew University Of Jerusalem Method and system for producing a video synopsis
US20080018737A1 (en) 2006-06-30 2008-01-24 Sony Corporation Image processing apparatus, image processing system, and filter setting method
US8300890B1 (en) * 2007-01-29 2012-10-30 Intellivision Technologies Corporation Person/object image and screening
US20100092037A1 (en) 2007-02-01 2010-04-15 Yissum Research Development Company of the Hebrew University of Jerusalem Method and system for video indexing and video synopsis
US20140085480A1 (en) * 2008-03-03 2014-03-27 VideoIQ, Inc. Content-aware computer networking devices with video analytics for reducing video storage and video communication bandwidth requirements of a video surveillance network camera system
US20100315497A1 (en) 2008-06-20 2010-12-16 Janssen Alzheimer Immunotherapy Monitoring system and method
US20100082585A1 (en) 2008-09-23 2010-04-01 Disney Enterprises, Inc. System and method for visual search in a video media player
US20120308077A1 (en) 2011-06-03 2012-12-06 Erick Tseng Computer-Vision-Assisted Location Check-In
US20150189402A1 (en) 2012-08-24 2015-07-02 Alcatel Lucent Process for summarising automatically a video content for a user of at least one video service provider in a network
US20150350611A1 (en) * 2013-05-30 2015-12-03 Manything Systems Limited Methods and systems for monitoring environments using smart devices
US20140355907A1 (en) 2013-06-03 2014-12-04 Yahoo! Inc. Photo and video search
US20150015735A1 (en) 2013-07-11 2015-01-15 Sightera Technologies Ltd. Method and system for capturing important objects using a camera based on predefined metrics
WO2015157440A1 (en) * 2014-04-08 2015-10-15 Assaf Glazer Systems and methods for configuring baby monitor cameras to provide uniform data sets for analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AUDE OLIVA; ANTONIO TORRALBA: "Modeling the shape of the scene: a holistic representation of the spatial envelope", INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 42, no. 3, 2001, pages 145 - 175
DIMOU A ET AL: "A user-centric approach for event-driven summarization of surveillance videos", 6TH INTERNATIONAL CONFERENCE ON IMAGING FOR CRIME PREVENTION AND DETECTION (ICDP-15) IET STEVENAGE, UK, 17 July 2015 (2015-07-17), pages 6 pp., XP002765032, ISBN: 978-1-78561-131-5 *
TAKAHASHI Y ET AL: "Automatic video summarization of sports videos using metadata", ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2004. 5TH PACIFIC RIM CONFERENCE ON MULTIMEDIA. PROCEEDINGS, PART II (LECTURE NOTES IN COMPUTER SCIENCE VOL.3332) SPRINGER-VERLAG BERLIN, GERMANY, 2004, pages 272 - 280, XP002765033, ISBN: 3-540-23977-4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019063409A1 (en) * 2017-09-29 2019-04-04 Canon Kabushiki Kaisha Method and device for optimizing the search for samples at a video management system
CN110557589A (en) * 2018-05-30 2019-12-10 Baidu USA LLC System and method for integrating recorded content
EP3672233A1 (en) * 2018-12-21 2020-06-24 Axis AB Method for carrying out a health check of cameras and a camera system
US20200204792A1 (en) * 2018-12-21 2020-06-25 Axis Ab Method for carrying out a health check of cameras and a camera system
CN111355948A (en) * 2018-12-21 2020-06-30 Axis AB Method of performing an operational condition check of a camera and camera system
US11290707B2 (en) 2018-12-21 2022-03-29 Axis Ab Method for carrying out a health check of cameras and a camera system
CN111355948B (en) * 2018-12-21 2024-04-12 Axis AB Method for performing an operation status check of a camera and camera system

Also Published As

Publication number Publication date
DE112016004160T5 (en) 2018-05-30
CN108351965B (en) 2022-08-02
CN108351965A (en) 2018-07-31

Similar Documents

Publication Publication Date Title
US9588640B1 (en) User interface for video summaries
US10299017B2 (en) Video searching for filtered and tagged motion
US9805567B2 (en) Temporal video streaming and summaries
US20170076156A1 (en) Automatically determining camera location and determining type of scene
US10789821B2 (en) Methods and systems for camera-side cropping of a video feed
US10977918B2 (en) Method and system for generating a smart time-lapse video clip
US20210125475A1 (en) Methods and devices for presenting video information
CN108351965B (en) User interface for video summary
US9158974B1 (en) Method and system for motion vector-based video monitoring and event categorization
EP1793580B1 (en) Camera for automatic image capture having plural capture modes with different capture triggers
EP2996016B1 (en) Information processing device and application execution method
CN102577367A (en) Time shifted video communications
US20100205203A1 (en) Systems and methods for video analysis
US20120159326A1 (en) Rich interactive saga creation
JP6941950B2 (en) Image providing system, image providing method, and image providing program
US20180232384A1 (en) Methods and apparatus for information capture and presentation
JP4018967B2 (en) Recorded video automatic generation system, recorded video automatic generation method, recorded video automatic generation program, and recording video automatic generation program recording medium
WO2017193343A1 (en) Media file sharing method, media file sharing device and terminal
US10878676B1 (en) Methods and systems for customization of video monitoring systems
WO2010090622A1 (en) Systems and methods for video analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16770367

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 112016004160

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16770367

Country of ref document: EP

Kind code of ref document: A1