US9271035B2 - Detecting key roles and their relationships from video - Google Patents

Detecting key roles and their relationships from video

Info

Publication number
US9271035B2
Authority
US
United States
Prior art keywords
video
key
roles
role
community
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/085,288
Other versions
US20120263433A1 (en)
Inventor
Tao Mei
Xian-Sheng Hua
Shipeng Li
Yan Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC
Priority to US13/085,288
Assigned to MICROSOFT CORPORATION. Assignment of assignors interest; assignors: LI, SHIPENG; MEI, TAO; WANG, YAN; HUA, XIAN-SHENG
Publication of US20120263433A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignment of assignors interest; assignor: MICROSOFT CORPORATION
Application granted
Publication of US9271035B2
Legal status: Active (expiration adjusted)


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/73 Querying
    • G06F 16/738 Presentation of query results
    • G06F 16/739 Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval using metadata automatically derived from the content
    • G06F 16/7837 Retrieval using metadata automatically derived from the content, using objects detected or recognised in the video content
    • G06F 16/784 Retrieval using metadata automatically derived from the content, the detected or recognised objects being people
    • G06K 9/00718
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 Advertisements
    • G06Q 30/0276 Advertisement creation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/84 Generation or processing of descriptive data, e.g. content descriptors

Definitions

  • Various components may be employed to automatically generate video presentations by acquiring key roles from the video without employing rich metadata.
  • The described components discover a community to represent the video. The components then use the community to determine the key roles, which the components then use to create a poster or other type of promotional material that accurately portrays the contents of the video.
  • The poster may include images of the key roles identified with reference to the discovered community.
  • FIG. 2 illustrates, at 200, example components for discovering a community from a video to acquire key roles independent of rich metadata such as cast lists and scripts.
  • The described approach includes discovering key roles and their relationships based on content analysis.
  • A video tool 202 (which may include, e.g., the video application 112 or similar logic) includes a video structuring component 204 that receives a video 206.
  • The video structuring component 204 analyzes and segments the video into hierarchical levels.
  • The video structuring component 204 then outputs the video structure information 208 as hierarchically structured levels that include scenes, shots, and key frames for further processing by other components included in the video tool 202.
  • A face grouping component 210 detects faces from the key frames and performs face grouping to output a face cluster 212 for each role in the video. Based on the roles represented by each face cluster 212 and the video structure information 208, the community discovery component 214 identifies nodes (e.g., according to co-occurrence of the roles in a scene) and constructs a community graph 216.
  • The community graph 216 is input to the generation tool 218, which in FIG. 2 is shown integrated in the video tool 202. In other implementations, for example as shown in the environment of FIG. 1, the generation tool 218 may be separate from and operate independently of the video tool 202.
  • In the community graph 216, each node represents a key role within the video and the weight of each edge indicates a significance of the relationship between each pair of roles.
  • The size of particular nodes in the community graph 216 corresponds to how "key" the community discovery component 214 determines the role is in the community.
  • For example, a node 220 represents the most key role, a node 222 represents the next most key role, and the nodes 224 and 226 represent other key roles that interact with the roles represented by the nodes 220 and 222 but appear less often in the video. Accordingly, the nodes 220 and 222 likely represent characters played by the stars of the video, while the nodes 224 and 226 likely represent major supporting roles.
  • FIG. 3 illustrates, at 300, example components for determining a face cluster 212.
  • The face grouping component 210 includes a face detection component 302 that receives one or more key frames 304, such as from the video structure information 208.
  • The face detection component 302 detects faces from the key frames 304 to obtain the face information 306, which includes bounding face rectangles as face images.
  • The face detection component 302 may detect multiple face areas from each key frame 304 in some instances, since a video can contain a large number of characters per shot.
  • The face grouping component 210 groups face images detected as the same person together to form several groups. The higher the number of face images in a group, the more often the detected face appears in shots of the video.
  • A feature extraction component 308 extracts features from the face information 306.
  • The feature extraction component 308 includes a face image normalization component 310 that normalizes the detected faces into (e.g., 64×64) grayscale images 312.
  • A feature concatenation component 314 concatenates the gray value of each pixel as a 4096-dimensional vector 316 for each detected face image, in some instances.
  • A face descriptor component 318 creates a description for each detected face image based on the vector 316.
  • The face descriptor component 318 includes a distance matrix component 320 that receives each vector 316 and compares the vectors using learning-based encoding and principal component analysis (LE-PCA) to produce a similarity matrix 322.
  • A clustering component 324 then takes the similarity matrix 322 as input and outputs a face cluster 212 with an exemplar 326 for each cluster, which is used by the generation tool 218.
  • In some implementations, the clustering component 324 employs an affinity propagation (AP) clustering algorithm; in other implementations, K-means or another clustering algorithm may be employed. (A runnable sketch of this grouping pipeline appears after this list.)
  • In some instances, the exemplar 326 is the face image first identified as belonging to the face cluster 212, although in other instances the exemplar 326 is selected based on other or additional criteria, such as having a forward-facing pose or favorable illumination conditions of the particular face image. The exemplar 326 is used as the node representation in the community graph 216 in some implementations.
  • Various approaches may be employed to automatically generate video presentations by acquiring key roles from a video without employing rich metadata.
  • One such approach includes discovering a community to represent the video.
  • The described approach includes automatically identifying key roles and their relationships based on video content analysis without employing metadata.
  • First, the approach includes identifying key roles from the video. Key roles are those characters identified by the faces that appear most often in the video; the faces that appear most often are likely to represent the main characters of the video. Once the key roles are identified, the approach discovers a community based on relationships between the identified roles.
  • FIG. 4 illustrates, at 400, example face images excerpted from several face clusters 212 from a video.
  • Each of the rows 402, 404, 406, and 408 represents one of four respective clusters and includes seven images from that cluster. The number of images per cluster will vary per video and per role.
  • The similarity of each pair of vectors representing face images is calculated using their Euclidean distance.
  • The clustering component 324 propagates two types of information for each pair of face images f_i and f_j.
  • The first type of information propagates from f_i to f_j and indicates how well f_j would serve as an exemplar among all of the potential exemplars of f_i.
  • The first type of information is termed responsibility and denoted r(i, j).
  • The second type of information propagates from f_j to f_i and indicates how appropriately f_j would act as an exemplar of f_i by considering other potential representative face images that may choose f_j as an exemplar.
  • The second type of information is termed availability and denoted a(i, j). (The canonical message updates for r(i, j) and a(i, j) are sketched after this list.)
  • The clustering component 324 gathers faces that share the same exemplar 326 into a face cluster 212, for example as shown in the excerpted rows 402, 404, 406, and 408, with each cluster containing the images of one role.
  • FIG. 5 illustrates, at 500, an example of a community graph, such as the community graph 216.
  • The community graph 500 is discovered from key roles identified from face clusters generated from the same video as the cluster excerpts shown in FIG. 4.
  • The nodes 502, 504, 506, and 508 of FIG. 5 are exemplars that correspond to the clusters 402, 404, 406, and 408 of FIG. 4, respectively. Meanwhile, the nodes 510 and 512 are exemplars from clusters that were omitted from the sample presented in FIG. 4 in the interest of brevity.
  • The community graph 500 depicts interactions among roles in a video using social network analysis, a field of research in sociology that models interactions among people as a complex network among entities and seeks to discover hidden properties.
  • People or roles are represented by nodes/vertices in a social network, while correlations or relationships among the roles are modeled as weighted edges. Because characters in videos interact in different ways, such as through physical contact, verbal interaction, appearing together in frames of the video, and speaking about other characters that are not in the current frame, a community graph may use various correlations.
  • In the illustrated example, the community discovery component 214 uses a "visually accompanying" correlation for roles that co-occur in a scene. In other examples, one or more different correlations such as "physical contact" and "verbal interaction" may be used.
  • Under the "visually accompanying" correlation, two roles that appear in the same scene need not appear together in a single frame in order to be correlated; roles appearing closer together in the time line of the scene indicate a stronger relationship.
  • d(a, b) = c / (1 + ΔT) when face a and face b are in the same scene; d(a, b) = 0 otherwise, where ΔT is the temporal distance between the appearances of the two faces in the scene (4)
  • The community discovery component 214 collects correlations or relationships of all of the faces from each detected role and calculates the weight of the edge between each pair of face clusters A and B in the graph to obtain an adjacency matrix W_A,B in accordance with equation 5.
  • The face detection component 302 often detects around 500 faces from the key frames of two hours of video.
  • Consequently, the community discovery component 214 calculates d(a, b) about C(500, 2) ≈ 10^5 times for such a two-hour video.
  • In some implementations, face pair correlations d(a, b) are calculated scene by scene, although in other implementations they may be calculated on a per-video basis or across multiple videos, for example in the case of a television or movie series. (A sketch of this edge-weight computation appears after this list.)
  • The community graph 500 includes nodes of differing sizes that illustrate the size of the corresponding face cluster.
  • The node 506 being larger than the other nodes indicates that the cluster 406 includes more face images than the other clusters for the example video.
  • The weights of the edges between the nodes illustrate the strength of the correlation.
  • Although FIG. 5 shows the weights both numerically and graphically (by the width of the edge line), both need not be shown.
  • A parameter can be set in various implementations to control a minimum strength of correlation as well as a number or percentage of roles/nodes to be included in a community graph 216, such as the graph 500.
  • For example, one set of parameter entries may include in the community graph a configurable number or percentage of identified key roles whose correlation weights exceed a configurable amount or percentage, while other parameter entries may include the top 5 roles, the top 25% of identified key roles with the highest 25% of correlation weights, or roles with weights of 0.2 or higher. In some instances, all nodes connected by edges meeting the threshold correlation weight are illustrated, and other parameter entries may be used.
  • FIG. 6 illustrates example user interface (UI) presentations in the form of posters created by the generation application 114, for example as embodied by the generation tool 218, using key-role acquisitions from a video.
  • Key roles and their relationships, such as those discovered via the community graph 216, provide a basis for a wide variety of applications.
  • For example, visual summaries or video posters may be generated based on acquired key roles.
  • FIG. 6 illustrates four different styles of poster visualizations based on the example community graph 500.
  • Visual summaries and video posters are static previews that include either an existing image or a synthesized image of video content.
  • Such content includes movies, television programs, music videos, and personal videos, as well as movie series and television series.
  • Digital or printed posters with graphical images, often containing text, are designed to promote the video content.
  • Promotional posters serve the purpose of attracting the attention of possible audiences as well as revealing key information about the content to entice the potential audience to view the video.
  • The generation tool 218 automatically creates a presentation or poster containing identified key roles, such as those selected from one of the community graphs 216 or 500.
  • The key roles will generally appear frequently in the video and have many interactions with other roles in the video.
  • The generation tool 218 identifies nodes/vertices that contain the most frequently captured faces with edges to other vertices having a correlation weight meeting a minimum or configurable threshold.
  • The generation tool 218 employs a role importance function Φ(v) on a vertex v, where FaceNum(v) denotes the number of faces in the cluster represented by the vertex v and Degree(v) is the degree of the vertex v in the community graph, e.g., the sum of the weights of the edges connected to v.
  • The terms FaceNum(v) and Degree(v) may be at different levels of granularity.
  • Φ(v) = FaceNum(v) + α · Degree(v) (6)
  • Various implementations of the generation tool 218 are configurable to select a number or percentage of roles with the largest Φ(v) as the key roles for presentation. For example, the 3-5 roles with the largest Φ(v) may be selected, roles with a Φ(v) above a threshold may be selected, or the roles with the top 25% of the calculated Φ(v) values may be selected. In at least one embodiment, the roles selected may be based on an organic separation, that is, a natural breaking point where there is a noticeably larger separation between the Φ(v) values in the range of Φ(v) represented by the community graph 216. (A sketch of this key-role selection appears after this list.)
  • FIG. 6 illustrates a representative frame style poster at 602.
  • To generate this style, the generation tool 218 selects a key frame that contains key roles. For example, the key frames in contention to be selected may be those containing the most key roles or those containing a number of key roles above a configurable threshold.
  • The generation tool 218 also quantifies one or more of how well the contending key frame represents the entire video in terms of color and/or theme, as well as the visual quality of the contending key frame, including whether the frame and the characters contained therein are in focus.
  • The generation tool 218 employs a representation function r(f_i) on each contending key frame f_i and selects the frame with the largest r.
  • The representation function r(f_i) is shown in equation 7.
  • In equation 7, j indicates the face index in the frame f_i, S(f_i^(j)) denotes the area of the j-th face, h(f_i) indicates the color histogram of the key frame f_i, and h̄ is the average color histogram of the video. (A hedged sketch of this frame selection appears after this list.)
  • FIG. 6 illustrates two collage style posters at 604 and 606.
  • To generate a collage style poster, the generation tool 218 extracts a representative face image for each key role and employs a collage technique to organize the faces into a visually appealing presentation.
  • The generation tool 218 selects candidate face images using the role importance function Φ(v) shown in equation 6.
  • The generation tool 218 also selects the number of roles to be included in the collage from the values assigned to nodes by the role importance function Φ(v) shown in equation 6.
  • The representative faces extracted from the candidate face images are further selected based on being front-facing, of acceptable visual quality (e.g., clear as opposed to blurry), and/or not occluded by other characters, scenery, or, in some instances, clothing such as hats, scarves, or dark glasses.
  • The collage technique used by the generation tool 218 to create the picture collage style shown at 604 detects the face region as the region-of-interest (ROI).
  • The generation tool 218 employs a Markov chain Monte Carlo (MCMC) method to assemble a picture collage in which all ROIs are visible while other parts of the images are overlaid.
  • The collage technique used by the generation tool 218 to create the video collage style shown at 606 concatenates the images, smoothing the boundaries to assemble a naturally appealing collage.
  • FIG. 6 illustrates a synthesized style poster at 608.
  • To generate this style, the generation tool 218 seamlessly embeds images of the key roles on a representative background.
  • The synthesized style poster contains a representative background, which introduces typical surroundings and context, in addition to prominently featuring key roles to entice potential viewers to watch the video.
  • The generation tool 218 selects a key frame that contains a representative background and filters out or extracts objects from the background based on character interaction with the objects.
  • In some instances, the generation tool 218 selects the background key frame using a process equivalent to that of selecting a representative frame as a poster, as discussed regarding 602 of FIG. 6.
  • In this instance, however, the generation tool 218 selects the frame with the smallest r(f_i) as defined by equation 7.
  • Alternatively, the generation tool 218 selects a frame in which a minimal number of faces appear, to avoid viewer distraction and to minimize object/face removal processing.
  • The generation tool 218 then seamlessly inserts face images of key roles on the filtered background.
  • The position and scale of the face images are based on the size of the corresponding cluster 212 represented by the node in the community graph 216; for example, images from the largest clusters are featured more prominently than those from smaller clusters. (A sketch of this placement logic appears after this list.)
  • FIGS. 7 and 8 are flow diagrams illustrating example processes 700 and 800 for performing key-role acquisition from video as represented in FIGS. 2-6.
  • The process 700 (as well as each process described herein) is illustrated as a collection of acts in a logical flow graph, which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof.
  • The blocks represent computer instructions stored on one or more computer-readable media that, when executed by one or more processors, perform the recited operations.
  • The order in which the process is described is not intended to be construed as a limitation, and any number of the described acts can be combined in any order to implement the process or an alternate process. Additionally, individual blocks may be deleted from the process without departing from the spirit and scope of the subject matter described herein. In various implementations, one or more acts of the process 700 may be replaced by acts from the other processes described herein.
  • The process 700 includes, at 702, the video tool 202 receiving a video.
  • The received video may be a video streamed over a network 102 or stored on a computing device 104.
  • Next, the video tool 202 performs video structuring.
  • The received video is structured by segmenting the video into a hierarchical structure that includes levels for scenes, shots, and key frames.
  • The video tool 202 then processes the faces from the structured video. For instance, faces from the key frames are processed by detecting and grouping.
  • The video tool 202 discovers a community based on the processed faces.
  • Finally, the video tool 202 automatically generates a presentation of the video based on the discovered community. In several implementations, the presentation is generated without relying on rich metadata such as cast lists, scripts, or crowd-sourced information such as that obtained from the world-wide-web.
  • The process 800 includes, at 802, the video tool 202 receiving a video.
  • The video structuring component 204 hierarchically structures the video into the video structure information 208, including scene, shot, and key frame segments. For instance, the video structuring component 204 may first detect shots as continuous sections of video taken by a single camera, extract a key frame from each shot, and detect similar shots that the video structuring component 204 groups to form a scene.
  • The community discovery component 214 and the face grouping component 210 receive the scene, shot, and key frame segments.
  • The face grouping component 210 performs face grouping by detecting faces from the key frames to form the face clusters 212.
  • The community discovery component 214 constructs a community graph 216 by identifying nodes (e.g., according to co-occurrence of the roles in a scene) based on the roles represented by the face clusters 212 and the video structure information 208.
  • The generation tool 218 receives the community graph 216.
  • The generation tool 218 identifies important roles by using a role importance function such as that shown in equation 6. For instance, the generation tool 218 calculates role importance based on the nodes/vertices of the community graph 216 that contain the most frequently captured faces and have an appropriate number of edges connecting to other nodes/vertices.
  • The generation tool 218 then generates one or more presentations in accordance with those shown in FIG. 6.
  • FIG. 9 is a flow diagram of an example process for acquiring key roles via face grouping.
  • The process 900 of FIG. 9 includes, at 902, the face grouping component 210 receiving the key frames 304.
  • The face detection component 302 detects the face information 306 from the key frames 304.
  • The feature extraction component 308 receives the detected face information 306.
  • The face image normalization component 310 normalizes the detected faces into (e.g., 64×64) grayscale images 312.
  • The feature concatenation component 314 concatenates the gray values of the pixels of the grayscale images 312 as a 4096-dimensional vector 316, in some instances.
  • The face descriptor component 318 receives the vector 316.
  • The distance matrix component 320 produces a similarity matrix 322 by comparing the received vectors using learning-based encoding and principal component analysis (LE-PCA).
  • The clustering component 324 generates face clusters, such as the face cluster 212, and selects an exemplar 326 for each cluster.
  • FIG. 10 is a flow diagram of an example process employing key-role acquisition from video to generate a presentation.
  • The process 1000 of FIG. 10 illustrates the generation tool 218 automatically creating a presentation or poster containing identified key roles selected from a community graph such as the community graphs 216 or 500.
  • The generation tool 218 identifies nodes/vertices containing the most-frequently captured faces and having edges to other vertices with a correlation weight meeting a minimum threshold by using a role importance function. For instance, the generation tool 218 may use a role importance function such as that shown in equation 6 to identify the desired nodes/vertices.
  • The generation tool 218 then selects one or more presentation styles for generation.
  • When the generation tool 218 selects a key frame style presentation, such as the example shown at 602, a representative frame containing key roles is selected as the presentation by using a representation function such as that shown in equation 7.
  • When the generation tool 218 selects a collage style presentation, such as the picture collage style example shown at 604 or the video collage style example shown at 606, the generation tool 218 selects candidate face images by using a role importance function, such as that shown in equation 6. From there, processing for the two example collage styles diverges.
  • When the generation tool 218 selects a picture collage style presentation, the generation tool 218 assembles a picture collage in which each face region-of-interest is visible, while other parts of the face images are overlaid.
  • When the generation tool 218 selects a video collage style presentation, the generation tool 218 creates a video collage by detecting the face regions-of-interest and concatenating the images with smoothed boundaries to assemble a naturally appealing collage.
  • When the generation tool 218 selects a synthesized style presentation, such as the example shown at 608, the generation tool 218 synthesizes a presentation by embedding images of the key roles on a representative background. For example, the representative background frame with the smallest r(f_i) as defined by equation 7 is selected. To complete the synthesized style presentation, the generation tool 218 embeds face images of identified key roles on the filtered background.
  • The generation tool 218 provides the selected presentation styles for display.
  • In some implementations, the presentations are displayed electronically, e.g., on a computer screen or digital billboard, although the presentations may also be provided for use in print media.
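
The face grouping pipeline of FIG. 3 (normalize to 64×64 grayscale, flatten to a 4096-dimensional vector, compare descriptors, cluster with affinity propagation) can be strung together with standard tooling. The Python sketch below is a hedged approximation rather than the patent's code: plain PCA over raw pixels stands in for the LE-PCA descriptor, which is not reproduced in this excerpt, and negative squared Euclidean distance serves as the similarity; scikit-learn's AffinityPropagation supplies the exemplars used as cluster representatives.

```python
# Hedged sketch of the FIG. 3 grouping pipeline; PCA over raw pixels stands in
# for the LE-PCA descriptor, which is not reproduced in this excerpt.
import numpy as np
import cv2
from sklearn.decomposition import PCA
from sklearn.cluster import AffinityPropagation

def group_faces(face_images):
    """face_images: list of BGR face crops taken from the detected key frames."""
    # Normalize each detected face to a 64x64 grayscale image and flatten it
    # into a 4096-dimensional vector, as described for components 310 and 314.
    vectors = np.stack([
        cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), (64, 64)).ravel()
        for img in face_images
    ]).astype(np.float64)

    # Descriptor: project to a lower-dimensional space (stand-in for LE-PCA).
    reduced = PCA(n_components=min(64, len(vectors))).fit_transform(vectors)

    # Similarity matrix: negative squared Euclidean distance between descriptors.
    diff = reduced[:, None, :] - reduced[None, :, :]
    similarity = -np.einsum("ijk,ijk->ij", diff, diff)

    # Affinity propagation picks exemplars and assigns every face to one of them.
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(similarity)
    exemplars = ap.cluster_centers_indices_  # indices of exemplar face images
    return labels, exemplars
```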
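
The responsibility r(i, j) and availability a(i, j) messages described for the clustering component 324 follow the canonical affinity-propagation updates. The sketch below is a minimal NumPy rendering of the standard Frey-Dueck formulation with the usual damping; it illustrates the message passing, not the patent's own implementation.

```python
# Sketch of the canonical affinity-propagation message updates (Frey & Dueck),
# matching the responsibility r(i, j) and availability a(i, j) described above.
import numpy as np

def affinity_propagation(S, iters=100, damping=0.5):
    """S: (n, n) similarity matrix; returns the exemplar index for each point."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibility: how well j would serve as exemplar of i
    A = np.zeros((n, n))  # availability: how appropriate it is for i to choose j
    for _ in range(iters):
        # r(i, j) <- s(i, j) - max_{j' != j} [ a(i, j') + s(i, j') ]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew

        # a(i, j) <- min(0, r(j, j) + sum over i' not in {i, j} of max(0, r(i', j)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)  # each point's chosen exemplar
```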
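
Equation 4 and the scene-by-scene edge-weight accumulation can be prototyped directly. Because equation 5 is not reproduced in this excerpt, the sketch below makes the simplest assumption, summing d(a, b) over all face pairs drawn from two clusters to build W_A,B; the constant c and the input layout are likewise illustrative.

```python
# Hedged sketch of the "visually accompanying" edge weights. Equation 5 is not
# reproduced in the excerpt, so summing d(a, b) per cluster pair is an assumption.
from collections import defaultdict

def face_correlation(t_a, t_b, same_scene, c=1.0):
    """Equation 4: d(a, b) = c / (1 + |T_a - T_b|) within a scene, else 0."""
    return c / (1.0 + abs(t_a - t_b)) if same_scene else 0.0

def adjacency(scenes, c=1.0):
    """scenes: list of scenes, where each scene is a list of (cluster_id,
    timestamp) entries, one per detected face. Returns W[(A, B)] per pair."""
    W = defaultdict(float)
    for faces in scenes:  # computed scene by scene, as described above
        for i, (ca, ta) in enumerate(faces):
            for cb, tb in faces[i + 1:]:
                if ca != cb:
                    key = (min(ca, cb), max(ca, cb))
                    W[key] += face_correlation(ta, tb, same_scene=True, c=c)
    return dict(W)

# Example: one scene where roles 0 and 1 appear 2 seconds apart.
print(adjacency([[(0, 10.0), (1, 12.0)]]))  # {(0, 1): 0.333...}
```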
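
Equation 6 is directly computable on a community graph. The sketch below assumes a networkx graph whose nodes carry a face_num attribute and whose edges are weighted (as in the construction sketch later in this document); α and the cutoff k are configurable knobs, as the description notes.

```python
# Sketch of key-role selection via equation 6 on the community graph.
def role_importance(graph, alpha=1.0):
    """Phi(v) = FaceNum(v) + alpha * Degree(v), where Degree(v) is the sum of
    the weights of the edges connected to v."""
    return {
        v: graph.nodes[v]["face_num"] + alpha * graph.degree(v, weight="weight")
        for v in graph.nodes
    }

def select_key_roles(graph, alpha=1.0, k=5):
    """Pick the k roles with the largest Phi(v); a percentage cut or an
    'organic separation' in the Phi values could be used instead."""
    scores = role_importance(graph, alpha)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```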
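
Equation 7 itself is not reproduced in this excerpt; only its terms are described (the face areas S(f_i^(j)), the frame histogram h(f_i), and the video-average histogram h̄). The sketch below therefore assumes one plausible form, rewarding large face area and penalizing histogram distance from the video average; the combination and the λ weight are assumptions, not the patent's formula.

```python
# Hedged stand-in for the representation function r(f_i): equation 7 is not
# reproduced here, so this particular combination of its terms is an assumption.
import numpy as np

def representation_score(face_areas, frame_hist, avg_hist, lam=1.0):
    """Larger summed face area and a histogram near the video average score higher."""
    face_term = float(np.sum(face_areas))                       # S(f_i^(j)) terms
    color_term = float(np.linalg.norm(frame_hist - avg_hist))   # ||h(f_i) - h_bar||
    return face_term - lam * color_term

def pick_representative_frame(candidates, avg_hist, lam=1.0):
    """candidates: list of (frame_id, face_areas, histogram); returns the frame
    with the largest r, as described for the representative frame style poster."""
    return max(candidates,
               key=lambda c: representation_score(c[1], c[2], avg_hist, lam))[0]
```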
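
For the synthesized style, the description states only that face position and scale track cluster size. The short Pillow sketch below illustrates that rule; the bottom-row layout and scaling constants are illustrative choices, not taken from the patent.

```python
# Hedged sketch of the synthesized-poster placement rule: bigger clusters get
# bigger faces. The row layout and scaling constants are illustrative choices.
from PIL import Image

def synthesize_poster(background, faces_with_counts, max_face_frac=0.45):
    """background: PIL.Image; faces_with_counts: list of (face_img, face_num)."""
    poster = background.copy()
    total = sum(n for _, n in faces_with_counts)
    x = 0
    for face, count in sorted(faces_with_counts, key=lambda f: -f[1]):
        # Scale each face in proportion to its cluster's share of all faces.
        h = max(int(poster.height * max_face_frac * (count / total) + 0.5), 1)
        w = max(int(face.width * h / face.height), 1)
        resized = face.resize((w, h))
        poster.paste(resized, (x, poster.height - h))  # anchor along the bottom
        x += w
    return poster
```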

Abstract

Tools and techniques for acquiring key roles and their relationships from a video independent of metadata, such as cast lists and scripts, are described herein. These techniques include discovering key roles and their relationships by treating a video (e.g., a movie, television program, music video, personal video, etc.) as a community. For instance, a video is segmented into a hierarchical structure that includes levels for scenes, shots, and key frames. In some implementations, the techniques include performing face detection and grouping on the detected key frames. In some implementations, the techniques include exploiting the key roles and their correlations in this video to discover a community. The discovered community provides for a wide variety of applications, including the automatic generation of visual summaries or video posters including acquired key roles.

Description

BACKGROUND
Promotional materials for videos are helpful in informing a potential audience about the content of the videos. For instance, video trailers, still-image posters, and the like may be helpful in letting users know about the theme or plot of a movie, television show, or other type of video. In order to create quality promotional materials, it is often useful to analyze the content of a particular video to determine the plot, key character roles within the video, and the like. With this information, the creator of the promotional material is able to create the trailer, poster, or other type of content in a way that adequately portrays the contents of the video.
Conventional approaches to movie content analysis depend on metadata provided by cast lists, scripts, and/or crowd-sourcing knowledge from the web without regard to correlations among roles. For instance, these traditional techniques may identify main characters from a video by manually identifying the characters and using metadata (e.g., cast lists, scripts, and/or crowd-sourcing knowledge from the web) associated with the movies. Some attempts have been made to associate names with the corresponding roles in news videos based on co-occurrence, as well as using face appearance, clothes appearance, speaking status, scripts, and image search results. One approach attempts to match an affinity network of faces and a second affinity network of names in order to assign a name to each face. However, such an approach has limited applicability for generating promotional posters since the matching merely matches faces to names.
While these traditional techniques may work in instances where the analyzed video includes rich metadata, such conventional approaches are not practical when little metadata is available, which may be true for internet protocol television (IPTV) and video on demand (VOD) systems. In contrast to metadata-rich videos, these videos often only include a brief title of each video section. In addition, the current process of creating promotional posters is time-intensive and expensive because the current process requires the skills of graphics artists and designers. Promotional posters are characterized by: (1) having a conspicuous main theme and object; (2) grabbing attention through the use of colors and textures; (3) being self-contained and self-explained; and (4) being specially designed for viewing from a distance. Accordingly, as the number of movies and other videos increases, manual techniques become difficult to effectively administer. In addition, not all of these movies and videos will have a sufficient amount of metadata available for analysis to create a high-quality poster or other types of promotional content.
SUMMARY
Creating promotional posters for videos may be helpful for marketing these videos. Displaying the main characters from a video is a cornerstone for promotional posters in some instances. Tools and techniques for automatically acquiring key roles from a video free from use of metadata (e.g., cast lists, scripts, and/or crowd-sourcing knowledge from the web) are described herein.
These techniques include discovering key roles and their relationships by treating a video (e.g., a movie, television program, music video, personal video, etc.) as a community. First, the techniques segment a video into a hierarchical structure that includes levels for scenes, shots, and key frames. Second, the techniques perform face detection and grouping on the detected key frames. Third, the techniques exploit the key roles and their correlations in this video to discover a community. Fourth, the discovered community provides for a wide variety of applications, including the automatic generation of visual summaries (e.g., video posters) based on the acquired key roles.
This summary is provided to introduce concepts relating to acquiring and presenting key roles via community discovery from video. These techniques are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.
FIG. 1 illustrates an example computing environment including a computing device that acquires key roles from video.
FIG. 2 illustrates example components for acquiring a key role from a video via community discovery.
FIG. 3 illustrates example components for determining a face cluster of a key role.
FIG. 4 illustrates an example excerpted from several face cluster results from a video.
FIG. 5 illustrates an example of a community graph discovered from key roles acquired from a video.
FIG. 6 illustrates example user interface (UI) presentations in the form of posters created using key roles acquired from a video.
FIGS. 7 and 8 are flow diagrams illustrating example approaches for acquiring key roles and their relationships from video for presentation.
FIG. 9 is a flow diagram of an example process for acquiring a key role via face grouping.
FIG. 10 is a flow diagram of an example process employing key-role acquisition from video to generate presentations.
DETAILED DESCRIPTION
Promotional posters are helpful in marketing videos, and often display the main characters from a video. The techniques described below automatically create a presentation that includes images of the characters that are determined, automatically, to be the main characters in the video. These techniques may make this automatic determination by analyzing the video to determine how often each character appears in the video.
The techniques described herein identify key roles of a video by analyzing the video itself. That is, the techniques use facial recognition techniques to identify the main characters of a video. From this information, the techniques may then automatically create a visual presentation (e.g., a poster or other visual summary) for the video that includes the main characters.
The techniques may identify the main characters in any number of ways. For instance, the techniques may determine how often a face appears on screen, how often a character is spoken about, and the like. Furthermore, the techniques may create a community graph based on the analysis of the movie, which may also be used to identify the key roles. The community graph may depict the interrelationships between characters in the movie, as well as a strength of these interrelationships.
By discovering relationships within a community in this way, these example techniques are able to discover key roles within a video that is free from typically-used rich metadata, such as cast lists, scripts, and/or crowd-sourced information obtained from the world-wide-web. These techniques include automatically discovering key roles and their relationships by treating a video (e.g., a movie, television program, music video, personal video, etc.) as a community. First, the techniques segment a video into a hierarchical structure (including shot, key frame, and scene). Second, the techniques perform face detection and grouping on the detected key frames. Third, the techniques create a community by exploiting the key roles and their correlations or relationships in the video segments. Finally, the discovered community provides for a wide variety of applications. In particular, the discovered community enables automatic generation of visual summaries or video posters based on the acquired key roles from the community.
For context, the entertainment industry has boomed in recent years, resulting in a huge increase in the number of videos, such as movies, television programs, music videos, personal videos, and the like. As the number of videos grows, it becomes important to index and search video libraries. In addition, because people respond favorably to images, such as those in promotional posters, being able to present a pleasant visual summary is important for promotional purposes. As such, the techniques described herein may be helpful in creating a poster or other image that visually represents a respective video in a manner that is consistent with the content of the video.
Generally, characters of a video are the center of attention within the video, and the interactions among these characters help to narrate a story. Because these characters (or "roles") and their interactions are the center of audience interest, identifying key roles and analyzing their relationships to discover a community is useful for understanding the content of a movie or other video. However, discovering a community is challenging due to the complex environment in movies. For example, the variation of characters' poses, wardrobe changes, and various illumination conditions may make the identification of characters within a video difficult. In addition, correlations or relationships between roles are difficult to analyze thoroughly because roles can interact in different ways, including direct interactions (e.g., dialogs with each other) and indirect interactions (e.g., talking about other roles). Thus, being able to automatically acquire key roles for indexing, while useful, is not straightforward.
In order to automatically detect key roles from video, the techniques described below first structure the incoming video, whether the video is streaming or stored. The first structural unit that the techniques identify is a shot, which includes a continuous section of video shot by one camera. The second structural unit that the techniques identify is a key frame, which, as used herein, includes an image extracted from a shot that includes at least one face and that represents the shot in terms of color, background image, and/or action. In some implementations a key frame may include more than one image from a shot. This definition of a "key frame" may differ from traditional uses of the term "key frame" in some instances. The third structural unit that the techniques build is a scene, which includes shots that are similar to one another and that the techniques group together to form the scene. In various implementations, shot similarity is determined based on the shots having similarity to each other greater than a predetermined or configurable threshold value.
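As a concrete illustration of this structuring step, the hedged sketch below (OpenCV-based, with illustrative thresholds; not the patent's implementation) detects shot boundaries from color-histogram jumps, takes a face-bearing frame from each shot as its key frame, and merges similar consecutive shots into scenes.

```python
# Minimal sketch of the shot / key frame / scene structuring described above.
# Assumptions: OpenCV is available; thresholds are illustrative, not from the patent.
import cv2
import numpy as np

def frame_histogram(frame, bins=32):
    """Normalized color histogram used to compare frames and shots."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3, [0, 256] * 3).flatten()
    return hist / (hist.sum() + 1e-9)

def structure_video(path, shot_threshold=0.5, scene_threshold=0.8):
    """Segment a video into shots, per-shot key frames, and scenes."""
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(path)
    shots, current = [], []  # each shot is a list of (frame_index, frame)
    prev_hist, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = frame_histogram(frame)
        # A large histogram jump marks a shot boundary (one-camera segment ends).
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > shot_threshold:
            shots.append(current)
            current = []
        current.append((idx, frame))
        prev_hist, idx = hist, idx + 1
    if current:
        shots.append(current)
    cap.release()

    # Key frame: a frame of the shot that contains at least one face.
    key_frames = []
    for shot in shots:
        for i, frame in shot:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if len(face_detector.detectMultiScale(gray)) > 0:
                key_frames.append((i, frame))
                break

    # Scene: consecutive shots whose average histograms are similar enough.
    shot_hists = [np.mean([frame_histogram(f) for _, f in s], axis=0) for s in shots]
    scenes, scene = [], [0]
    for k in range(1, len(shots)):
        similarity = 1.0 - 0.5 * np.abs(shot_hists[k] - shot_hists[k - 1]).sum()
        if similarity >= scene_threshold:
            scene.append(k)
        else:
            scenes.append(scene)
            scene = [k]
    scenes.append(scene)
    return shots, key_frames, scenes
```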
The techniques detect faces that appear in the key frames and group the faces into face clusters according to role. The techniques then construct a community graph based on co-occurrence of the faces in the video. In the community graph, key roles are presented as nodes/vertices and relationships between the key roles are presented as edges.
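A minimal way to hold the resulting graph, assuming face clusters keyed by role and scenes given as sets of key-frame indices, is sketched below with networkx; this simple co-occurrence count stands in for the time-decayed weighting of equation 4 sketched earlier.

```python
# Hedged sketch: build a community graph from face clusters and scene co-occurrence.
# Assumes `clusters` maps a role id to the set of key-frame indices where its
# faces appear, and `scenes` is a list of sets of key-frame indices per scene.
import itertools
import networkx as nx

def build_community_graph(clusters, scenes):
    graph = nx.Graph()
    for role, frames in clusters.items():
        # Node size later reflects how often the role appears (FaceNum).
        graph.add_node(role, face_num=len(frames))
    for scene in scenes:
        for a, b in itertools.combinations(clusters, 2):
            # Roles co-occurring in the same scene get a heavier edge.
            if clusters[a] & scene and clusters[b] & scene:
                w = graph.get_edge_data(a, b, {"weight": 0.0})["weight"]
                graph.add_edge(a, b, weight=w + 1.0)
    return graph

# Example: roles 0 and 1 co-occur in the first scene; role 2 co-occurs with
# role 0 in the second scene.
g = build_community_graph(
    {0: {1, 5, 9}, 1: {2, 5}, 2: {9}},
    [{1, 2, 5}, {9}],
)
print(g.edges(data=True))
```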
Once discovered, the community graph of key roles has a wide variety of applications including automatic generation of visual summaries such as video posters, images to accompany reviews, or the like. In one specific example of many, the techniques described herein generate a visual summary (e.g., a movie poster) by detecting key roles from a discovered community, selecting representative images for each key role, selecting a typical background image of the video, and creating the poster according to at least one of four different visualization techniques based on the representative key roles and the background.
The discussion begins with a section entitled “Example Computing Environment,” which describes one non-limiting environment that may implement the described techniques. Next, a section entitled “Example Components” describes non-limiting components that may implement the described techniques in the example environment or other environments. A third section, entitled “Example Approach to Community Discovery from a Video” illustrates and describes one example technique for discovering community from a video without employing metadata. A fourth section, entitled “Example Video Poster Generation,” illustrates an example application for acquiring a key role and presenting the key role via community discovery from video. A fifth section, entitled “Example Processes,” presents several example processes for acquiring a key role and presenting the key role via community discovery from video. A brief conclusion ends the discussion.
This brief introduction, including section titles and corresponding summaries, is provided for the reader's convenience and is intended to limit neither the scope of the claims nor the following sections.
Example Computing Environment
FIG. 1 illustrates an example computing environment 100 in which techniques for acquiring a key role and presenting the key role via community discovery from video independent of metadata may be implemented. The environment 100 includes a network 102 over which the video may be received by a computing device 104. The environment 100 may include a variety of computing devices 104 as video source and/or presentation destination devices. As illustrated, the computing device 104 includes one or more processors 106 and memory 108, which stores an operating system 110 and one or more applications including a video application 112, a generation application 114, and other applications 116 running thereon.
While FIG. 1 illustrates the computing device 104A as a laptop-style personal computer, other implementations may employ a personal computer 104B, a personal digital assistant (PDA) 104C, a thin client 104D, a mobile telephone 104E, a portable music player, a game-type console (such as Microsoft Corporation's Xbox™ game console), a television with an integrated set-top box 104F or a separate set-top box, or any other sort of suitable computing device or architecture. When the computing device 104 is embodied in a television or a set-top box, the device may be connected to a head-end or the internet, or may receive programming via a broadcast or satellite connection.
The memory 108, meanwhile, may include computer-readable storage media. Computer-readable media includes, at least, two types of computer-readable media, namely computer storage media and communications media.
Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media.
The applications 112, 114, and 116 may represent desktop applications, web applications provided over a network 102, and/or any other type of application capable of running on the computing device 104. The network 102, meanwhile, is representative of any one or combination of multiple different types of networks, interconnected with each other and functioning as a single large network (e.g., the Internet or an intranet). The network 102 may include wire-based networks (e.g., cable) and wireless networks (e.g., cellular, satellite, etc.).
As illustrated, the computing device 104 implements a video application 112 that functions to structure streaming or stored video for acquiring a key role and discovering a community for presentation by a generation application 114. In other implementations the generation application 114 may be integrated in the video application 112.
Example Components
Various components may be employed to automatically generate video presentations by acquiring key roles from the video without employing rich metadata. In at least one instance, the described components discover a community to represent the video. The components then use the community to determine the key roles, which the components then use to create a poster or other type of promotional material that accurately portrays the contents of the video. For instance, the poster may include images of the key roles identified with reference to the discovered community.
FIG. 2, for instance, illustrates example components for discovering a community from a video to acquire key roles independent of rich metadata such as cast lists and scripts at 200. The described approach includes discovering key roles and their relationships based on content analysis.
As shown in FIG. 2, a video tool 202 (e.g., which may include the video application 112 or similar logic) includes a video structuring component 204 that receives a video 206. In response, the video structuring component 204 analyzes and segments the video into hierarchical levels. The video structuring component 204 then outputs the video structure information 208 as hierarchically structured levels that include scenes, shots, and key frames for further processing by other components included in the video tool 202.
A face grouping component 210, in the illustrated instance, detects faces from the key frames and performs face grouping to output a face cluster 212 for each role in the video. Based on the roles represented by each face cluster 212 and the video structure information 208, the community discovery component 214 identifies nodes (e.g., according to co-occurrence of the roles in a scene) and constructs a community graph 216. The community graph 216 is input to the generation tool 218, which in FIG. 2 is shown integrated in the video tool 202. In other implementations, for example as shown in the environment of FIG. 1, the generation tool 218 may be separate from and operate independently of the video tool 202.
In a community graph 216, each node represents a key role within the video and the weight of each edge indicates the significance of the relationship between each pair of roles. In some instances the size of a particular node in the community graph 216 corresponds to how "key" the community discovery component 214 determines the corresponding role is in the community.
In the illustrated example of community graph 216, the four illustrated roles are identified as most important based on their interactions, although any number of roles may make up the community graph 216 in other instances. In this example, a node 220 represents the most key role, while a node 222 represents the next most key role, and the nodes 224 and 226 represent other key roles that interact with the roles represented by the nodes 220 and 222, but appear less often in the video. Accordingly, the nodes 220 and 222 likely represent characters played by the stars of the video while the nodes 224 and 226 likely represent major supporting roles.
FIG. 3 illustrates, at 300, example components for determining a face cluster 212. As shown at 300, the face grouping component 210 includes a face detection component 302 that receives one or more key frames 304, such as from the video structure information 208. The face detection component 302 detects faces from the key frames 304 to obtain the face information 306, which includes bounding rectangles around the detected faces as face images. The face detection component 302 may detect multiple face areas from each key frame 304, in some instances, since a video can contain a large number of characters per shot. Based on the face images detected from each face area, the face grouping component 210 groups the face images detected to be the same person together to form several groups. The higher the number of face images per group, the more often the detected face appears in shots of the video.
A feature extraction component 308 extracts features from the face information 306. The feature extraction component 308 includes a face image normalization component 310 that normalizes the detected faces into gray scale images 312 (e.g., 64×64 pixels). A feature concatenation component 314 then concatenates the gray values of the pixels into a 4096-dimensional vector 316 for each detected face image, in some instances.
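By way of example and not limitation, the normalization and concatenation steps might be sketched as follows; the use of OpenCV and the specific function names are assumptions for illustration only.

```python
import cv2
import numpy as np

def face_to_vector(face_bgr):
    """Normalize a detected face image and flatten it to a 4096-dim vector."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)  # discard color
    gray = cv2.resize(gray, (64, 64))                  # normalize to 64x64
    return gray.astype(np.float32).ravel()             # 64 * 64 = 4096 dims
```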
A face descriptor component 318 creates a description for each detected face image based on the vector 316. The face descriptor component 318 includes a distance matrix component 320 that receives each vector 316 and compares the vectors using learning-based encoding and principal component analysis (LE-PCA) to produce a similarity matrix 322. A clustering component 324 then takes the similarity matrix 322 as input and outputs a face cluster 212 with an exemplar 326 for each cluster, which is used by the generation tool 218. In various implementations, the clustering component 324 employs an Affinity Propagation (AP) clustering algorithm, although in other implementations a K-Means or other clustering algorithm may be employed. In some instances the exemplar 326 is the face image that is first identified as belonging to the face cluster 212, while in other instances the exemplar 326 is selected based on other or additional criteria, such as a forward-facing pose or the illumination conditions of the particular face image. The exemplar 326 is used as the node representation in the community graph 216 in some implementations.
Example Approach to Community Discovery from a Video
Various approaches may be employed to automatically generate video presentations by acquiring key roles from a video without employing rich metadata. One such approach includes discovering a community to represent the video. The described approach automatically identifies key roles and their relationships based on video content analysis without employing metadata. Key roles are those characters identified by the faces that appear most often in the video; such faces are likely to represent the main characters. Once the key roles are identified, the approach discovers a community based on relationships between the identified roles.
FIG. 4 illustrates, at 400, example face images excerpted from several face clusters 212 from a video. Each of the rows 402, 404, 406, and 408 represents one of four clusters and includes seven images excerpted from that cluster. The number of images per cluster will vary per video and per role. For each cluster in FIG. 4, the similarity between the two vectors representing each pair of face images is calculated using their Euclidean distance. To obtain clusters as exemplified in FIG. 4, the clustering component 324 iteratively calculates an exemplar for each cluster, starting by initially treating each of the n face images, {ƒ1, . . . , ƒn}, as a potential exemplar of itself. The clustering component 324 propagates two types of information for each pair ƒi and ƒj. The first type of information propagates from ƒi to ƒj and indicates how well ƒj would serve as an exemplar of ƒi among all of the potential exemplars of ƒi. The first type of information is termed responsibility and denoted r(i,j). The second type of information propagates from ƒj to ƒi and indicates how appropriately ƒj would act as an exemplar of ƒi by considering other potential representative face images that may choose ƒj as an exemplar. The second type of information is termed availability and denoted a(i,j).
Given a similarity matrix S_{n×n} = {s(i,j)}, where s(i,j) is the similarity between ƒi and ƒj (such as the similarity matrix 322), the two types of information are propagated iteratively as shown in equation 1, below.
r(i,j) ← s(i,j) − max_{j′≠j} {a(i,j′) + s(i,j′)}

a(i,j) ← min{0, r(j,j) + Σ_{i′∉{i,j}} max{0, r(i′,j)}}  (1)
Self availability is determined by equation 2, below.
a(j,j) ← Σ_{i′≠j} max{0, r(i′,j)}  (2)
The iteration process stops when convergence is reached, and the exemplar for each face ƒi is extracted by solving equation 3, presented below.
arg max_j {r(i,j) + a(j,j)}  (3)
The clustering component 324 groups faces that share the same exemplar 326 into a face cluster 212, for example as shown in the excerpted rows 402, 404, 406, and 408, with each cluster containing the images of one role.
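By way of example and not limitation, the message-passing updates of equations 1-3 might be sketched in Python as follows; the damping factor, the iteration count, and the convention that the diagonal of S holds each face's "preference" to serve as an exemplar are standard practical choices from the affinity propagation literature, not values specified herein.

```python
import numpy as np

def affinity_propagation(S, iters=200, damping=0.5):
    """Message-passing sketch of equations (1)-(3).

    S: n-by-n similarity matrix (e.g., similarity matrix 322) whose
    diagonal holds each face's "preference" to serve as an exemplar
    (the median similarity is a common choice).
    Returns, for each face i, the index of its exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibilities r(i, j)
    A = np.zeros((n, n))   # availabilities  a(i, j)
    rows = np.arange(n)
    for _ in range(iters):
        # Equation (1), responsibility:
        # r(i,j) <- s(i,j) - max_{j' != j} { a(i,j') + s(i,j') }
        AS = A + S
        best = AS.argmax(axis=1)
        first = AS[rows, best].copy()
        AS[rows, best] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, best] = S[rows, best] - second
        R = damping * R + (1 - damping) * Rnew

        # Equation (1), availability, with equation (2) on the diagonal:
        # a(i,j) <- min{0, r(j,j) + sum_{i' not in {i,j}} max{0, r(i',j)}}
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())     # keep r(j,j) unclipped
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()          # a(j,j) per equation (2)
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew

    # Equation (3): the exemplar for face i is argmax_j { r(i,j) + a(j,j) }.
    return (R + A.diagonal()[None, :]).argmax(axis=1)
```

Faces that resolve to the same exemplar index then form one face cluster 212.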
FIG. 5 illustrates, at 500, an example of a community graph, such as community graph 216. In this example, the community graph 500 is discovered from key roles identified from face clusters generated from the same video as the cluster excerpts shown in FIG. 4.
The nodes 502, 504, 506, and 508 of FIG. 5 are exemplars that correspond to the clusters 402, 404, 406, and 408 of FIG. 4, respectively. Meanwhile, the nodes 510 and 512 are exemplars from clusters that were omitted from the sample presented in FIG. 4 in the interest of brevity.
The community graph 500 depicts interactions among roles in a video using social network analysis, which is a field of research in sociology that models interactions among people as a complex network among entities and seeks to discover hidden properties. In the community graph 500, people or roles are represented by nodes/vertices in a social network, while correlations or relationships among the roles are modeled as weighted edges. Because characters in videos interact in different ways such as through physical contact, verbal interaction, appearing together in frames of the video, and speaking about other characters that are not in the current frame, a community graph may use various correlations.
In the example of the community graph 500, the community discovery component 214 uses a “visually accompanying” correlation for roles that co-occur in a scene. In other examples one or more different correlations such as “physical contact” and “verbal interaction” may be used.
Specifically, the "visually accompanying" correlation means that when two roles appear in the same scene, they need not appear together in a frame in order to have the "visually accompanying" correlation. Roles appearing closer together in the time line of the scene indicate a stronger relationship in accordance with the "visually accompanying" correlation. According to the analysis performed by the community discovery component 214, the correlation d(a,b) between two faces a and b is represented by equation 4, in which c is a constant in seconds and ΔT = |time(a) − time(b)| measures the temporal distance of the two faces a and b.
d(a,b) = c/(1 + ΔT) when face a and face b are in the same scene, and d(a,b) = 0 otherwise  (4)
The community discovery component 214 collects the correlations or relationships of all of the faces from each detected role and calculates the weight of the edge between each pair of face clusters A and B in the graph to obtain an adjacency matrix W_{A,B} in accordance with equation 5.
W_{A,B} = w(A,B) = Σ_{a∈A} Σ_{b∈B} d(a,b)  (5)
For example, the face detection component 302 often detects around 500 faces from the key frames of two hours of video. Thus, the community discovery component 214 calculates d(a,b) about C(500, 2) ≈ 10^5 times for such a two-hour video.
In at least one implementation, face pair correlations d(a, b) are calculated scene by scene. Although in other implementations face pair correlations d(a, b) may be calculated on a per video basis or across multiple videos, for example in the case of a television or movie series.
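By way of example and not limitation, equations 4 and 5 might be sketched as follows; the representation of faces as simple records and the value of the constant c are assumptions for illustration only.

```python
def correlation(face_a, face_b, c=10.0):
    """Equation (4): d(a,b) = c / (1 + dT) within a scene, else 0."""
    if face_a["scene"] != face_b["scene"]:
        return 0.0
    dT = abs(face_a["time"] - face_b["time"])   # temporal distance in seconds
    return c / (1.0 + dT)

def edge_weight(cluster_a, cluster_b, c=10.0):
    """Equation (5): W(A,B) = sum of d(a,b) over all face pairs."""
    return sum(correlation(a, b, c) for a in cluster_a for b in cluster_b)

# Illustrative face records for two role clusters A and B:
A = [{"scene": 3, "time": 12.0}, {"scene": 3, "time": 40.5}]
B = [{"scene": 3, "time": 15.2}]
print(edge_weight(A, B))   # larger when the roles appear close in time
```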
The community graph 500 includes nodes of differing sizes that illustrate the size of the corresponding face cluster. For example, the node 506 being larger than the other nodes indicates that the cluster 406 includes more face images than the other clusters for the example video. In addition, the weights of the edges between the nodes illustrate the strength of the correlation. Although FIG. 5 shows the weights both numerically and graphically by the width of the edge line, both need not be shown.
A parameter can be set in various implementations to control a minimum strength of correlation, as well as a number or percentage of roles/nodes to be included in a community graph 216, such as the graph 500. For example, one set of parameter entries may include only a configurable number or percentage of the identified key roles whose correlation weights exceed a configurable amount or percentage, while other parameter entries may include the top 5 roles, the top 25% of identified key roles with the highest 25% of correlation weights, or roles with correlation weights of 0.2 or higher. In some instances all nodes connected by edges meeting the threshold correlation weight are illustrated, and other parameter entries may be included.
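By way of example and not limitation, such parameter-driven pruning might be sketched as follows; the parameter names and default values are examples only.

```python
import numpy as np

def prune_graph(W, cluster_sizes, top_k=5, min_weight=0.2):
    """Keep the top_k largest clusters and edges at or above min_weight.

    W: symmetric adjacency matrix from equation (5).
    cluster_sizes: number of face images per cluster (one per node).
    Returns the kept node indices and the pruned adjacency matrix.
    """
    keep = np.argsort(cluster_sizes)[::-1][:top_k]   # most frequent roles
    Wp = W[np.ix_(keep, keep)].copy()
    Wp[Wp < min_weight] = 0.0                        # drop weak correlations
    return keep, Wp
```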
Example Video Poster Generation
FIG. 6 illustrates example user interface (UI) presentations in the form of posters created by the generation application 114, for example as embodied by the generation tool 218, using key-role acquisitions from a video. Key roles and their relationships, such as those discovered in the community graph 216, provide a basis for a wide variety of applications. For example, visual summaries or video posters may be generated based on acquired key roles. FIG. 6 illustrates four different styles of poster visualizations based on the example community graph 500. As described herein, visual summaries and video posters are static previews comprising either an existing image or a synthesized image of the video content.
In the video domain, content includes movies, television programs, music videos, and personal videos, as well as movie series and television series. Digital or printed posters with graphical images and often containing text are designed to promote the video content. Promotional posters serve the purpose of attracting the attention of the possible audiences as well as revealing key information about the content to entice the potential audience to view the video.
The generation tool 218 automatically creates a presentation or poster containing identified key roles such as selected from one of the community graphs 216 or 500. The key roles will generally appear frequently in the video and have many interactions with other roles in the video.
The generation tool 218 identifies nodes/vertices that contain the most frequently captured faces with edges to other vertices having a correlation weight meeting a minimum or configurable threshold. The generation tool 218 employs a role importance function ƒ(v) on a vertex v, where FaceNum(v) denotes the number of faces in the cluster represented by vertex v and Degree(v) is the degree of the vertex v in the community graph, e.g., the sum of the weights of the edges connected to v. The terms FaceNum(v) and Degree(v) may be at different levels of granularity. Thus, the generation tool 218 employs λ = (number of faces)/Σ_v Degree(v) to balance these two terms in the role importance function presented as equation 6, below.
ƒ(v) = FaceNum(v) + λ·Degree(v)  (6)
Various implementations of the generation tool 218 are configurable to select a number or percentage of roles with the largest ƒ(v) as the key roles for presentation. For example, the 3-5 roles with the largest ƒ(v) may be selected, roles with an ƒ(v) above a threshold may be selected, or the roles with the top 25% of the calculated ƒ(v) values may be selected. In at least one embodiment, the roles selected may be based on an organic separation, that is, a natural breaking point where there is a noticeably larger gap between successive ƒ(v) values across the range of ƒ(v) represented by the community graph 216.
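By way of example and not limitation, the role importance function of equation 6 and a top-k selection might be sketched as follows; the dictionary-based graph representation and names are assumptions for illustration only.

```python
def role_importance(face_num, degree, total_faces):
    """Equation (6): f(v) = FaceNum(v) + lambda * Degree(v).

    face_num: dict mapping each vertex to its cluster size FaceNum(v).
    degree:   dict mapping each vertex to the summed weight of its edges.
    """
    lam = total_faces / sum(degree.values())   # balances the two terms
    return {v: face_num[v] + lam * degree[v] for v in face_num}

# Example: choose the top four roles by f(v).
# scores = role_importance(face_num, degree, total_faces=500)
# key_roles = sorted(scores, key=scores.get, reverse=True)[:4]
```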
FIG. 6, at 602, illustrates a representative frame style poster. To create this style of poster, the generation tool 218 selects a key frame that contains key roles. For example, the key frames in contention to be selected may be those containing the most key roles or those containing a number of key roles above a configurable threshold. The generation tool 218 also quantifies one or more of: how well the contending key frame represents the entire video in terms of color and/or theme, and the visual quality of the contending key frame, including whether the frame and the characters contained therein are "in focus."
The generation tool 218 employs a representation function r(ƒi) on each contending key frame ƒi and selects the frame with the largest r. Representation function r(ƒi) is shown in equation 7, below.
r(ƒi) = [Σ_j log S(ƒi(j))] / |h(ƒi) − h̄|  (7)
In equation 7, j indexes the faces in the frame ƒi, S(ƒi(j)) denotes the area of the j-th face, h(ƒi) indicates the color histogram of key frame ƒi, and h̄ is the average color histogram of the video. Other features related to video quality are integrated in various implementations.
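By way of example and not limitation, the representation function of equation 7 might be sketched as follows; the use of an L1 distance between the histograms is an assumption, as the equation does not fix the histogram distance measure.

```python
import numpy as np

def representation_score(face_areas, frame_hist, avg_hist, eps=1e-9):
    """Equation (7): r(f) = (sum_j log S(f_j)) / |h(f) - h_bar|."""
    numerator = sum(np.log(a) for a in face_areas)           # larger faces score higher
    denominator = np.abs(frame_hist - avg_hist).sum() + eps  # histogram distance
    return numerator / denominator
```

Selecting the frame with the largest score yields the representative frame poster, while selecting the smallest supports the background selection discussed below with reference to the synthesized style at 608.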
FIG. 6 illustrates two collage style posters at 604 and 606. To create these styles of poster, the generation tool 218 extracts a representative face image for each key role and employs a collage technique to organize the faces into a visually appealing presentation. The generation tool 218 selects candidate face images using the role importance function ƒ(v) shown in equation 6. In addition, the generation tool 218 selects the number of roles to be included in the collage from the values assigned to nodes by the role importance function ƒ(v) shown in equation 6.
In various implementations, the representative faces extracted from the candidate face images are also selected based on being front-facing, of acceptable visual quality (e.g., clear as opposed to blurry), and/or not occluded by other characters, scenery, or, in some instances, accessories such as hats, scarves, or dark glasses.
The collage technique used by the generation tool 218 to create the picture collage style shown at 604 detects the face region as the region-of-interest (ROI). The generation tool 218 employs Markov Chain Monte Carlo (MCMC) sampling to assemble a picture collage in which all ROIs are visible while other parts of the images are overlaid. Similarly, after detecting the face region as the ROI, the collage technique used by the generation tool 218 to create the video collage style shown at 606 concatenates the images by smoothing the boundaries to assemble a naturally appealing collage.
FIG. 6 illustrates a synthesized style poster at 608. To create this style of poster, the generation tool 218 seamlessly embeds images of the key roles on a representative background. Thus, the synthesized style poster contains a representative background which introduces typical surroundings and context in addition to prominently featuring key roles to entice potential viewers to watch the video.
To create the synthesized style of poster, the generation tool 218 selects a key frame that contains a representative background and filters out or extracts objects from the background based on character interaction with the objects. In various implementations the generation tool 218 selects the background key frame using a process equivalent to that of selecting a representative frame as a poster as discussed regarding 602 of FIG. 6. However, when selecting a background key frame, the generation tool 218 selects the frame with the smallest r(ƒi) as defined by equation 7. When selecting a background frame, the generation tool 218 selects a frame in which a minimal number of faces appear, to avoid viewer distraction and to minimize object/face removal processing.
The generation tool 218 seamlessly inserts face images of key roles on the filtered background. In at least one implementation, the position and scale of the face images are based on the size of the corresponding cluster 212 represented by the node in the community graph 216. For example, images from the largest clusters are featured more prominently than those from smaller clusters.
Example Processes
FIGS. 7 and 8 are flow diagrams illustrating example processes 700 and 800 for performing key-role acquisition from video as represented in FIGS. 2-6.
The process 700 (as well as each process described herein) is illustrated as a collection of acts in a logical flow graph, which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer instructions stored on one or more computer-readable media that, when executed by one or more processors, perform the recited operations. Note that the order in which the process is described is not intended to be construed as a limitation, and any number of the described acts can be combined in any order to implement the process, or an alternate process. Additionally, individual blocks may be deleted from the process without departing from the spirit and scope of the subject matter described herein. In various implementations one or more acts of process 700 may be replaced by acts from the other processes described herein.
The process 700, for example, includes, at 702, the video tool 202 receiving a video. For instance, the received video may be a video streamed over a network 102 or stored on a computing device 104. At 704, the video tool 202 performs video structuring. For example, the received video is structured by segmenting the video into a hierarchical structure that includes levels for scenes, shots, and key frames. At 706, the video tool 202 processes the faces from the structured video. For instance, faces from the key frames are processed by detecting and grouping. At 708, the video tool 202 discovers a community based on the processed faces. At 710, the video tool 202 automatically generates a presentation of the video based on the discovered community. In several implementations, the presentation is generated without relying on rich metadata such as cast lists, scripts, or crowd-sourced information such as that obtained from the World Wide Web.
The process 800, as another example, includes, at 802, the video tool 202 receiving a video. At 804, the video structuring component 204 hierarchically structures the video into the video structure information 208 including scene, shot, and key frame segments. For instance, the video structuring component 204 may first detect shots, each being a continuous section of video taken by a single camera, extract a key frame from each shot, and detect similar shots that the video structuring component 204 groups to form a scene. At 806, the community discovery component 214 and the face grouping component 210 receive the scene, shot, and key frame segments. At 808, the face grouping component 210 performs face grouping by detecting faces from the key frames to form the face clusters 212.
At 810, meanwhile, the community discovery component 214 constructs a community graph 216 by identifying nodes (e.g., according to co-occurrence of the roles in a scene) based on the roles represented by the face clusters 212 and the video structure information 208. At 812, the generation tool 218 receives the community graph 216. At 814, the generation tool 218 identifies important roles by using a role importance function such as that shown in equation 6. For instance, the generation tool 218 calculates role importance based on the nodes/vertices of the community graph 216 that contain the most frequently captured faces and have an appropriate number of edges connecting to other nodes/vertices. At 816, the generation tool 218 generates one or more presentations in accordance with those shown in FIG. 6.
FIG. 9 is a flow diagram of an example process for acquiring key roles via face grouping. The process 900 of FIG. 9 includes, at 902, the face grouping component 210 receiving the key frames 304. At 904, the face detection component 302 detects the face information 306 from the key frames 304. At 906, the feature extraction component 308 receives the detected face information 306. At 908, the face image normalization component 310 normalizes the detected faces into (e.g., 64×64) gray scale images 312. At 910, the feature concatenation component 314 concatenates the gray value of the pixels of the gray scale images 312 as a 4096-dimensional vector 316, in some instances. At 912, the face descriptor component 318 receives the vector 316. At 914, the distance matrix component 320 produces a similarity matrix 322 by comparing received vectors using learning-based encoding and principal component analysis (LE-PCA). At 916, the clustering component 324 generates face clusters, like face cluster 212, and selects an exemplar 326 for each cluster.
FIG. 10 is a flow diagram of an example process employing key-role acquisition from video to generate a presentation. The process 1000 of FIG. 10 illustrates the generation tool 218 automatically creating a presentation or poster containing identified key roles selected from a community graph such as the community graphs 216 or 500.
At 1002, the generation tool 218 identifies nodes/vertices that contain the most frequently captured faces and that have edges to other vertices with a correlation weight meeting a minimum threshold, by using a role importance function. For instance, the generation tool 218 may use a role importance function such as that shown in equation 6 to identify the desired nodes/vertices.
At 1004, the generation tool 218 selects one or more presentation styles for generation. At 1006, when the generation tool 218 selects a key frame style presentation such as the example shown at 602, a representative frame containing key roles is selected as the presentation by using a representation function such as that shown in equation 7. At 1008, when the generation tool 218 selects a collage style presentation, such as the picture collage style example shown at 604 or the video collage style example shown at 606, the generation tool 218 selects candidate face images by using a role importance function. In some instances, the generation tool 218 uses a role importance function, such as that shown in equation 6, to select the candidate face images.
At 1010, processing for the two example collage styles diverges. At 1012, when the generation tool 218 selects a picture collage style presentation, the generation tool 218 assembles a picture collage in which each face region-of-interest is visible, while other parts of the face images are overlaid. At 1014, when the generation tool 218 selects a video collage style presentation, the generation tool 218 creates a video collage by detecting the face regions-of-interest and concatenating the images with smoothed boundaries to assemble a naturally appealing collage.
At 1016, when the generation tool 218 selects a synthesized style presentation such as the example shown at 608, the generation tool 218 synthesizes a presentation by embedding images of the key roles on a representative background. For example, the representative background frame with the smallest r(ƒi) as defined by equation 7 is selected. To complete the synthesized style presentation, the generation tool 218 embeds face images of identified key roles on the filtered background.
At 1018, the generation tool 218 provides the selected presentation styles for display. In various implementations, the presentations are displayed electronically, e.g., on a computer screen or digital billboard, although the presentations may also be provided for use in print media.
CONCLUSION
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.

Claims (20)

What is claimed is:
1. A method comprising:
receiving a video from which to identify key roles;
performing video structuring on the video to identify key frames;
processing faces from the key frames to generate processed faces;
discovering a community from the processed faces, wherein the discovering the community comprises:
correlating roles that co-occur in a scene, wherein the roles are associated with the processed faces;
determining a strength of a relationship between a first role of the roles and a second role of the roles that co-occur in the scene based at least in part on a lapse of time between a first time that the first role occurs and a second time that the second role occurs in the scene; and
identifying the key roles and relationships between the key roles based at least in part on the strength of the relationship; and
generating a user-interface presentation that visually summarizes content of the video by depicting the key roles that have been identified.
2. A method as recited in claim 1, wherein the video includes internet protocol television (IPTV) content or video on demand (VOD) content.
3. A method as recited in claim 1, wherein performing the video structuring on the video comprises:
identifying a hierarchical structure of the video, the hierarchical structure of the video including scenes, shots, and the key frames;
extracting a shot from the video, wherein the shot represents a continuous section of video shot by a camera;
identifying a key frame in the shot, wherein the key frame includes a plurality of images from the shot; and
grouping a plurality of shots to form a scene, the user-interface presentation at least partly depicting the scene.
4. A method as recited in claim 1, wherein:
the processing the faces from the key frames includes determining an importance of a role associated with at least one processed face of the processed faces; and
generating the user-interface presentation is based at least in part on the importance of the role associated with the at least one processed face.
5. A method as recited in claim 1, wherein the discovering the community from the processed faces includes constructing a community graph representing interrelationships between the roles.
6. A method as recited in claim 5, wherein the community graph further represents strengths of the interrelationships between the roles.
7. A method as recited in claim 1, wherein the user-interface presentation includes a key frame style presentation based at least on a key frame representing the video in terms of one or more of color, theme, or visual quality.
8. A method as recited in claim 1, wherein the user-interface presentation includes multiple pictures arranged in a collage.
9. A method as recited in claim 1, wherein the user-interface presentation includes images of the key roles embedded on a background representative of the video in terms of one or more of color, theme, or visual quality.
10. A method as recited in claim 1, wherein the key frames include at least one face and represent a shot of the video at least in terms of color, background image, or action.
11. A method as recited in claim 1, wherein the discovering the community further comprises:
determining that the first role and the second role each appear a number of times above a predetermined threshold;
determining that the first role and the second role are key roles; and
determining that a strength of the relationship between the first role and the second role meets or exceeds a threshold value based at least in part on the lapse of time being within a predetermined threshold of time.
12. A computer storage device having encoded thereon computer-executable instructions to configure a computer to perform operations comprising:
receiving a video from which to ascertain a key role;
processing faces from the video to obtain processed faces, wherein an individual processed face of the processed faces is associated with an individual role of a plurality of roles;
discovering a community from the processed faces, wherein the community represents interrelationships between characters in the video, the discovering the community comprising:
identifying two or more roles of the plurality of roles that co-occur in a scene; and
determining a relationship between the two or more roles that co-occur in the scene within a predetermined threshold of time, wherein a strength of the relationship meets or exceeds a threshold value;
ascertaining the key role from the video based at least on the two or more roles; and
generating a user-interface presentation that visually summarizes content of the video, the user-interface presentation including the key role.
13. A computer storage device as recited in claim 12, wherein:
processing the faces from the video includes determining an importance of the individual role; and
generating the user-interface presentation is based at least in part on the importance of the individual role.
14. A computer storage device as recited in claim 12, wherein ascertaining the key role from the video is performed independent of metadata associated with the video.
15. A computer storage device as recited in claim 12, wherein discovering the community from the processed faces includes:
identifying individual processed faces most frequently processed from the video and having a threshold level of relationships to other individual processed faces; and
employing the individual processed faces being identified as vertices to construct a community graph including correlations between the individual processed faces.
16. A computer storage device as recited in claim 12, wherein:
generating the user-interface presentation is based at least in part on at least one key frame and at least the key role; and
the user-interface presentation comprises an image of at least the key role embedded on a representative background obtained from the at least one key frame.
17. A computer storage device as recited in claim 12, further comprising instructions to configure the computer to perform operations comprising:
extracting a shot from the video; and
identifying a key frame in the shot.
18. An apparatus comprising:
a processor; and
a video tool comprising:
a video structuring component configured to:
receive a video;
analyze the video; and
segment the video into hierarchical levels of scenes, shots, and key frames;
a face grouping component configured to generate face clusters for faces identified in the key frames;
a community discovery component configured to identify one or more key roles and relationships between the one or more key roles by:
determining, from a face cluster of the face clusters, that at least one role occurs at a frequency above a predetermined threshold in a scene of the scenes; and
determining a relationship between the at least one role and a second role based at least in part on a determination that the at least one role and the second role co-occur in the scene within a predetermined threshold of time, wherein a strength of the relationship meets or exceeds a threshold value; and
a generation tool configured to generate a user-interface presentation that visually summarizes content of the video, the user-interface presentation based at least on the one or more key roles and the relationships.
19. An apparatus as recited in claim 18, wherein the generation tool is further configured to:
receive a community graph representing a community, the community representing the one or more key roles and the relationships between the one or more key roles; and
generate the user-interface presentation based at least in part on the community graph.
20. An apparatus as recited in claim 18, wherein the generation tool is further configured to:
determine an importance of the one or more key roles; and
generate the user-interface presentation based at least in part on the importance of the one or more key roles.
US13/085,288 2011-04-12 2011-04-12 Detecting key roles and their relationships from video Active 2032-01-22 US9271035B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/085,288 US9271035B2 (en) 2011-04-12 2011-04-12 Detecting key roles and their relationships from video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/085,288 US9271035B2 (en) 2011-04-12 2011-04-12 Detecting key roles and their relationships from video

Publications (2)

Publication Number Publication Date
US20120263433A1 US20120263433A1 (en) 2012-10-18
US9271035B2 true US9271035B2 (en) 2016-02-23

Family

ID=47006444

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/085,288 Active 2032-01-22 US9271035B2 (en) 2011-04-12 2011-04-12 Detecting key roles and their relationships from video

Country Status (1)

Country Link
US (1) US9271035B2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160026872A1 (en) * 2014-07-23 2016-01-28 Microsoft Corporation Identifying presentation styles of educational videos
US20170201525A1 (en) * 2016-01-10 2017-07-13 International Business Machines Corporation Evidence-based role based access control
US20170244778A1 (en) * 2016-02-23 2017-08-24 Linkedin Corporation Graph framework using heterogeneous social networks
CN109218660A (en) * 2017-07-07 2019-01-15 中兴通讯股份有限公司 A kind of method for processing video frequency and device
US10789291B1 (en) * 2017-03-01 2020-09-29 Matroid, Inc. Machine learning in video classification with playback highlighting
US11915429B2 (en) 2021-08-31 2024-02-27 Gracenote, Inc. Methods and systems for automatically generating backdrop imagery for a graphical user interface

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2698693B1 (en) * 2011-07-18 2016-01-13 ZTE Corporation Local image translating method and terminal with touch screen
US9449216B1 (en) * 2013-04-10 2016-09-20 Amazon Technologies, Inc. Detection of cast members in video content
US9154761B2 (en) 2013-08-19 2015-10-06 Google Inc. Content-based video segmentation
US10417271B2 (en) * 2014-11-25 2019-09-17 International Business Machines Corporation Media content search based on a relationship type and a relationship strength
KR102319456B1 (en) * 2014-12-15 2021-10-28 조은형 Method for reproduing contents and electronic device performing the same
US9699196B1 (en) * 2015-09-29 2017-07-04 EMC IP Holding Company LLC Providing security to an enterprise via user clustering
US10460196B2 (en) * 2016-08-09 2019-10-29 Adobe Inc. Salient video frame establishment
US10180939B2 (en) 2016-11-02 2019-01-15 International Business Machines Corporation Emotional and personality analysis of characters and their interrelationships
US10423822B2 (en) * 2017-03-15 2019-09-24 International Business Machines Corporation Video image overlay of an event performance
US10453496B2 (en) * 2017-12-29 2019-10-22 Dish Network L.L.C. Methods and systems for an augmented film crew using sweet spots
US10834478B2 (en) 2017-12-29 2020-11-10 Dish Network L.L.C. Methods and systems for an augmented film crew using purpose
US10783925B2 (en) 2017-12-29 2020-09-22 Dish Network L.L.C. Methods and systems for an augmented film crew using storyboards
CN108391180B (en) * 2018-02-09 2020-06-26 北京华录新媒信息技术有限公司 Video summary generation device and video summary generation method
US20190251350A1 (en) * 2018-02-15 2019-08-15 DMAI, Inc. System and method for inferring scenes based on visual context-free grammar model
US11308312B2 (en) 2018-02-15 2022-04-19 DMAI, Inc. System and method for reconstructing unoccupied 3D space
US11455986B2 (en) 2018-02-15 2022-09-27 DMAI, Inc. System and method for conversational agent via adaptive caching of dialogue tree
CN112101075B (en) * 2019-06-18 2022-03-25 腾讯科技(深圳)有限公司 Information implantation area identification method and device, storage medium and electronic equipment
US11334752B2 (en) * 2019-11-19 2022-05-17 Netflix, Inc. Techniques for automatically extracting compelling portions of a media content item
US11948360B2 (en) * 2020-06-11 2024-04-02 Netflix, Inc. Identifying representative frames in video content
CN113283480B (en) * 2021-05-13 2023-09-05 北京奇艺世纪科技有限公司 Object identification method and device, electronic equipment and storage medium
US11449893B1 (en) 2021-09-16 2022-09-20 Alphonso Inc. Method for identifying when a newly encountered advertisement is a variant of a known advertisement
CN113676776B (en) * 2021-09-22 2023-12-26 维沃移动通信有限公司 Video playing method and device and electronic equipment
CN115022733B (en) * 2022-06-17 2023-09-15 中国平安人寿保险股份有限公司 Digest video generation method, digest video generation device, computer device and storage medium

Patent Citations (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5305195A (en) 1992-03-25 1994-04-19 Gerald Singer Interactive advertising system for on-line terminals
US5595389A (en) * 1993-12-30 1997-01-21 Eastman Kodak Company Method and apparatus for producing "personalized" video games using CD discs
US6157677A (en) 1995-03-22 2000-12-05 Idt International Digital Technologies Deutschland Gmbh Method and apparatus for coordination of motion determination over multiple frames
US5623308A (en) 1995-07-07 1997-04-22 Lucent Technologies Inc. Multiple resolution, multi-stream video system using a single standard coder
US20040071441A1 (en) 1996-07-29 2004-04-15 Foreman Kevin J Graphical user interface for a motion video planning and editing system for a computer
US6028603A (en) 1997-10-24 2000-02-22 Pictra, Inc. Methods and apparatuses for presenting a collection of digital media in a media container
US6538672B1 (en) 1999-02-08 2003-03-25 Koninklijke Philips Electronics N.V. Method and apparatus for displaying an electronic program guide
US6535639B1 (en) 1999-03-12 2003-03-18 Fuji Xerox Co., Ltd. Automatic video summarization using a measure of shot importance and a frame-packing method
US6970639B1 (en) 1999-09-08 2005-11-29 Sony United Kingdom Limited System and method for editing source content to produce an edited content sequence
US20010034740A1 (en) 2000-02-14 2001-10-25 Andruid Kerne Weighted interactive grid presentation system and method for streaming a multimedia collage
US7107532B1 (en) 2001-08-29 2006-09-12 Digeo, Inc. System and method for focused navigation within a user interface
US20030095720A1 (en) 2001-11-16 2003-05-22 Patrick Chiu Video production and compaction with collage picture frame user interface
US7203380B2 (en) 2001-11-16 2007-04-10 Fuji Xerox Co., Ltd. Video production and compaction with collage picture frame user interface
US20040205498A1 (en) 2001-11-27 2004-10-14 Miller John David Displaying electronic content
US6922201B2 (en) 2001-12-05 2005-07-26 Eastman Kodak Company Chronological age altering lenticular image
US7095907B1 (en) 2002-01-10 2006-08-22 Ricoh Co., Ltd. Content and display device dependent creation of smaller representation of images
US20030179953A1 (en) 2002-03-20 2003-09-25 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, and image processing program
US20030197716A1 (en) 2002-04-23 2003-10-23 Krueger Richard C. Layered image compositing system for user interfaces
US20030210886A1 (en) * 2002-05-07 2003-11-13 Ying Li Scalable video summarization and navigation system and method
US20030210808A1 (en) 2002-05-10 2003-11-13 Eastman Kodak Company Method and apparatus for organizing and retrieving images containing human faces
US20030237091A1 (en) 2002-06-19 2003-12-25 Kentaro Toyama Computer user interface for viewing video compositions generated from a video composition authoring system using video cliplets
US7222300B2 (en) 2002-06-19 2007-05-22 Microsoft Corporation System and method for automatically authoring video compositions using video cliplets
US20040088723A1 (en) 2002-11-01 2004-05-06 Yu-Fei Ma Systems and methods for generating a video summary
US20040085341A1 (en) 2002-11-01 2004-05-06 Xian-Sheng Hua Systems and methods for automatically editing a video
US7127120B2 (en) 2002-11-01 2006-10-24 Microsoft Corporation Systems and methods for automatically editing a video
US20060184980A1 (en) 2003-04-07 2006-08-17 Cole David J Method of enabling an application program running on an electronic device to provide media manipulation capabilities
US20060153466A1 (en) 2003-06-30 2006-07-13 Ye Jong C System and method for video processing using overcomplete wavelet coding and circular prediction mapping
US20050147322A1 (en) 2003-10-01 2005-07-07 Aryan Saed Digital composition of a mosaic image
US20100066822A1 (en) 2004-01-22 2010-03-18 Fotonation Ireland Limited Classification and organization of consumer digital images using workflow, and face detection and recognition
US20050228849A1 (en) 2004-03-24 2005-10-13 Tong Zhang Intelligent key-frame extraction from a video
US20050255914A1 (en) * 2004-05-14 2005-11-17 Mchale Mike In-game interface with performance feedback
US7555718B2 (en) 2004-11-12 2009-06-30 Fuji Xerox Co., Ltd. System and method for presenting video search results
US20070058884A1 (en) 2004-11-12 2007-03-15 Microsoft Corporation Auto Collage
US20070110335A1 (en) 2004-11-12 2007-05-17 Microsoft Corporation Image Processing System for Digital Collage
US20060106764A1 (en) 2004-11-12 2006-05-18 Fuji Xerox Co., Ltd System and method for presenting video search results
US20060120624A1 (en) 2004-12-08 2006-06-08 Microsoft Corporation System and method for video browsing using a cluster index
US7526725B2 (en) * 2005-04-08 2009-04-28 Mitsubishi Electric Research Laboratories, Inc. Context aware video conversion method and playback system
US20060233245A1 (en) 2005-04-15 2006-10-19 Chou Peter H Selective reencoding for GOP conformity
US20060242139A1 (en) 2005-04-21 2006-10-26 Yahoo! Inc. Interestingness ranking of media objects
US7760956B2 (en) 2005-05-12 2010-07-20 Hewlett-Packard Development Company, L.P. System and method for producing a page using frames of a video stream
US20060257048A1 (en) 2005-05-12 2006-11-16 Xiaofan Lin System and method for producing a page using frames of a video stream
US20080019576A1 (en) 2005-09-16 2008-01-24 Blake Senftner Personalizing a Video
US20070074110A1 (en) 2005-09-29 2007-03-29 Miksovsky Jan T Media display collages
US20070089152A1 (en) 2005-10-14 2007-04-19 Microsoft Corporation Photo and video collage effects
US20070101269A1 (en) 2005-10-31 2007-05-03 Microsoft Corporation Capture-intention detection for video content analysis
US20070109304A1 (en) 2005-11-17 2007-05-17 Royi Akavia System and method for producing animations based on drawings
US20090169168A1 (en) * 2006-01-05 2009-07-02 Nec Corporation Video Generation Device, Video Generation Method, and Video Generation Program
US20070183497A1 (en) 2006-02-03 2007-08-09 Jiebo Luo Extracting key frame candidates from video clip
US20070183661A1 (en) 2006-02-07 2007-08-09 El-Maleh Khaled H Multi-mode region-of-interest video object segmentation
US20110085710A1 (en) * 2006-05-10 2011-04-14 Aol Inc. Using relevance feedback in face recognition
US20090116732A1 (en) 2006-06-23 2009-05-07 Samuel Zhou Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
US20080037826A1 (en) * 2006-08-08 2008-02-14 Scenera Research, Llc Method and system for photo planning and tracking
US20080075390A1 (en) 2006-09-22 2008-03-27 Fuji Xerox Co., Ltd. Annealing algorithm for non-rectangular shaped stained glass collages
US20080159649A1 (en) 2006-12-29 2008-07-03 Texas Instruments Incorporated Directional fir filtering for image artifacts reduction
US20080209327A1 (en) 2007-02-27 2008-08-28 Microsoft Corporation Persistent spatial collaboration
US20080304735A1 (en) 2007-06-05 2008-12-11 Microsoft Corporation Learning object cutout from a single example
US20080304808A1 (en) 2007-06-05 2008-12-11 Newell Catherine D Automatic story creation using semantic classifiers for digital assets and associated metadata
US20090003712A1 (en) 2007-06-28 2009-01-01 Microsoft Corporation Video Collage Presentation
US20100179816A1 (en) 2009-01-09 2010-07-15 Chung-Hsin Electric And Machinery Manufacturing Corp. Digital Lifetime Record and Display System
US20100199227A1 (en) 2009-02-05 2010-08-05 Jun Xiao Image collage authoring
US20100245567A1 (en) * 2009-03-27 2010-09-30 General Electric Company System, method and program product for camera-based discovery of social networks
US20110138306A1 (en) 2009-12-03 2011-06-09 Cbs Interactive, Inc. Online interactive digital content scrapbook and time machine

Non-Patent Citations (36)

* Cited by examiner, † Cited by third party
Title
AT&T: U-verse TV, <<http://www.att.com/u-verse/>>, last accessed Nov. 25, 2010.
Brooks, "Movie Posters from Video by Example", 5th International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging, Victoria, British Columbia, Canada, May 28-30, 2009, 8 pages.
Cao, et al., Face Recognition with Learning-based Descriptor, IEEE, 2010, pp. 2707-2714.
Everingham, et al., "Hello! My name is... Buffy" - Automatic Naming of Characters in TV Video, BMVC 2006, Sep. 4-7, 2006, Edinburgh, UK, 10 pages.
Final Office Action for U.S. Appl. No. 12/055,267, mailed on Jul. 15, 2013, Mei et al., "Video Collage Presentation", 14 pages.
Frascara, Communication Design: Principles, Methods, and Practices, summary of book, published Nov. 2004, accessed on Nov. 25, 2010 at <<http://www.design-bookshelf.com/Design/communication-design.html>>.
Frey, et al., Clustering by Passing Messages Between Data Points, Science, vol. 315, Feb. 16, 2007, pp. 972-976.
Krahnstover, et al., "Towards a Unified Framework for Tracking and Analysis of Human Motion", at <<http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/7478/20323/00938865.pdf>>, IEEE, 2001, pp. 47-54.
Li et al., "An Overview of Video Abstraction Techniques", Technical Report, Imaging Systems Laboratory, HP Laboratories, Palo Alto, CA, Jul. 31, 2001, 24 pages.
Liu, et al., "Video Collage", at <<http://delivery.acm.org/10.1145/1300000/1291341/p461-liu.pdf?key1=1291341&key2=2017162911&coll=Portal&dl=GUIDE&CFID=39418830&CFTOKEN=67965359>>, ACM, 2007, pp. 461-462.
Liu, et al., Learning to Detect a Salient Object, IEEE 2007, 8 pages.
Liu, et al., Naming Faces in Broadcast News Video by Image Google, MM 2008, Oct. 26-31, 2008, Vancouver, BC, Canada, pp. 717-720.
Mei, et al., Home Video Visual Quality Assessment With Spatiotemporal Factors, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 6, Jun. 2007, pp. 699-706.
Mei, et al., Video Collage: Presenting a Video Sequence Using a Single Image, Springer-Verlag, 2008.
Mentzelopoulos et al., "Key-Frame Extraction Algorithm using Entropy Difference", Multimedia Information Retrieval (MIR 2004), New York, NY, Oct. 15-16, 2004, 7 pages.
Office Action for U.S. Appl. No. 12/055,267, mailed on Apr. 11, 2012, Tao Mei, "Video Collage Presentation", 14 pages.
Office Action for U.S. Appl. No. 12/055,267, mailed on Apr. 15, 2014, Mei et al., "Video Collage Presentation", 16 pages.
Office Action for U.S. Appl. No. 12/055,267, mailed on Dec. 2, 2013, Mei et al., "Video Collage Presentation", 15 pages.
Office Action for U.S. Appl. No. 12/055,267, mailed on Feb. 6, 2013, Mei et al., "Video Collage Presentation", 16 pages.
Office Action for U.S. Appl. No. 12/055,267, mailed on Sep. 8, 2011, Tao Mei, "Video Collage Presentation", 12 pages.
Peters, et al., "MultiMatch", at <<http://multimatch.eu/docs/publicdels/sota-final-public.pdf>>, Information Society Technologies, 2006, 127 pages.
Satoh, et al., Name-It: Association of Face and Name in Video, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, Jun. 17-19, 1997.
Scott, Social Network Analysis: A Handbook, SAGE Publications (2000).
Shen, et al., Visual Analysis of Large Heterogeneous Social Networks by Semantic and Structural Abstraction, IEEE Transactions on Visualization and Computer Graphics, vol. 12, No. 6, Nov./Dec. 2006, pp. 1427-1439.
Skolos, et al., Type, Image, Message: A Graphic Design Layout Workshop, review, Eye Magazine, 2001, accessed on Nov. 25, 2010 at <<http://www.eyemagazine.com/review.php?id=140&rid=662&set=727>>.
Social Network Analysis, A Brief Introduction, at <<http://www.orgnet.com/sna.html>>, accessed Nov. 26, 2010.
Taskiran, Evaluation of Automatic Video Summarization Systems, Proceedings of SPIE, the International Society for Optics and Photonics, Jan. 16, 2006, 10 pages.
Wang, et al., "Video Collage: A Novel Presentation of Video Sequence", at <<http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/4284552/4284553/04284941.pdf?tp=&isnumber=4284553&arnumber=4284941>>, IEEE, 2007, pp. 1479-1482.
Wang, et al., "Video Content Representation on Tiny Devices", available at least as early as Jun. 1, 2007, at <<http:// www.cactus.tudelft.nl/CactusPublications/VideoContentRepTinyDevices.pdf>>, pp. 4.
Wang, et al., "Video Content Representation on Tiny Devices", available at least as early as Jun. 1, 2007, at >, pp. 4.
Wang, et al., Dynamic Video Collage, In: International Conference on Multimedia Modeling, Chongqing, China (2010) pp. 793-795.
Wang, et al., Picture Collage, Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), 8 pages.
Weng, et al., RoleNet: Treat a Movie as a Small Society, MIR 2007, Sep. 28-29, 2007, Augsburg, Bavaria, Germany, pp. 51-60.
Zhang, et al., "An Automated Video Object Extraction System Based on Spatiotemporal Independent Component Analysis and Multiscale Segmentation", available at least as early as Jun. 1, 2007, at <<http://www.ee.ryerson.ca/˜xzhang/publications/Eurasip2006-stlCAvideo-zhang-chen.pdf>>, Hindawi Publishing Corporation, 2006, pp. 22.
Zhang, et al., "An Automated Video Object Extraction System Based on Spatiotemporal Independent Component Analysis and Multiscale Segmentation", available at least as early as Jun. 1, 2007, at >, Hindawi Publishing Corporation, 2006, pp. 22.
Zhang, et al., Automatic partitioning of full-motion video, Multimedia Systems (1993) 1: 10-28.
Zhang, et al., Character Identification in Feature-Length Films Using Global Face-Name Matching, IEEE Transactions on Multimedia, vol. 11, No. 7, Nov. 2009, pp. 1276-1288.
Zhao, et al., Face Recognition: A Literature Survey, ACM Computing Surveys, vol. 35, No. 4, Dec. 2003, pp. 399-458.

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10248865B2 (en) * 2014-07-23 2019-04-02 Microsoft Technology Licensing, Llc Identifying presentation styles of educational videos
US9652675B2 (en) * 2014-07-23 2017-05-16 Microsoft Technology Licensing, Llc Identifying presentation styles of educational videos
US20160026872A1 (en) * 2014-07-23 2016-01-28 Microsoft Corporation Identifying presentation styles of educational videos
US20170201525A1 (en) * 2016-01-10 2017-07-13 International Business Machines Corporation Evidence-based role based access control
US10171471B2 (en) * 2016-01-10 2019-01-01 International Business Machines Corporation Evidence-based role based access control
US20170244778A1 (en) * 2016-02-23 2017-08-24 Linkedin Corporation Graph framework using heterogeneous social networks
US10264048B2 (en) * 2016-02-23 2019-04-16 Microsoft Technology Licensing, Llc Graph framework using heterogeneous social networks
US10789291B1 (en) * 2017-03-01 2020-09-29 Matroid, Inc. Machine learning in video classification with playback highlighting
US11232309B2 (en) 2017-03-01 2022-01-25 Matroid, Inc. Machine learning in video classification with playback highlighting
US11656748B2 (en) 2017-03-01 2023-05-23 Matroid, Inc. Machine learning in video classification with playback highlighting
CN109218660A (en) * 2017-07-07 2019-01-15 中兴通讯股份有限公司 A kind of method for processing video frequency and device
CN109218660B (en) * 2017-07-07 2021-10-12 ZTE Corporation Video processing method and device
US11915429B2 (en) 2021-08-31 2024-02-27 Gracenote, Inc. Methods and systems for automatically generating backdrop imagery for a graphical user interface

Also Published As

Publication number Publication date
US20120263433A1 (en) 2012-10-18

Similar Documents

Publication Publication Date Title
US9271035B2 (en) Detecting key roles and their relationships from video
US8457469B2 (en) Display control device, display control method, and program
US8750602B2 (en) Method and system for personalized advertisement push based on user interest learning
US8503770B2 (en) Information processing apparatus and method, and program
US11057457B2 (en) Television key phrase detection
US8938153B2 (en) Representative image or representative image group display system, representative image or representative image group display method, and program therefor
CN107852520A (en) Managing uploaded content
Tiwari et al. A survey of recent work on video summarization: approaches and techniques
US20140020005A1 (en) Devices, systems, methods, and media for detecting, indexing, and comparing video signals from a video display in a background scene using a camera-enabled device
WO2020259510A1 (en) Method and apparatus for detecting information embedding region, electronic device, and storage medium
TW201907736A (en) Method and device for generating video summary
US11605227B2 (en) Method and system for dynamically analyzing, modifying, and distributing digital images and video
CN103984778B (en) Video retrieval method and system
CN111491187A (en) Video recommendation method, device, equipment and storage medium
JP2013207529A (en) Display control device, display control method and program
Lai et al. Tennis Video 2.0: A new presentation of sports videos with content separation and rendering
JP2006217046A (en) Video index image generator and generation program
JP2009060413A (en) Method and system for extracting feature of moving image, and method and system for retrieving moving image
Kim et al. Automatic color scheme extraction from movies
Khalil et al. Detection of violence in cartoon videos using visual features
CN113569668A (en) Method, medium, apparatus and computing device for determining highlight segments in video
CN114283349A (en) Data processing method and device, computer equipment and storage medium
Wang et al. Community discovery from movie and its application to poster generation
Ejaz et al. Video summarization by employing visual saliency in a sufficient content change method
CN116137671A (en) Cover generation method, device, equipment and medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MEI, TAO;HUA, XIAN-SHENG;LI, SHIPENG;AND OTHERS;SIGNING DATES FROM 20110330 TO 20110412;REEL/FRAME:026579/0985

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0001

Effective date: 20141014

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8