CA2679461C - Method for recognizing content in an image sequence - Google Patents

Method for recognizing content in an image sequence

Info

Publication number
CA2679461C
CA2679461C CA2679461A
Authority
CA
Canada
Prior art keywords
image sequence
under test
additional feature
sequence under
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA2679461A
Other languages
French (fr)
Other versions
CA2679461A1 (en)
Inventor
Rudolf Hauke
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ATG Advanced Swiss Tech Group AG
Original Assignee
ATG Advanced Swiss Tech Group AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ATG Advanced Swiss Tech Group AG filed Critical ATG Advanced Swiss Tech Group AG
Publication of CA2679461A1 publication Critical patent/CA2679461A1/en
Application granted granted Critical
Publication of CA2679461C publication Critical patent/CA2679461C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/10Recognition assisted with metadata

Abstract

The invention relates to a method for recognizing content in an image sequence, comprising the steps of: detecting at least one face appearing in at least one frame of an image sequence under test; recognizing characteristic features of said at least one face; comparing said characteristic features to known features of characters stored in a database, thereby deciding whether said face represents a known character; detecting and recognizing at least one additional feature in at least one frame of said image sequence under test and at least one relation between the appearance of said known character and said at least one additional feature; comparing said at least one relation to metadata comprising known relations stored in said database each one assigned to a particular known image sequence, thereby recognizing if said image sequence under test at least partially equals one of said known image sequences.

Description

Method for recognizing content in an image sequence

The invention relates to a method for recognizing content in image sequences.
With increasing traffic on video sharing websites there is a growing demand for techniques to classify an image sequence in order to give the flood of information a structure that eases its use and searchability. On the other hand, providers of such video sharing websites are under increasing pressure from copyright holders to make sure their copyrights are not violated by the distribution of copyrighted video footage.
Framewise comparison of the image sequences that users want to upload is impracticable because of the huge amount of computing power and memory necessary. Furthermore, the provider would have to own a copy of every copyrighted movie. An approach for achieving the object is to extract metadata describing the image sequence and to compare them to sets of metadata assigned to individual movies stored in a database, thus tremendously reducing the necessary memory. Such an approach has recently been described by Mark Everingham, Josef Sivic and Andrew Zisserman, Department of Engineering Science, University of Oxford, in "Hello! My name is... Buffy" - Automatic Naming of Characters in TV Video. In this publication a method for automatically labelling appearances of characters in TV or film material is presented, which combines multiple sources of information:
(i) automatic generation of time stamped character annotation by aligning subtitles and transcripts;
(ii) strengthening the supervisory information by identifying when characters are speaking;
(iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks.
The drawback of this approach is that subtitles are available only in image sequences on DVDs and that these subtitles can easily be removed, thus making content recognition impossible. Transcripts are normally publicly available only for a fraction of all copyrighted videos and need to be tediously collected from a huge number of sources distributed over the internet. This approach may consequently ease content-based search within a video but is less adequate for preventing copyright violations.

It is therefore an object of the present invention to provide an improved method for recognizing content in an image sequence.

With the foregoing and other objects in view there is provided, in accordance with the invention, a method for recognizing content in an image sequence consisting of at least one frame, comprising the steps of: detecting at least one face appearing in at least one of the frames of an image sequence under test; recognizing characteristic features of said at least one face; comparing said characteristic features to known features of characters stored in a database, thereby deciding whether said face represents a known character;
detecting and recognizing at least one additional feature in at least one frame of said image sequence under test and at least one relation between the appearance of said known character and said at least one additional feature; comparing said at least one relation to metadata comprising known relations stored in said database each one assigned to a particular known image sequence, thereby recognizing if said image sequence under test at least partially equals one of said known image sequences, wherein successive appearance of at least two of said characters in said image sequence under test along with time intervals between said appearances is detected and compared to said metadata.
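The claimed steps can be pictured as a simple recognition pipeline. The following Python sketch is purely illustrative and is not the claimed implementation: detect_faces, recognize_face and detect_objects are hypothetical placeholders for real detectors, frames are represented as plain dictionaries of pre-extracted labels, and the relation and metadata formats are assumptions made only for this example.

```python
"""Illustrative sketch of the recognition pipeline (assumed data formats)."""

def detect_faces(frame):
    # placeholder: a real implementation would run a face detector here
    return frame.get("faces", [])

def recognize_face(face, character_db):
    # placeholder: compare biometric features against the character database
    return character_db.get(face)

def detect_objects(frame):
    # placeholder: object, text or logo detection
    return frame.get("objects", [])

def relations_in(frames, character_db):
    """Collect (label_a, label_b, frame gap) relations from the sequence under test."""
    sightings = []                          # (frame index, recognized label)
    for i, frame in enumerate(frames):
        for face in detect_faces(frame):
            who = recognize_face(face, character_db)
            if who:
                sightings.append((i, who))
        for obj in detect_objects(frame):
            sightings.append((i, obj))
    relations = set()
    for i, a in sightings:
        for j, b in sightings:
            if a != b:
                relations.add((a, b, j - i))   # temporal part of the relation
    return relations

def identify_sequence(frames, character_db, metadata_db, min_overlap=0.5):
    """Return known titles whose stored relations overlap the observed ones."""
    observed = relations_in(frames, character_db)
    hits = []
    for title, known_relations in metadata_db.items():
        overlap = len(observed & known_relations) / max(len(known_relations), 1)
        if overlap >= min_overlap:
            hits.append((title, overlap))
    return sorted(hits, key=lambda hit: -hit[1])

# toy usage with made-up labels
character_db = {"face_A": "Character A", "face_B": "Character B"}
frames = [{"faces": ["face_A"], "objects": ["car"]},
          {"faces": ["face_B"], "objects": []}]
metadata_db = {"Known Movie": {("Character A", "car", 0),
                               ("Character A", "Character B", 1)}}
print(identify_sequence(frames, character_db, metadata_db))   # [('Known Movie', 1.0)]
```

The point of the sketch is only the data flow: faces and additional features are reduced to relations, and the relations are matched against per-title metadata rather than against the frames themselves.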

In other words, according to the invention, an image sequence under test consisting of at least one frame or a sequence of frames is analyzed using a face detection technique for detecting at least one face in at least one of the frames. The term image sequence may denote any type of electronic image document. In this sense the term image sequence may apply to sequences of images, such as videos or image sequences from computer games, or to single images as a borderline case of an image sequence of length 1. If a face is detected in the frame, recognition of characteristic features, i.e. biometrical features, of that face is attempted. If these characteristic features are acquired, they are compared to known features of characters stored in a database. If the characteristic features match a set of known features, the character is identified as a known character.
Such a character can be a real person, such as an actor or an actress.
Likewise it can be an animated character, e.g. in an animated cartoon or a computer game. The database can contain information assigned to that known character describing in which known image sequences, e.g. Hollywood movies, this known character is starring, thereby tremendously reducing the number of datasets in the database to be considered in the subsequent search. The image sequence under test is furthermore scanned for at least one additional feature appearing in at least one frame.
The additional feature can be an attribute of the character himself. Preferably it is an object or another character appearing in one of the frames. In the latter case a relation between the appearance of the identified known character and the additional feature, a spatiotemporal relation to be more specific, is obtained by locating the identified known character and the additional feature, i.e. determining their respective positions in their respective frames, and by determining a time interval between their appearances, which can be zero if they appear in the same frame. In conventional 2D frames the depth dimension is also zero; however, 3D image sequences are not excluded from being analyzed by the method. This spatiotemporal relation is compared to metadata stored in the database comprising known spatiotemporal relations between the known character and additional features, each spatiotemporal relation being assigned to a particular known image sequence in which the known character stars. Thus it is recognized if said image sequence under test at least partially equals one of said known image sequences.
This way it is possible to figure out if the image sequence under test is a sequence out of one of the known image sequences, e.g. to detect if the image sequence under test is copyrighted, without relying on hidden markers, digital signatures, check sums or other auxiliary means that can easily be faked or removed, e.g. by projecting a movie and recording the projected images by means of a camera, e.g. a video camera, a webcam, a camera integrated into a cellular phone or the like. Another possible application of the method is to recognize content of computer games by analyzing their screen output, which is in the form of a video stream. Illegal or undesirable playing of such games can be detected and appropriate measures can be taken, e.g. informing an administrator or an authority, killing the game application or shutting down the computer or device on which the game is played. For instance, children can be kept from playing first person shooters, third person shooters or other computer or video fighting games on PCs, portable devices, video game consoles for home or casino use, handheld devices, cellular phones and the like.
Other features which are considered as characteristic for the invention are set forth in the appended claims.

The spatiotemporal relation between faces and objects towards each other can be scalable in order to be independent of the resolution of the frames.
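One minimal way to make such a relation independent of the frame resolution is to express positions as fractions of the frame dimensions. The helper below is an illustrative assumption, not part of the claimed method.

```python
def normalized_offset(pos_a, pos_b, frame_size):
    """Offset of b relative to a as a fraction of the frame size, so the same
    scene yields the same spatial relation at any resolution."""
    (xa, ya), (xb, yb) = pos_a, pos_b
    width, height = frame_size
    return ((xb - xa) / width, (yb - ya) / height)

# the same scene at two resolutions gives the same relation
print(normalized_offset((100, 50), (300, 150), (640, 360)))    # (0.3125, 0.277...)
print(normalized_offset((200, 100), (600, 300), (1280, 720)))  # (0.3125, 0.277...)
```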

According to another feature of the invention, said additional feature can be another face. This other face is detected and recognized the same way. If two or more characters are recognized, the further search reduces to sets of metadata in the database assigned to known image sequences in which said characters coappear. Regarding the spatiotemporal relation between the appearances of the two or more characters, the set of metadata to be considered is further reduced. For example, if one of the identified characters is Sean Connery and another one is Ursula Andress and they coappear in the same frame, the probability is high that the image sequence under test is a sequence of the James Bond movie "Dr. No", further confirmed by their spatiotemporal relation, i.e.
their relative position towards each other in the frame. Two or more characters in different frames with a certain time interval between their appearances can as well be used to identify the image sequence under test. Thereby the sheer appearance of the faces can be considered without regard to the absolute or relative position of the faces. Taking the relative position into account as well further increases the discriminatory power of the method.
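A hedged sketch of how coappearance and the time intervals between appearances narrow the candidate set follows. The metadata layout (one stored frame gap per character pair and title) and the gap tolerance are assumptions made only for this example.

```python
def candidate_titles(observed_pairs, metadata_db, tolerance=1):
    """Keep only titles whose stored character pairs and frame gaps are
    consistent with what was observed in the sequence under test.

    observed_pairs: iterable of (character_a, character_b, frame_gap)
    metadata_db:    {title: {(character_a, character_b): frame_gap}}
    """
    candidates = set(metadata_db)
    for a, b, gap in observed_pairs:
        surviving = set()
        for title in candidates:
            known_gap = metadata_db[title].get((a, b))
            if known_gap is not None and abs(known_gap - gap) <= tolerance:
                surviving.add(title)
        # never collapse to the empty set on a single unknown or mismatching pair
        candidates = surviving or candidates
    return candidates

metadata_db = {
    "Dr. No":     {("Sean Connery", "Ursula Andress"): 0},
    "Other Film": {("Sean Connery", "Ursula Andress"): 40},
}
print(candidate_titles([("Sean Connery", "Ursula Andress", 0)], metadata_db))
# {'Dr. No'}
```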

According to yet another feature of the invention, said at least one additional feature is an object, preferably in one of the classes: car, weapon, building, text, logo, trademark.
Such objects may be recognized and classified using pattern matching techniques applied for the identification of biometric features in huge databases. Reference objects for each class are also stored in said database. These reference objects can be images or 3D
models of objects, from which 2D projections can easily be derived in order to recognize an object in the image sequence under test regardless of its orientation.
Since the number of possible 2D projections of a 3D model is infinite, these projections do not all have to be stored in the database. Instead they can be generated on demand from the 3D model. Practical approaches work with just a few projections (12 to 24), which can be stored in the database or generated on demand. These approaches also allow for recognition independent of the orientation of the objects. 3D
modelling can also be applied to characters or faces. For instance, the coappearance of Sean Connery with an object identified as a car of recognized make, such as an Aston Martin, along with the spatiotemporal relation between their appearances can allow an unambiguous assignment of the image sequence under test. The discriminatory power of the method increases with the number of faces and objects incorporated in the comparison.
This applies for faces and objects appearing in one single frame as well as in different frames. Two or more characters or objects adjacent to each other in a frame can be combined to form an object class and tracked together as such. Characteristic features of animated faces appearing in computer games, e.g. computer or video fighting games such as Counter-Strike or Doom, can be recognized as well and lead to an adequate action like terminating the game application or informing an administrator or an authority.
The discriminatory power of the method is particularly high if weapons coappearing with these animated faces are recognized.
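As an illustration of matching against a handful of stored 2D projections (the 12 to 24 views mentioned above), the sketch below scores an image patch against every stored view with a normalized cross-correlation computed in NumPy. The correlation measure, the projection_db layout and the acceptance threshold are assumptions for this example, not the patented matcher.

```python
import numpy as np

def correlation(patch, template):
    """Normalized cross-correlation between two equally sized grey-level images."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def classify_object(patch, projection_db, threshold=0.7):
    """Compare a detected patch against the stored 2D projections of each
    reference object class and return the best class, or None."""
    best_class, best_score = None, 0.0
    for obj_class, projections in projection_db.items():   # e.g. 12 to 24 views
        for view in projections:
            score = correlation(patch, view)
            if score > best_score:
                best_class, best_score = obj_class, score
    return best_class if best_score >= threshold else None

# toy data: random 8x8 patches standing in for rendered projections
rng = np.random.default_rng(0)
car_view = rng.random((8, 8))
projection_db = {"car": [car_view], "weapon": [rng.random((8, 8))]}
noisy_patch = car_view + 0.01 * rng.random((8, 8))
print(classify_object(noisy_patch, projection_db))   # 'car'
```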

A text object appearing in the image sequence can be recognized either by an OCR (optical character recognition) technique, which recognizes every single alphabetical character, or by pattern matching, where a whole word is recognized by pattern matching or correlation, which is much faster than OCR. For this purpose a reference list of words can be stored in the database. Such a list can also be used to detect offensive language in images and frames. In case an offensive word is recognized, further action can be taken, such as blocking the displaying, downloading or uploading of the image, or informing an administrator or an authority. Texts to be recognized can consist of characters of any script, such as Hebrew, Cyrillic, Chinese, Japanese, Latin etc.
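A minimal sketch of whole-word matching against a reference list, including the offensive-word check, is given below. The word lists, the region representation and the match_words helper are illustrative assumptions; a real system would correlate rendered word templates with image regions rather than compare strings.

```python
OFFENSIVE_WORDS = {"badword1", "badword2"}            # assumed reference list
REFERENCE_WORDS = {"casino", "jackpot"} | OFFENSIVE_WORDS

def match_words(detected_text_regions):
    """Whole-word matching: each detected region is compared against the
    reference list as a unit instead of being OCR'd character by character.
    Here the regions are already strings, standing in for image patches."""
    found = [w for w in detected_text_regions if w.lower() in REFERENCE_WORDS]
    flagged = [w for w in found if w.lower() in OFFENSIVE_WORDS]
    return found, flagged

found, flagged = match_words(["Casino", "hello", "badword1"])
if flagged:
    print("block the image and notify an administrator:", flagged)
```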

In accordance with a preferred embodiment of the invention, the additional feature can be the color of an object. It can also be an object touched by said known character, such as a glass of wine or a handgun held by the character. In another preferred embodiment the additional feature is a costume worn by said known character. Background scenery, e.g. sea, mountains, an indoor setting etc., can also be classified as an additional feature.

According to another embodiment of the invention, the additional feature can be a verbal or nonverbal sound, such as engine noise or speech. The type of noise may be detected by spectral analysis, speech recognition techniques or the like. The appearance of a certain character and his recognized speech may also allow an unambiguous assignment of the image sequence under test to a specific known image sequence.
However, speech is often translated into a plurality of languages, whereas the image sequences always remain the same.

Other additional features that can be considered are facial expressions, hand gestures or body movements of said known character.

In a preferred embodiment of the invention the additional feature is a spatiotemporal profile of said known character acquired by tracking said known character in the course of the image sequence under test. Such a spatiotemporal profile can describe the sequences of frames in which one of the characters or objects appears in the image sequence under test. Information on the position of the character or object with respect to the frame is not mandatory but can increase the performance of the method. Thus time maps can be created describing the appearance of characters and objects or other additional features in the course of the image sequence under test, which can be compared to time maps contained in the metadata in said database. This comparison can also be carried out for fractions of the time maps in order to be able to identify short image sequences cut out of larger video footage.
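A time map can be sketched as one set of recognized labels per sampled frame; comparing a short map against every window of a stored map, as below, is one simple way to identify clips cut out of longer footage. The representation and the equality-based scoring are assumptions made only for this example.

```python
def time_map_similarity(short_map, long_map):
    """Best per-frame agreement of a short time map with every window of a
    stored one; returns (best score in [0, 1], offset of the best window).

    A time map is a list with one set of labels (characters/objects) per
    sampled frame, e.g. [{'A'}, {'A', 'car'}, set()].
    """
    n, m = len(short_map), len(long_map)
    best_score, best_offset = 0.0, 0
    for offset in range(m - n + 1):
        agree = sum(1 for i in range(n) if short_map[i] == long_map[offset + i])
        score = agree / n
        if score > best_score:
            best_score, best_offset = score, offset
    return best_score, best_offset

stored_map = [set(), {"A"}, {"A", "car"}, {"A", "B"}, {"B"}, set()]
clip_map   = [{"A", "car"}, {"A", "B"}, {"B"}]
print(time_map_similarity(clip_map, stored_map))   # (1.0, 2)
```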

The position of a face or an object can be described in the form of coordinates (Cartesian, polar or the like). Since conventional frames are 2D projections of 3D objects and settings, two coordinates will be sufficient in most cases. However, the terms image sequence and frame may as well refer to 3D images such as holograms; in this case three coordinates are needed to describe the position. Besides the coordinates, the description of a face or another object comprises an object classifier and a time stamp, if applicable, whereby time is considered the fourth dimension.
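Such a description maps naturally onto a small record holding a classifier, two or three coordinates and an optional time stamp; the field names below are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FeatureObservation:
    classifier: str                    # e.g. "face:Sean Connery" or "object:car"
    position: Tuple[float, ...]        # two coordinates for 2D frames, three for holograms
    timestamp: Optional[float] = None  # seconds into the sequence; time as the fourth dimension

print(FeatureObservation("object:car", (0.42, 0.61), timestamp=12.5))
```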

According to a preferred feature of the invention, the effort for recognizing content in the image sequence under test can be further reduced by subsampling. The conventional frame rate of movies presented in movie theaters is 24 frames per second.
Subsampling means that only a fraction of this number is regarded for content recognition.
For instance, with a subsampling frame rate of 2.4 frames per second every tenth frame is used for the method, thus further reducing the effort. Time sequence interpolation will in most cases be good enough for tracking normally moving characters or objects.
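Subsampling itself is a one-liner; the sketch below assumes the frames are available as an indexable sequence.

```python
def subsample(frames, step=10):
    """Keep every step-th frame, e.g. 24 fps reduced to 2.4 fps with step=10."""
    return frames[::step]

frames = list(range(240))               # stand-in for 10 s of 24 fps footage
samples = subsample(frames)
print(len(frames), "->", len(samples))  # 240 -> 24
```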

The method can be used for generating a cast list of the image sequence under test or for identifying a movie title by comparing that cast list to a database.

The method may be advantageously applied for detecting copyrighted image sequences.
The detection may be carried out on a client computer following an attempt to upload said image sequence under test from that client computer to a server, which may host a video sharing website. If said image sequence under test is recognized as copyrighted, the upload can be denied. The method may as well be carried out on a server following an upload of said image sequence under test from the client computer. If the image sequence under test is recognized as non-copyrighted, it is incorporated into a video database; otherwise it is rejected.
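A hedged sketch of the server-side upload gate described above: is_copyrighted stands in for the full recognition method and the data structures are placeholders chosen for the example.

```python
def is_copyrighted(image_sequence, metadata_db):
    # placeholder for the full recognition method described above
    return image_sequence in metadata_db

def handle_upload(image_sequence, metadata_db, video_db):
    """Server-side gate: reject recognized copyrighted material, otherwise
    incorporate the sequence into the video database."""
    if is_copyrighted(image_sequence, metadata_db):
        return "upload denied"
    video_db.append(image_sequence)
    return "accepted"

video_db = []
print(handle_upload("clip_123", {"clip_123"}, video_db))   # upload denied
print(handle_upload("clip_456", {"clip_123"}, video_db))   # accepted
```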

The method may also be used to scan a body of data, such as the internet, for similar image sequences or images. A single image shall be considered a borderline case of an image sequence consisting of only one frame in which the at least one character appears along with the additional feature.

The method can be implemented on any type of data processing facility, such as personal computers, servers, portable computers, and other portable units such as handheld computers or cell phones. The frames can be acquired from a file stored on said data processing facility or from a frame buffer of a graphics device, such as a graphics card arranged in the data processing facility. This method has been described in U.S. Patent Publication No. 2008/0049027.

The database can be built using a similar method (a minimal sketch of this enrollment flow is given after the list of steps below) comprising the steps of detecting at least one face appearing in at least one of the frames of an image sequence under test;
recognizing characteristic features of said at least one face;
storing said characteristic features in a database and assigning them to a known character;
detecting and recognizing at least one additional feature in at least one frame of said image sequence under test and at least one relation between the appearance of said known character and said at least one additional feature;
storing said at least one relation to metadata in said database;
assigning said at least one relation to said image sequence under test in said database.
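The enrollment steps listed above can be sketched as follows. detect_and_recognize is a placeholder for the face and feature detectors described earlier, and the relation format mirrors the illustrative recognition sketch given after the summary of the invention; none of these names come from the patent itself.

```python
def enroll_sequence(title, frames, character_db, metadata_db):
    """Build database entries from a known image sequence: store slots for the
    characters' features and assign the observed relations to the sequence."""
    def detect_and_recognize(frame):
        # placeholder: here each frame is simply a list of labels
        return list(frame)

    sightings = []
    for i, frame in enumerate(frames):
        for label in detect_and_recognize(frame):
            character_db.setdefault(label, {"features": None})   # known-features slot
            sightings.append((i, label))

    relations = set()
    for i, a in sightings:
        for j, b in sightings:
            if a != b:
                relations.add((a, b, j - i))
    metadata_db[title] = relations          # assign the relations to this sequence

character_db, metadata_db = {}, {}
enroll_sequence("Known Movie", [["Character A", "car"], ["Character B"]],
                character_db, metadata_db)
print(sorted(metadata_db["Known Movie"]))
```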

All features described in the embodiments above can be applied for building the database in a similar manner.

It must be emphasized that all features described above and in the appended claims can be combined with each other.

Although the invention is illustrated and described herein as embodied in a method for recognizing content of an image sequence, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the broadest interpretation consistent with the description as a whole and within the scope and range of equivalents of the claims.

The construction and method of operation of the invention, however, together with additional objects and advantages thereof will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic view of an image frame with faces and objects identified by a method according to the invention;

FIG. 2 is a diagram depicting the successive appearance of characters and objects in an image sequence;

FIG. 3 shows three consecutive frames of an image sequence with a moving character;
FIG. 4 depicts a track of a character in an image sequence; and
FIG. 5 is a track of three characters in the course of three frames of an image sequence.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to the figures of the drawings in detail and first, particularly, to FIG. 1 thereof, there is shown a schematic view of an image frame 1 with three faces 2.1 to 2.3 and three objects 3.1 to 3.3 identified by a method according to the invention. The frame can be part of an image sequence, such as a video or an image stream from the video output of a computer game. It can be as well a single image. In a first step of the method, the faces 2.1 to 2.3 appearing in the frame 1 are detected. Then a recognition of characteristic features, e.g. biometrical features, is attempted for each face 2.1 to 2.3.
These biometrical features are then compared to known features of characters stored in a database, thereby deciding whether the face 2.1 to 2.3 represents a known character.
If this comparison is successful and the characters are identified, the database can be checked for metadata of known image sequences in which these characters coappear. If the result is ambiguous, at least one of the objects 3.1 to 3.3 (e.g. hat, gun, car) can be recognized and classified by comparison to reference objects stored in the database, checking their appearance with the characters 2.1 to 2.3 in the same frame of an image sequence. Furthermore, the positions of faces 2.1 to 2.3 and objects 3.1 to 3.3 relative to each other, indicated by arrows, can be acquired and compared to metadata in the database, provided these metadata comprise such relative positions of characters and objects of known images or image sequences. Comparing identified characters and classified objects along with their positions relative to each other yields a high discriminatory power, so chances are good of recognizing whether the frame is part of an image sequence stored in the database. This way it can easily be checked if the content of the image is copyrighted, illegal or undesirable, and appropriate measures can be taken. The faces 2.1 to 2.3 can be faces of real characters like face 2.1 or faces of animated characters like faces 2.2 and 2.3. The number of faces 2.1 to 2.3 and objects 3.1 to 3.3 recognized in the frame 1 can be different from three.

FIG. 2 shows a diagram depicting the successive appearance of characters 2.1 to 2.3 and objects 3.1 to 3.3 in an image sequence under test. Instead of, or in addition to, recognizing a multitude of characters and objects in one single frame and their respective positions relative to each other as depicted in FIG. 1, three characters 2.1 to 2.3 and three objects are identified in at least a fraction of the frames 1 from an image sequence. The arrows indicate the time intervals in which the characters 2.1 to 2.3 and objects 3.1 to 3.3 respectively appear in the course of the image sequence. We refer to the pattern obtained this way as a time map. This time map can likewise be compared to metadata from the database in order to identify whether the image sequence under test at least partially equals an image sequence described by a set of metadata. The positions of the faces 2.1 to 2.3 and objects 3.1 to 3.3 can also be tracked over the course of the image sequence in order to further improve the method and increase its discriminatory power. The number of faces 2.1 to 2.3 and objects 3.1 to 3.3 recognized in the frames 1 of the image sequence can be different from three.

FIG. 3 shows three consecutive frames 1.1 to 1.3 of an image sequence with a moving character 2.1. The character 2.1 is tracked in the course of the image sequence, i.e. his position in every frame 1.1 to 1.3 is determined. The result is a trajectory 4 in Minkowski space, which can also be compared to metadata in the database provided these metadata are appropriately structured. The frames 1.1 to 1.3 do not necessarily have to be directly consecutive. Instead the image sequence can be subsampled, e.g.
every tenth frame 1 can be regarded. Like the positions of objects 3.1 to 3.3 and characters 2.1 to 2.3 relative to each other, the time intervals between their appearances can be described relative to each other, thus avoiding scale dependences occurring along with subsampling or supersampling.
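A track can be compared to a stored trajectory after normalizing positions to the frame size. The mean point-to-point distance below is a simple illustrative measure, not the patented comparison, and the acceptance tolerance is arbitrary.

```python
def trajectory_distance(track_a, track_b):
    """Mean point-to-point distance between two tracks of equal length.
    Each track is a list of (frame_index, x, y) tuples with positions already
    normalized to the frame size, so the comparison is resolution independent."""
    assert len(track_a) == len(track_b)
    total = 0.0
    for (_, xa, ya), (_, xb, yb) in zip(track_a, track_b):
        total += ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    return total / len(track_a)

observed = [(0, 0.10, 0.50), (10, 0.30, 0.52), (20, 0.55, 0.55)]
stored   = [(0, 0.11, 0.50), (10, 0.31, 0.53), (20, 0.54, 0.55)]
print(trajectory_distance(observed, stored) < 0.05)   # True -> likely the same track
```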

FIG. 4 depicts a track of the character 2.1 from FIG. 3 in an image sequence.
Basically FIG. 4 is another representation of the situation shown in FIG. 3. All frames 1.1 to 1.n are projected on top of each other, thus making the track or trajectory 4 of character 2.1 in the course of the image sequence visible. Objects can be tracked the same way as characters 2.1 to 2.n. Optionally a probability map of the positions of characters 2.1 to 2.3 or objects 3.1 to 3.3 in at least a fraction of the image sequence can be created this way, which may be compared to metadata in the database as an additional feature.
FIG. 5 shows a track of three characters 2.1 to 2.3 in the course of three frames 1.1 to 1.3 of an image sequence. In this figure three characters 2.1 to 2.3 are tracked similarly to what is shown in FIG. 3 and FIG. 4. Considering the tracks or trajectories 4 of more than one character 2.1 to 2.n and/or objects 3.1 to 3.n yields an even higher discriminatory power, thus facilitating an unambiguous recognition of the image sequence under test.
In the example the characters 2.2 and 2.3 are grouped and can be considered an object class of their own, for instance called crew.

Claims (31)

1. A method for recognizing content in an image sequence consisting of at least one image frame, comprising the steps of:
detecting at least one face appearing in at least one of the frames of an image sequence under test;
recognizing characteristic features of said at least one face;
comparing said characteristic features to known features of characters stored in a database, thereby deciding whether said face represents a known character;
detecting and recognizing at least one additional feature in at least one frame of said image sequence under test and at least one relation between the appearance of said known character and said at least one additional feature;
comparing said at least one relation to metadata comprising known relations stored in said database each one assigned to a particular known image sequence, thereby recognizing if said image sequence under test at least partially equals one of said known image sequences, wherein successive appearance of at least two of said characters in said image sequence under test along with time intervals between said appearances is detected and compared to said metadata.
2. The method according to claim 1, wherein said relation is spatiotemporal.
3. The method according to one of the claims 1 or 2, wherein said at least one character is a real person.
4. The method according to one of the claims 1 to 3, wherein said at least one character is an animated character.
5. The method according to one of the claims 1 to 4, wherein said at least one additional feature is another face.
6. The method according to one of the claims 1 to 5, wherein said at least one additional feature is an object.
7. The method according to claim 6, wherein the object is in one of the classes: car, weapon, building, text, logo, trademark.
8. The method according to one of the claims 6 or 7, wherein the text object is identified by pattern matching.
9. The method according to one of the claims 1 to 8, wherein said at least one additional feature is a color of an object.
10. The method according to one of the claims 1 to 9, wherein said at least one additional feature is an object touched by said known character.
11. The method according to one of the claims 1 to 10, wherein said at least one additional feature is a costume worn by said known character.
12. The method according to one of the claims 1 to 11, wherein said at least one additional feature is a background scenery.
13. The method according to one of the claims 1 to 12, wherein said at least one additional feature is a sound.
14. The method according to claim 13, wherein said sound is verbal.
15. The method according to claim 13, wherein said sound is nonverbal.
16. The method according to one of the claims 1 to 15, wherein said at least one additional feature is a facial expression of said known character.
17. The method according to one of the claims 1 to 16, wherein said at least one additional feature is a hand gesture of said known character.
18. The method according to one of the claims 1 to 17, wherein said at least one additional feature is a body movement of said known character.
19. The method according to one of the claims 1 to 18, wherein said at least one additional feature is a movement of the lips of said known character.
20. The method according to one of the claims 1 to 19, wherein said at least one additional feature is detected at least in said at least one frame in which said at least one face was detected.
21. The method according to one of the claims 1 to 20, wherein said at least one additional feature is detected in at least one second frame distinct from said at least one frame in which said at least one face was detected.
22. The method according to one of the claims 1 to 21, wherein said at least one additional feature is a spatiotemporal profile of said known character acquired by tracking said known character in the course of the image sequence under test.
23. The method according to one of the claims 1 to 22, wherein coappearance of at least two of said characters in said at least one frame is detected and compared to said metadata.
24. The method according to one of the claims 1 to 23, wherein said image sequence under test is subsampled, thereby reducing the number of frames to be tested.
25. The method according to one of the claims 1 to 24, wherein a cast list of the image sequence under test is generated by recognizing characters.
26. The method according to one of the claims 1 to 25, wherein said at least one additional feature is an object and wherein at least one of the additional features is a spatiotemporal profile of said known character acquired by tracking said known character in the course of the image sequence under test.
27. Application of the method according to one of the claims 1 to 26 for detecting whether said image sequence under test is copyrighted by comparing it to metadata of an image sequence known to be copyrighted.
28. Application according to claim 27, wherein the detection is carried out on a client computer following an attempt to upload said image sequence under test to a server and wherein said upload is denied if said image sequence under test is recognized as copyrighted.
29. Application according to one of the claims 27 or 28, wherein the detection is carried out on a server following an upload of said image sequence under test from a client computer wherein the image sequence under test is incorporated in a video database only if said image sequence under test is recognized as noncopyrighted.
30. Application of the method according to one of the claims 1 to 26 for detecting whether said image sequence under test is part of a video output of a computer game by comparing it to metadata of said computer game.
31. Implementation of the method according to one of the claims 1 to 26 in at least one of a computer, a portable device, a video game console, a handheld device and a cellular phone.
CA2679461A 2007-04-13 2008-04-01 Method for recognizing content in an image sequence Expired - Fee Related CA2679461C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11/785,027 2007-04-13
US11/785,027 US8077930B2 (en) 2007-04-13 2007-04-13 Method for recognizing content in an image sequence
PCT/EP2008/053868 WO2008125481A1 (en) 2007-04-13 2008-04-01 Method for recognizing content in an image sequence

Publications (2)

Publication Number Publication Date
CA2679461A1 CA2679461A1 (en) 2008-10-23
CA2679461C true CA2679461C (en) 2012-05-15

Family

ID=39428020

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2679461A Expired - Fee Related CA2679461C (en) 2007-04-13 2008-04-01 Method for recognizing content in an image sequence

Country Status (5)

Country Link
US (1) US8077930B2 (en)
EP (1) EP2137669A1 (en)
CA (1) CA2679461C (en)
MX (1) MX2009011031A (en)
WO (1) WO2008125481A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007034010A1 (en) * 2007-07-20 2009-01-22 Dallmeier Electronic Gmbh & Co. Kg Method and device for processing video data
US8090212B1 (en) 2007-12-21 2012-01-03 Zoran Corporation Method, apparatus, and system for reducing blurring of an image using multiple filtered images
US9270950B2 (en) * 2008-01-03 2016-02-23 International Business Machines Corporation Identifying a locale for controlling capture of data by a digital life recorder based on location
US8014573B2 (en) * 2008-01-03 2011-09-06 International Business Machines Corporation Digital life recording and playback
US8005272B2 (en) * 2008-01-03 2011-08-23 International Business Machines Corporation Digital life recorder implementing enhanced facial recognition subsystem for acquiring face glossary data
US9164995B2 (en) * 2008-01-03 2015-10-20 International Business Machines Corporation Establishing usage policies for recorded events in digital life recording
US9105298B2 (en) * 2008-01-03 2015-08-11 International Business Machines Corporation Digital life recorder with selective playback of digital video
US7894639B2 (en) * 2008-01-03 2011-02-22 International Business Machines Corporation Digital life recorder implementing enhanced facial recognition subsystem for acquiring a face glossary data
EP2356583B9 (en) 2008-11-10 2014-09-10 Metaio GmbH Method and system for analysing an image generated by at least one camera
CN101854518A (en) * 2009-03-30 2010-10-06 鸿富锦精密工业(深圳)有限公司 Object detection system and method
US8452599B2 (en) * 2009-06-10 2013-05-28 Toyota Motor Engineering & Manufacturing North America, Inc. Method and system for extracting messages
CN103003880B (en) * 2010-07-26 2016-10-19 皇家飞利浦电子股份有限公司 Representative image is determined for video
CN103810425B (en) * 2012-11-13 2015-09-30 腾讯科技(深圳)有限公司 The detection method of malice network address and device
US9152930B2 (en) 2013-03-15 2015-10-06 United Airlines, Inc. Expedited international flight online check-in
US9449216B1 (en) * 2013-04-10 2016-09-20 Amazon Technologies, Inc. Detection of cast members in video content
US11080777B2 (en) 2014-03-31 2021-08-03 Monticello Enterprises LLC System and method for providing a social media shopping experience
RU2656573C2 (en) * 2014-06-25 2018-06-05 Общество с ограниченной ответственностью "Аби Девелопмент" Methods of detecting the user-integrated check marks
CN109299692A (en) * 2018-09-26 2019-02-01 深圳壹账通智能科技有限公司 A kind of personal identification method, computer readable storage medium and terminal device
US20210034907A1 (en) * 2019-07-29 2021-02-04 Walmart Apollo, Llc System and method for textual analysis of images

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4697209A (en) 1984-04-26 1987-09-29 A. C. Nielsen Company Methods and apparatus for automatically identifying programs viewed or recorded
US5870754A (en) 1996-04-25 1999-02-09 Philips Electronics North America Corporation Video retrieval of MPEG compressed sequences using DC and motion signatures
US6188776B1 (en) 1996-05-21 2001-02-13 Interval Research Corporation Principle component analysis of images for the automatic location of control points
DE69936620T2 (en) * 1998-09-28 2008-05-21 Matsushita Electric Industrial Co., Ltd., Kadoma Method and device for segmenting hand gestures
TW452748B (en) * 1999-01-26 2001-09-01 Ibm Description of video contents based on objects by using spatio-temporal features and sequential of outlines
US6587574B1 (en) * 1999-01-28 2003-07-01 Koninklijke Philips Electronics N.V. System and method for representing trajectories of moving objects for content-based indexing and retrieval of visual animated data
US6711587B1 (en) * 2000-09-05 2004-03-23 Hewlett-Packard Development Company, L.P. Keyframe selection to represent a video
US6925197B2 (en) * 2001-12-27 2005-08-02 Koninklijke Philips Electronics N.V. Method and system for name-face/voice-role association
US7564476B1 (en) * 2005-05-13 2009-07-21 Avaya Inc. Prevent video calls based on appearance
US8156114B2 (en) * 2005-08-26 2012-04-10 At&T Intellectual Property Ii, L.P. System and method for searching and analyzing media content
JP2007072520A (en) * 2005-09-02 2007-03-22 Sony Corp Video processor
US7489804B2 (en) * 2005-09-26 2009-02-10 Cognisign Llc Apparatus and method for trajectory-based identification of digital data content
WO2007036892A1 (en) 2005-09-30 2007-04-05 Koninklijke Philips Electronics, N.V. Method and apparatus for long term memory model in face detection and recognition
US7921116B2 (en) * 2006-06-16 2011-04-05 Microsoft Corporation Highly meaningful multimedia metadata creation and associations
WO2008073366A2 (en) * 2006-12-08 2008-06-19 Sobayli, Llc Target object recognition in images and video
JP4945236B2 (en) * 2006-12-27 2012-06-06 株式会社東芝 Video content display device, video content display method and program thereof

Also Published As

Publication number Publication date
US8077930B2 (en) 2011-12-13
EP2137669A1 (en) 2009-12-30
MX2009011031A (en) 2010-01-25
US20080253623A1 (en) 2008-10-16
CA2679461A1 (en) 2008-10-23
WO2008125481A1 (en) 2008-10-23

Similar Documents

Publication Publication Date Title
CA2679461C (en) Method for recognizing content in an image sequence
Serrano et al. Fight recognition in video using hough forests and 2D convolutional neural network
Senst et al. Crowd violence detection using global motion-compensated lagrangian features and scale-sensitive video-level representation
Tejero-de-Pablos et al. Summarization of user-generated sports video by using deep action recognition features
Zhang et al. Character identification in feature-length films using global face-name matching
Bauml et al. Semi-supervised learning with constraints for person identification in multimedia data
Smeaton et al. High-level feature detection from video in TRECVid: a 5-year retrospective of achievements
Xu et al. Using webcast text for semantic event detection in broadcast sports video
Merler et al. Automatic curation of sports highlights using multimodal excitement features
Zhu et al. Trajectory based event tactics analysis in broadcast sports video
Nagrani et al. From benedict cumberbatch to sherlock holmes: Character identification in tv series without a script
Zhu et al. Player action recognition in broadcast tennis video with applications to semantic analysis of sports game
Zhu et al. Human behavior analysis for highlight ranking in broadcast racket sports video
Tapaswi et al. Total cluster: A person agnostic clustering method for broadcast videos
Kaufman et al. Temporal tessellation: A unified approach for video analysis
CN111209897B (en) Video processing method, device and storage medium
Jou et al. Structured exploration of who, what, when, and where in heterogeneous multimedia news sources
Awad et al. An overview on the evaluated video retrieval tasks at TRECVID 2022
Goldmann et al. Components and their topology for robust face detection in the presence of partial occlusions
Wang et al. Synchronization of lecture videos and electronic slides by video text analysis
Ngo et al. Structuring lecture videos for distance learning applications
Monteiro et al. Design and evaluation of classifier for identifying sign language videos in video sharing sites
Xu et al. Cast2face: Character identification in movie with actor-character correspondence
Sun et al. Field lines and players detection and recognition in soccer video
Shih et al. Content extraction and interpretation of superimposed captions for broadcasted sports videos

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20170403