WO2004034708A1 - Method and apparatus for separately providing additional information on each object in digital broadcasting image

Method and apparatus for separately providing additional information on each object in digital broadcasting image

Info

Publication number
WO2004034708A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
information
tracking
unit image
scene transition
Application number
PCT/KR2002/001895
Other languages
French (fr)
Inventor
Seong-Whan Lee
Sang-Cheol Park
Seong-Hoon Lim
Original Assignee
Virtualmedia Co., Ltd.
Application filed by Virtualmedia Co., Ltd. filed Critical Virtualmedia Co., Ltd.
Priority to AU2002348647A priority Critical patent/AU2002348647A1/en
Priority to PCT/KR2002/001895 priority patent/WO2004034708A1/en
Publication of WO2004034708A1 publication Critical patent/WO2004034708A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/20: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H04N 19/46: Embedding additional information in the video signal during the compression process

Abstract

An apparatus and method for providing additional information regarding a particular object in a digital broadcast image are provided. When a user designates an object, the object is extracted from unit images constituting a motion image signal. A moving position of the object is tracked over a stream of unit images. Additional information regarding the object is inserted in a range of unit images over which the object has been tracked so that the additional information regarding the object can be provided. Since a particular object in a motion image is extracted, recognized, and tracked, additional information regarding only the particular object can be provided. In addition, the apparatus and method can be widely used for systems that provide detailed information regarding goods online, T-commerce systems, etc.

Description

APPARATUS AND METHOD FOR SEPARATELY PROVIDING
ADDITIONAL INFORMATION ON EACH OBJECT IN DIGITAL
BROADCASTING IMAGE
Technical Field
The present invention relates to an apparatus and method for providing additional information regarding a particular object in a digital broadcast image, and more particularly, to an apparatus and method for recognizing and tracking a particular object corresponding to a user's setting in a digital broadcast image and providing additional information regarding the particular object.
Background Art
With the development of digital broadcast technology, services of providing additional information together with a digital broadcast image (hereinafter, referred to as additional information services) are increasing.
In such additional information services, when a motion image is displayed on a separate apparatus such as a digital TV, additional information regarding at least one object included in the motion image is also displayed on a screen together with the motion image. Conventionally, additional information regarding all objects in the motion image is provided in units of frames over time.
As described above, when additional information regarding all objects in a frame is provided, additional information regarding a particular object needed by a user cannot be provided efficiently.
Disclosure of the Invention
The present invention provides an apparatus and method for extracting a particular object designated by a user from a digital broadcast image or a normal motion image, recognizing the extracted object, and providing additional information regarding the object while the object is displayed on a screen. According to an aspect of the present invention, there is provided an apparatus for providing additional information regarding a particular object in a digital broadcast image. The apparatus includes a motion image input unit which receives a motion image signal that is a stream of sequential unit images; a user command input unit which receives a user command; a scene transition detection unit which analyzes a motion image signal received through the motion image input unit and detects scene transition information that is information on a unit image having scene transition; a target object setting unit which receives scene transition information from the scene transition detection unit, a motion image signal from the motion image input unit, and object designation information on an object designated by a user from the user command input unit, detects a unit image corresponding to the object designation information among a unit image corresponding to the scene transition information and unit images succeeding the unit image corresponding to the scene transition information, sets a target area of the object in the detected unit image, and detects an initial position of the object; an object processing unit which receives object target area setting information resulting from the setting of the target area from the target object setting unit, scene transition information from the scene transition detection unit, and a motion image signal from the motion image input unit, sequentially extracts an object from a unit image corresponding to the object target area setting information and unit images succeeding the unit image corresponding to the object target area setting information, and tracks a moving position of the object; an additional information insertion unit which receives object tracking information resulting from tracking the object from the object processing unit, detects a range of unit images, over which the tracking of the object is performed, based on the object tracking information, and inserts additional information regarding the object in the detected range; and an output unit which converts object additional information resulting from the inserting of the additional information to be suitable for a system, to which the object additional information will be provided, and outputs the converted object additional information to the system.
According to another aspect of the present invention, there is provided a method for providing additional information regarding a particular object in a digital broadcast image. The method includes (a) receiving a motion image signal that is a stream of sequential unit images, analyzing the motion image signal, and detecting scene transition information regarding a unit image having scene transition; (b) receiving object designation information regarding an object designated by a user, detecting a unit image corresponding to the object designation information among a unit image corresponding to the scene transition information detected in step (a) and unit images succeeding the unit image corresponding to the scene transition information and preceding a unit image corresponding to next scene transition information, and setting an object target area in the detected unit image; (c) sequentially extracting the object from a unit image corresponding to the object target area set in step (b) and unit images succeeding the unit image corresponding to the object target area and preceding the unit image corresponding to the next scene transition information; (d) verifying whether the object extracted from each of the unit images in step (c) exists in each unit image; (e) tracking a moving position of the object over a stream of unit images according to a verification result and detecting object tracking information; and (f) detecting a range of unit images, over which the object is tracked, based on the object tracking information, inserting additional information regarding the object in the detected range, converting the additional information regarding the object to be suitable for a system, to which the additional information will be provided, and outputting the converted additional information to the system.
Brief Description of the Drawings
FIG. 1 is a schematic block diagram of an apparatus for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention.
FIG. 2 is a schematic block diagram of an object processing unit according to an embodiment of the present invention.
FIG. 3 is a flowchart of a method for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention.
FIG. 4 is a flowchart of an operation of detecting scene transition according to an embodiment of the present invention.
FIG. 5 is a flowchart of an operation of tracking an object according to an embodiment of the present invention.
Best Mode for Carrying Out the Invention
Hereinafter, embodiments of an apparatus and method for providing additional information regarding an object in a digital broadcast image according to the present invention will be described in detail with reference to the attached drawings.
FIG. 1 is a schematic block diagram of an apparatus for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention. Referring to FIG. 1, the apparatus according to the embodiment of the present invention includes a motion image input unit 100, a scene transition detection unit 110, a target object setting unit 120, an object processing unit 130, an additional information insertion unit 140, an output unit 150, a first buffer 160, a second buffer 170, and a user command input unit 180.
The motion image input unit 100 receives a motion image signal, i.e., a stream of sequential unit images, for example, frame_1 through frame_n. The user command input unit 180 receives a user's command signals (for example, an object designation signal and an object tracking stop request signal). The scene transition detection unit 110 sequentially receives the unit images, for example, frame_1 through frame_n, constituting the motion image signal from the motion image input unit 100 and stores them in the first buffer 160. The scene transition detection unit 110 also compares a unit image (e.g., frame_t) currently stored in the first buffer 160 with a unit image (e.g., frame_(t-3)), which corresponds to scene transition information and has already been stored in the second buffer 170, detects a unit image having scene transition according to a comparison result, and stores scene transition information regarding the detected unit image in the second buffer 170.
When there is no scene transition information stored in the second buffer 170, the scene transition detection unit 110 determines the unit image (e.g., frame_t) currently stored in the first buffer 160 as a unit image having scene transition. Thereafter, the scene transition detection unit 110 stores the scene transition information regarding the unit image (e.g., frame_t) in the second buffer 170 and simultaneously stores a next unit image (e.g., frame_(t+1)) in the first buffer 160. Next, the scene transition detection unit 110 compares the unit image (e.g., frame_(t+1)) currently stored in the first buffer 160 with the unit image (e.g., frame_t) corresponding to scene transition information stored in the second buffer 170.
Specifically, when a histogram difference calculated by comparing the unit image (e.g., frame_t) stored in the first buffer 160 with the unit image (e.g., frame_(t-3)) corresponding to scene transition information stored in the second buffer 170 is greater than a predetermined threshold value, the scene transition detection unit 110 detects the unit image (e.g., frame_t) stored in the first buffer 160 as a unit image having scene transition and stores scene transition information regarding the detected unit image (e.g., frame_t) in the second buffer 170. In addition, the scene transition detection unit 110 determines whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame (e.g., frame_n) of the motion image signal. When it is determined that the unit image stored in the first buffer 160 is the last frame of the motion image signal, the scene transition detection unit 110 terminates scene transition detection with respect to the motion image signal. However, when it is determined that the unit image stored in the first buffer 160 is not the last frame of the motion image signal, the scene transition detection unit 110 stores a next unit image (e.g., frame_(t+1)) in the first buffer 160 and compares the unit image (e.g., frame_(t+1)) with a unit image (e.g., frame_t), which is stored in the second buffer 170 and corresponds to scene transition information.
Meanwhile, when the histogram difference calculated by comparing the unit image (e.g., frame_t) stored in the first buffer 160 with the unit image (e.g., frame_(t-3)) corresponding to scene transition information stored in the second buffer 170 does not exceed the predetermined threshold value, the scene transition detection unit 110 determines that there is no scene transition.
Next, the scene transition detection unit 110 determines whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame (e.g., frame_n) of the motion image signal. When it is determined that the unit image stored in the first buffer 160 is the last frame of the motion image signal, the scene transition detection unit 110 terminates scene transition detection with respect to the motion image signal. However, when it is determined that the unit image stored in the first buffer 160 is not the last frame of the motion image signal, the scene transition detection unit 110 stores a next unit image (e.g., frame_(t+1)) in the first buffer 160 and compares the unit image (e.g., frame_(t+1)) with a unit image (e.g., frame_(t-3)), which is stored in the second buffer 170 and corresponds to scene transition information.
Various apparatuses and methods already known can be used to detect a unit image having scene transition. A representative example of scene transition information detected by the scene transition detection unit 110 is the frame number of a unit image having scene transition.
The target object setting unit 120 receives scene transition information from the scene transition detection unit 110, a motion image signal from the motion image input unit 100, and object designation information regarding an object designated by a user from the user command input unit 180. Next, the target object setting unit 120 detects a unit image (e.g., frame_(t+1)) corresponding to the object designation information among a unit image (e.g., frame_t) corresponding to the scene transition information and unit images (e.g., frame_(t+1) through frame_(t+20)) which are input following the unit image (e.g., frame_t) corresponding to the current scene transition information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information. Next, the target object setting unit 120 sets a target area of an object (hereinafter, referred to as an object target area) in the detected unit image (e.g., frame_(t+1)) and detects an initial position of the object. Thereafter, the target object setting unit 120 transmits object target area setting information, i.e., information on the unit image (e.g., frame_(t+1)) in which the object target area has been set, to the object processing unit 130. The object target area setting information may include the object target area and a frame number of the unit image where the object target area has been set.
The object processing unit 130 receives scene transition information from the scene transition detection unit 110, object target area setting information from the target object setting unit 120, and a motion image signal from the motion image input unit 100. Next, the object processing unit 130 sequentially extracts an object from each of a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and unit images (e.g., frame_(t+2) through frame_(t+20)) which are input following the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information. Next, the object processing unit 130 tracks a motion of the extracted object over a stream of the sequential unit images (e.g., frame_(t+1) through frame_(t+20)) and transmits tracking information of the object (hereinafter, referred to as object tracking information) to the additional information insertion unit 140. The object tracking information may include frame numbers of unit images, over which the object is extracted and tracked, and basic information regarding the object (such as a name of the object).
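By way of illustration, the object tracking information can be modeled as a small record. The following is a minimal Python sketch; the field and method names are assumptions, since the disclosure specifies only the contents (frame numbers and basic object information such as a name).

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ObjectTrackingInfo:
    """Tracking record emitted by the object processing unit 130.

    The disclosure specifies only the contents: the frame numbers of the
    unit images over which the object is extracted and tracked, and basic
    information regarding the object (such as a name); the layout below
    is an assumption.
    """
    object_name: str                        # basic information on the object
    frame_numbers: List[int] = field(default_factory=list)

    def add_frame(self, frame_number: int) -> None:
        # Record one more unit image over which the object was tracked.
        self.frame_numbers.append(frame_number)

    def frame_range(self) -> range:
        # Range of unit images over which the object has been tracked,
        # e.g., frame_(t+1) through frame_(t+20).
        return range(min(self.frame_numbers), max(self.frame_numbers) + 1)
```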
Hereinafter, it is assumed that the object processing unit 130 performs extraction and tracking of an object with respect to unit images (e.g., frame_(t+1) through frame_(t+20)) from a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information received from the target object setting unit 120 to a unit image (e.g., frame_(t+20)) preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information received from the scene transition detection unit 110.
The additional information insertion unit 140 receives object tracking information from the object processing unit 130, detects a range of unit images (e.g., frame_(t+1) through frame_(t+20)), over which an object has been tracked, based on the object tracking information, and inserts predetermined additional information regarding the object into the range of the unit images. Various apparatuses and methods already known can be used to insert the additional information regarding the object into the range corresponding to the unit images with respect to which the object has been extracted and tracked. The output unit 150 receives object additional information as an insertion result from the additional information insertion unit 140, converts the object additional information to be suitable for a system (such as a digital TV, a mobile apparatus, or video on demand (VOD)) to which the object additional information will be provided, and outputs the converted object additional information to the system.
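As an illustrative sketch of the insertion step (reusing the ObjectTrackingInfo record sketched above), the function below attaches an additional-information record to every unit image in the tracked range. Since the disclosure defers the actual embedding to known methods, a per-frame metadata dictionary keyed by frame number is assumed here.

```python
from typing import Any, Dict


def insert_additional_information(
    frame_metadata: Dict[int, Dict[str, Any]],
    tracking_info: ObjectTrackingInfo,
    additional_info: Dict[str, Any],
) -> None:
    """Insert additional information regarding the object into the range of
    unit images over which the object has been tracked (insertion unit 140).

    How the information is physically embedded in the video signal is left
    to known methods by the disclosure, so a metadata side channel keyed by
    frame number is assumed.
    """
    for frame_number in tracking_info.frame_range():
        frame_metadata.setdefault(frame_number, {})["object_info"] = additional_info
```

Any structure can stand in for additional_info here; only the frame range comes from the tracking result.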
FIG. 2 is a schematic block diagram of the object processing unit 130 according to an embodiment of the present invention. Referring to FIG. 2, the object processing unit 130 includes an object extractor 131, an object recognizer 132, an object tracker 133, and an object management database (DB) 134. The object extractor 131 receives scene transition information from the scene transition detection unit 110 shown in FIG. 1, object target area setting information from the target object setting unit 120 shown in FIG. 1, and a motion image signal from the motion image input unit 100 shown in FIG. 1. The object extractor 131 sequentially extracts an object from each of a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and unit images (e.g., frame_(t+2) through frame_(t+20)) which are input following the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information. Next, the object extractor 131 transmits information on a unit image from which an object is extracted, i.e., object extraction information, to the object recognizer 132. The object extraction information includes basic information regarding the extracted object and a frame number of a unit image from which the object is extracted.
If, while sequentially extracting the object from the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and the unit images (e.g., frame_(t+2) through frame_(t+20)) which follow it and precede the unit image (e.g., frame_(t+21)) corresponding to the next scene transition information, the object extractor 131 determines that a current unit image (e.g., frame_(t+19)) is the last frame of the motion image signal, it terminates object extraction immediately after extracting the object from that unit image (e.g., frame_(t+19)). Various apparatuses and methods already known can be used to extract an object from a unit image.
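As one minimal possibility among those known methods, the sketch below simply crops the object target area from a unit image held as a NumPy array. The (x, y, width, height) layout of the target area is an assumption; the disclosure does not fix a representation.

```python
import numpy as np


def extract_object(unit_image: np.ndarray, target_area: tuple) -> np.ndarray:
    """Extract the designated object from a unit image (object extractor 131).

    The target area is the region set by the target object setting unit 120;
    a real extractor would additionally segment the object from the
    background inside this region using any known method.
    """
    x, y, w, h = target_area
    return unit_image[y:y + h, x:x + w].copy()
```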
The object recognizer 132 receives object extraction information from the object extractor 131 and a motion image signal from the motion image input unit 100 and verifies whether an object exists in a unit image (e.g., frame_(t+1)) corresponding to the object extraction information based on the object management DB 134. The object management DB 134 stores basic information (e.g., a name of an object) regarding all objects existing in a motion image.
When an object is verified as existing in the unit image (e.g., frame_(t+1)) corresponding to the object extraction information, the object recognizer 132 transmits object recognition information to the object tracker 133. However, when an object is not verified as existing in the unit image (e.g., frame_(t+1)) corresponding to the object extraction information, the object recognizer 132 requests the target object setting unit 120 to reset an object. Then, the target object setting unit 120 requests the user to newly designate an object and repeats the above-described operations with respect to the newly designated object.
Similarly, the object recognizer 132 sequentially receives object extraction information from the object extractor 131 and verifies whether an object exists in each of the unit images (e.g., frame_(t+2) through frame_(t+20)) corresponding to the object extraction information.
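A minimal sketch of this verification step follows, assuming the object management DB 134 is a dictionary mapping object names to their basic information; the disclosure does not specify a schema.

```python
from typing import Any, Dict, Optional


def verify_object(object_management_db: Dict[str, Any],
                  extraction_info: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """Verify whether the extracted object exists, based on the object
    management DB 134 (object recognizer 132).

    Returns object recognition information when the object is verified;
    returns None otherwise, in which case the caller requests the target
    object setting unit 120 to reset the object.
    """
    name = extraction_info.get("object_name")
    if name in object_management_db:
        return {
            "object_name": name,
            "frame_number": extraction_info["frame_number"],
            "basic_info": object_management_db[name],
        }
    return None  # not verified: the user must newly designate an object
```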
The object tracker 133 sequentially receives object recognition information regarding each of the unit images (e.g., frame_(t+1) through frame_(t+20)) from the object recognizer 132 and a motion image signal from the motion image input unit 100, tracks a moving position of an object over a stream of the unit images (e.g., frame_(t+1) through frame_(t+20)) corresponding to the sequentially received object recognition information, and outputs object tracking information according to the motion of the object.
While tracking the object, the object tracker 133 compares a size of the object in a previous unit image (e.g., frame_(t+10)) with a size of the object in a current unit image (e.g., frame_(t+11)). When a difference between the object size in the previous unit image and the object size in the current unit image is greater than a predetermined reference value, the object tracker 133 determines that the size of the object has changed and performs object size compensation with respect to the current unit image (e.g., frame_(t+11)) before performing object tracking over a next unit image (e.g., frame_(t+12)).
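The size comparison and compensation can be sketched as follows. Measuring size as the area of the object's bounding region, rescaling a tracking template by the square root of the area ratio, and the nearest-neighbour resize are all assumptions; the disclosure only calls for comparing the size difference against a predetermined reference value.

```python
import numpy as np


def compensate_object_size(template: np.ndarray,
                           previous_area: float,
                           current_area: float,
                           reference_value: float) -> np.ndarray:
    """Perform object size compensation before tracking the next unit image
    (object tracker 133) when the size difference exceeds the reference value."""
    if abs(current_area - previous_area) <= reference_value:
        return template  # size considered unchanged; no compensation
    scale = float(np.sqrt(current_area / previous_area))
    h, w = template.shape[:2]
    new_h = max(1, int(round(h * scale)))
    new_w = max(1, int(round(w * scale)))
    # Nearest-neighbour resize in pure NumPy to keep the sketch dependency-free.
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return template[rows][:, cols]
```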
When object tracking stop request information, generated by a user requesting stop of object tracking, is received from the user command input unit 180 during object tracking, the object tracker 133 outputs object tracking information based on a result of tracking a moving position of the object over unit images (e.g., frame_(t+1) through frame_(t+18)) from the unit image (e.g., frame_(t+1)) corresponding to the object recognition information to a unit image (e.g., frame_(t+18)) corresponding to the object tracking stop request information.
When the object tracker 133 cannot track the motion of the object in a current unit image (e.g., frame_(t+18)), it determines that the object has disappeared and outputs object tracking information based on a result of tracking a moving position of the object over unit images (e.g., frame_(t+1) through frame_(t+17)) from the unit image (e.g., frame_(t+1)) corresponding to the object recognition information to a unit image (e.g., frame_(t+17)) preceding the current unit image (e.g., frame_(t+18)).
When it is determined that a current frame (e.g., frame_(t+19)) is the last frame of the motion image signal during object tracking, the object tracker 133 outputs object tracking information based on a result of tracking a moving position of the object over unit images (e.g., frame_(t+1) through frame_(t+19)) from the unit image (e.g., frame_(t+1)) corresponding to the object recognition information to the current unit image (e.g., frame_(t+19)).
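The three end conditions handled by the object tracker 133 (a user stop request, disappearance of the object, and the last frame of the motion image signal) can be condensed into a single loop. The callback names below are assumptions standing in for the recognizer output, the tracking algorithm, and the user command input unit 180.

```python
def track_object(recognized_frames, locate_object, is_stop_requested,
                 last_frame_number, tracking_info):
    """Track the object's moving position over sequential unit images.

    - recognized_frames: iterable of (frame_number, unit_image) pairs that
      passed the object recognizer 132
    - locate_object(unit_image): position of the object, or None when the
      motion can no longer be tracked (the object has disappeared)
    - is_stop_requested(frame_number): True once object tracking stop
      request information arrives from the user command input unit 180
    Recording the positions themselves is elided for brevity.
    """
    for frame_number, unit_image in recognized_frames:
        if locate_object(unit_image) is None:
            break  # object disappeared: report only up to the preceding image
        tracking_info.add_frame(frame_number)
        if is_stop_requested(frame_number):
            break  # user stop request: report up to the current image
        if frame_number == last_frame_number:
            break  # last frame of the motion image signal
    return tracking_info
```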
FIG. 3 is a flowchart of a method for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention. Referring to FIG. 3, a motion image signal that is a stream of sequential unit images is received in step S100. Information on a unit image (e.g., frame_t) having scene transition, i.e., scene transition information, is detected by analyzing the motion image signal in step S110.
When object designation information is input by a user in step S120, among a unit image (e.g., frame_t) corresponding to the scene transition information detected in step S110 and unit images (e.g., frame_(t+1) through frame_(t+20)) which are input following the unit image (e.g., frame_t) corresponding to the current scene transition information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information, a unit image (e.g., frame_(t+1)) corresponding to the object designation information is detected, an object target area is set in the detected unit image (e.g., frame_(t+1)), and an initial position of an object is detected in step S130.
Next, based on the scene transition information detected in step S110 and information on a unit image (e.g., frame_(t+1)) in which the object target area has been set in step S130, i.e., object target area setting information, the object is sequentially extracted from each of a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and unit images (e.g., frame_(t+2) through frame_(t+20)) which are input following the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and preceding the unit image (e.g., frame_(t+21)) corresponding to the next scene transition information in step S140.
It is verified whether the object extracted from the unit images (e.g., frame_(t+1) through frame_(t+20)) exists in each of the unit images in step S150. When the object is recognized in all of the unit images (e.g., frame_(t+1) through frame_(t+20)) as a result of the verification in step S150, a moving position of the object is tracked over the stream of the unit images in step S160. When the object is not recognized in step S150, the method goes back to step S120.
Based on object tracking information obtained as a result of tracking the moving position of the object in step S160, a range of the unit images (e.g., frame_(t+1) through frame_(t+20)), over which the object has been tracked, is detected, and additional information regarding the object is inserted into the detected range in step S170.
A result of inserting the additional information regarding the object, i.e., object additional information, is converted to be suitable for a system to which the object additional information will be provided, and the converted object additional information is output to the system in step S180.
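The overall method of FIG. 3 can be summarized as a small driver loop. Everything about `units` and its method names is an assumption; FIG. 3 fixes only the ordering of steps S100 through S180 and the return to step S120 when recognition fails.

```python
def provide_additional_information(signal, user, units):
    """Driver mirroring steps S100 through S180 of FIG. 3.

    `signal` is the motion image signal received in step S100; `units` is
    assumed to bundle the apparatus blocks of FIG. 1 under hypothetical
    method names.
    """
    transitions = units.scene_detector.detect(signal)                # S110
    while True:
        designation = user.designate_object()                        # S120
        target = units.target_setter.set_area(signal, transitions,
                                              designation)           # S130
        extracted = units.processor.extract(signal, transitions,
                                            target)                  # S140
        if units.processor.recognize(extracted):                     # S150
            break
        # Object not recognized: return to step S120 for a new designation.
    tracking_info = units.processor.track(extracted)                 # S160
    enriched = units.inserter.insert(signal, tracking_info)          # S170
    return units.output.convert(enriched)                            # S180
```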
FIG. 4 is a flowchart of an operation of detecting scene transition in step S110, according to an embodiment of the present invention. Referring to FIG. 4, a unit image is received in step S111 and then stored in the first buffer 160 shown in FIG. 1 in step S112. When there is scene transition information that is information on a unit image (e.g., frame_(t-3)) having scene transition in the second buffer 170 shown in FIG. 1 in step S113, the unit image (e.g., frame_t) stored in the first buffer 160 is compared with a unit image (e.g., frame_(t-3)) corresponding to the scene transition information stored in the second buffer 170 in step S115.
However, when there is no scene transition information in the second buffer 170 in step S113, the unit image (e.g., frame_t) stored in the first buffer 160 is determined as having scene transition, and scene transition information regarding the unit image (e.g., frame_t) stored in the first buffer 160 is stored in the second buffer 170 in step S114. The operation then goes to step S111, in which a unit image (e.g., frame_(t+1)) succeeding the unit image (e.g., frame_t) stored in the first buffer 160 is received, and steps S112 and S113 are repeated.
Thereafter, the unit image (e.g., frame_t) stored in the first buffer 160 is compared with a unit image (e.g., frame_(t-3)) corresponding to the scene transition information stored in the second buffer 170 in step S115. As a result of comparing the two unit images in step S115, when it is determined that a histogram difference between the two unit images is greater than a predetermined threshold value in step S116, the unit image (e.g., frame_t) stored in the first buffer 160 is determined as having scene transition, and scene transition information regarding the unit image (e.g., frame_t) stored in the first buffer 160 is stored in the second buffer 170 so that the scene transition information stored in the second buffer 170 is updated in step S117.
Next, it is determined whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame of the motion image signal in step S118. When it is determined that the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame of the motion image signal, the operation ends. However, when it is determined that the unit image (e.g., frame_t) stored in the first buffer 160 is not the last frame of the motion image signal, the operation goes to step S111 in which a unit image (e.g., frame_(t+1)) succeeding the unit image (e.g., frame_t) stored in the first buffer 160 is received, and then steps S112 through S118 are performed.
However, when the histogram difference between the unit image (e.g., frame_t) stored in the first buffer 160 and the unit image (e.g., frame_(t-3)) corresponding to the scene transition information stored in the second buffer 170 does not exceed the predetermined threshold value in step S116, the unit image (e.g., frame_t) is determined as not having scene transition, and it is determined whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame of the motion image signal in step S118. Detecting scene transition from a motion image signal, including the steps shown in FIG. 4, is a technique already known in the field of the present invention, and thus various known techniques can be selectively used.
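Because various known techniques can be used here, the Python sketch below is only one minimal reading of the FIG. 4 loop: it assumes grayscale frames as NumPy arrays, a global intensity histogram, and an arbitrary threshold value, none of which are fixed by the description.

    import numpy as np

    THRESHOLD = 0.25  # assumed histogram-difference threshold

    def histogram(frame, bins=64):
        # Normalized intensity histogram of one unit image.
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        return hist / hist.sum()

    def detect_scene_transitions(frames):
        # Yields indices of unit images determined as having scene
        # transition. `second_buffer` plays the role of the second buffer
        # 170: it always holds the histogram of the most recent
        # scene-transition frame.
        second_buffer = None
        for t, frame in enumerate(frames):  # step S111: receive a unit image
            hist = histogram(frame)         # step S112: store (first buffer)
            if second_buffer is None:       # steps S113-S114: first frame
                second_buffer = hist        # starts a new scene
                yield t
            elif np.abs(hist - second_buffer).sum() > THRESHOLD:  # steps S115-S116
                second_buffer = hist        # step S117: update second buffer
                yield t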
FIG. 5 is a flowchart of an operation of tracking the object in step S160, according to an embodiment of the present invention. Referring to FIG. 5, after it is verified in step S150 whether the object extracted from the unit images (e.g., frame_(t+1) through frame_(t+20)) exists in each of the unit images (e.g., frame_(t+1) through frame_(t+20)), a moving position of the object is tracked over the stream of the unit images (e.g., frame_(t+1) through frame_(t+20)) in step S161. When object tracking end information is received in step S162, the moving position of the object is tracked up to a specified unit image according to the object tracking end information, and a result of tracking the object is output as object tracking information.
In other words, when the object tracking end information indicates object tracking stop request information generated by a user, the moving position of the object is tracked up to a current unit image (e.g., frame_(t+18)) corresponding to the object tracking stop request information in step S163, and a result of tracking the object is output as object tracking information in step S164. When the object tracking end information indicates that the object disappears from a current unit image (e.g., frame_(t+18)), the moving position of the object is tracked up to a unit image (e.g., frame_(t+17)) preceding the current unit image (e.g., frame_(t+18)) from which the object disappears in step S165, and object tracking information is output in step S166. When the object tracking end information is information on the last frame of the motion image signal, it is determined whether a current unit image (e.g., frame_(t+19)), on which object tracking is to be performed next, is the last frame of the motion image signal in step S167. When it is determined that the current unit image (e.g., frame_(t+19)) is the last frame of the motion image signal, the moving position of the object is tracked up to the current unit image (e.g., frame_(t+19)) corresponding to the last frame in step S168, and object tracking information is output in step S169. However, when the current unit image (e.g., frame_(t+19)) is not the last frame of the motion image signal, the operation goes to step S140 shown in FIG. 3.
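The three end conditions of FIG. 5 can be expressed compactly as below. The per-frame locator `locate` and the callback `stop_requested` are hypothetical stand-ins for the object tracker and the user command input; they are not named in the description.

    def track_object(frames, start_index, locate, stop_requested=lambda t: False):
        # Tracks the moving position of the object until one of the three
        # end conditions of FIG. 5 occurs; returns the positions recorded
        # so far (the object tracking information) and the end reason.
        positions = {}
        for t in range(start_index, len(frames)):
            position = locate(frames[t])
            if position is None:                 # object disappeared: keep only
                return positions, "disappeared"  # preceding frames (steps S165-S166)
            positions[t] = position              # tracked up to unit image t
            if stop_requested(t):                # user stop request (steps S163-S164)
                return positions, "user_stop"
        return positions, "last_frame"           # last frame reached (steps S167-S169)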
While this invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes may be made therein without departing from the scope of the invention. Therefore, the above-described embodiments should be considered in a descriptive sense only and not in a restrictive sense. The scope of the invention is defined not by the above description but by the appended claims, and all differences made within the scope defined by the claims will be construed as being included in the present invention.
Industrial Applicability
As described above, in an apparatus and method for providing additional information regarding a particular object in a digital broadcast image according to the present invention, a particular object in a digital broadcast motion image is extracted, recognized, and tracked so that additional information regarding the particular object can be provided.
The present invention can be applied to normal motion images as well as digital broadcast images. In addition, since the present invention enables providing additional information regarding a particular object among objects appearing in a motion image, the present invention can be widely used in service systems providing detailed information regarding goods online, T-commerce systems, etc.
Moreover, the present invention has an incidental effect of indirectly advertising an object regarding which additional information is provided.

Claims

What is claimed is:
1. An apparatus for providing additional information regarding a particular object in a digital broadcast image, the apparatus comprising: a motion image input unit which receives a motion image signal that is a stream of sequential unit images; a user command input unit which receives a user command; a scene transition detection unit which analyzes a motion image signal received through the motion image input unit and detects scene transition information that is information on a unit image having scene transition; a target object setting unit which receives scene transition information from the scene transition detection unit, a motion image signal from the motion image input unit, and object designation information on an object designated by a user from the user command input unit, detects a unit image corresponding to the object designation information among a unit image corresponding to the scene transition information and unit images succeeding the unit image corresponding to the scene transition information, sets a target area of the object in the detected unit image, and detects an initial position of the object; an object processing unit which receives object target area setting information resulting from the setting of the target area from the target object setting unit, scene transition information from the scene transition detection unit, and a motion image signal from the motion image input unit, sequentially extracts an object from a unit image corresponding to the object target area setting information and unit images succeeding the unit image corresponding to the object target area setting information, and tracks a moving position of the object; an additional information insertion unit which receives object tracking information resulting from tracking the object from the object processing unit, detects a range of unit images, over which the tracking of the object is performed, based on the object tracking information, and inserts additional information regarding the object in the detected range; and an output unit which converts object additional information resulting from the inserting of the additional information to be suitable for a system, to which the object additional information will be provided, and outputs the converted object additional information to the system.
2. The apparatus of claim 1, further comprising: a first buffer which sequentially stores unit images constituting a motion image signal input through the motion image input unit; and a second buffer which stores scene transition information, wherein the scene transition detection unit receives the motion image signal from the motion image input unit, sequentially stores the unit images constituting the motion image signal in the first buffer, compares a unit image stored in the first buffer with a unit image corresponding to the scene transition information stored in the second buffer, detects a unit image having scene transition according to a comparison result, and updates the scene transition information stored in the second buffer with scene transition information regarding the detected unit image.
3. The apparatus of claim 1, wherein when object designation information is received from the user command input unit, the target object setting unit detects a unit image corresponding to the object designation information among a unit image corresponding to scene transition information and unit images succeeding the unit image corresponding to the scene transition information and preceding a unit image corresponding to next scene transition information.
4. The apparatus of claim 1, wherein the object processing unit comprises: an object management database which stores basic information regarding an object so that existence or non-existence of the object in unit images constituting a motion image can be verified; an object extractor which sequentially extracts an object from a unit image corresponding to object target area setting information and unit images succeeding the unit image corresponding to the object target area setting information and preceding a unit image corresponding to next scene transition information; an object recognizer which receives object extraction information regarding an object extracted from each unit image from the object extractor and verifies whether the object exists in each unit image based on the object management database; and an object tracker which receives object recognition information resulting from the verification from the object recognizer and tracks a moving position of an object over a stream of unit images to detect object tracking information.
5. The apparatus of claim 4, wherein when the existence of an object in a unit image is not verified, the object recognizer requests re-designation of an object so that a target area of an object can be reset.
6. The apparatus of claim 4, wherein when object tracking stop request information generated by the user is received while the object tracker is tracking a moving position of an object over a unit image corresponding to object target area setting information and unit images succeeding the unit image corresponding to the object target area setting information and preceding a unit image corresponding to next scene transition information, the object tracker detects object tracking information based on a result of tracking the moving position of the object over unit images from the unit image corresponding to the object target area setting information to a unit image corresponding to the object tracking stop request information.
7. The apparatus of claim 4, wherein when an object that is a target of tracking disappears from a current unit image while the object tracker is tracking a moving position of an object over a unit image corresponding to object target area setting information and unit images succeeding the unit image corresponding to the object target area setting information and preceding a unit image corresponding to next scene transition information, the object tracker detects object tracking information based on a result of tracking the moving position of the object over unit images from the unit image corresponding to the object target area setting information to a unit image preceding the current unit image from which the object disappears.
8. The apparatus of claim 4, wherein when a current unit image is determined as being a last frame of a motion image signal while the object tracker is tracking a moving position of an object over a unit image corresponding to object target area setting information and unit images succeeding the unit image corresponding to the object target area setting information and preceding a unit image corresponding to next scene transition information, the object tracker detects object tracking information based on a result of tracking the moving position of the object over unit images from the unit image corresponding to the object target area setting information to a unit image corresponding to the last frame of the motion image signal.
9. A method for providing additional information regarding a particular object in a digital broadcast image, the method comprising: (a) receiving a motion image signal that is a stream of sequential unit images, analyzing the motion image signal, and detecting scene transition information regarding a unit image having scene transition;
(b) receiving object designation information regarding an object designated by a user, detecting a unit image corresponding to the object designation information among a unit image corresponding to the scene transition information detected in step (a) and unit images succeeding the unit image corresponding to the scene transition information and preceding a unit image corresponding to next scene transition information, and setting an object target area in the detected unit image;
(c) sequentially extracting the object from a unit image corresponding to the object target area set in step (b) and unit images succeeding the unit image corresponding to the object target area and preceding the unit image corresponding to the next scene transition information;
(d) verifying whether the object extracted from each of the unit images in step (c) exists in each unit image;
(e) tracking a moving position of the object over a stream of unit images according to a verification result and detecting object tracking information; and
(f) detecting a range of unit images, over which the object is tracked, based on the object tracking information, inserting additional information regarding the object in the detected range, converting the additional information regarding the object to be suitable for a system, to which the additional information will be provided, and outputting the converted additional information to the system.
10. The method of claim 9, wherein when object tracking stop request information generated by the user is received while sequentially tracking the moving position of the object over the stream of unit images, step (e) comprises outputting object tracking information regarding unit images, over which the object has been tracked, based on a result of tracking the object up to a current unit image corresponding to the object tracking stop request information.
11. The method of claim 9, wherein when the object that is a target of the tracking disappears from a current unit image while sequentially tracking the moving position of the object over the stream of unit images, step (e) comprises outputting object tracking information regarding unit images, over which the object has been tracked, based on a result of tracking the object up to a unit image that has the object and precedes the current unit image.
12. The method of claim 9, wherein when a current unit image is determined as being a last frame of the motion image signal while sequentially tracking the moving position of the object over the stream of unit images, step (e) comprises outputting object tracking information regarding unit images, over which the object has been tracked, based on a result of tracking the object up to the current unit image.
PCT/KR2002/001895 2002-10-10 2002-10-10 Method and apparatus for separately providing additional information on each object in digital broadcasting image WO2004034708A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2002348647A AU2002348647A1 (en) 2002-10-10 2002-10-10 Method and apparatus for separately providing additional information on each object in digital broadcasting image
PCT/KR2002/001895 WO2004034708A1 (en) 2002-10-10 2002-10-10 Method and apparatus for separately providing additional information on each object in digital broadcasting image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2002/001895 WO2004034708A1 (en) 2002-10-10 2002-10-10 Method and apparatus for separately providing additional information on each object in digital broadcasting image

Publications (1)

Publication Number Publication Date
WO2004034708A1 (en)

Family

ID=32089644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2002/001895 WO2004034708A1 (en) 2002-10-10 2002-10-10 Method and apparatus for separately providing additional information on each object in digital broadcasting image

Country Status (2)

Country Link
AU (1) AU2002348647A1 (en)
WO (1) WO2004034708A1 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5636036A (en) * 1987-02-27 1997-06-03 Ashbey; James A. Interactive video system having frame recall dependent upon user input and current displayed image
US6169573B1 (en) * 1997-07-03 2001-01-02 Hotv, Inc. Hypervideo system and method with object tracking in a compressed digital video environment
KR20000057859A (en) * 1999-02-01 2000-09-25 김영환 Motion activity description method and apparatus for video

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130019267A1 (en) * 2010-06-28 2013-01-17 At&T Intellectual Property I, L.P. Systems and Methods for Producing Processed Media Content
US9906830B2 (en) * 2010-06-28 2018-02-27 At&T Intellectual Property I, L.P. Systems and methods for producing processed media content
US10827215B2 (en) 2010-06-28 2020-11-03 At&T Intellectual Property I, L.P. Systems and methods for producing processed media content

Also Published As

Publication number Publication date
AU2002348647A1 (en) 2004-05-04

Similar Documents

Publication Publication Date Title
CN112990191B (en) Shot boundary detection and key frame extraction method based on subtitle video
US8516119B2 (en) Systems and methods for determining attributes of media items accessed via a personal media broadcaster
US10304458B1 (en) Systems and methods for transcribing videos using speaker identification
US9213896B2 (en) Method for detecting and tracking objects in image sequences of scenes acquired by a stationary camera
EP3010235A1 (en) System and method for detecting advertisements on the basis of fingerprints
JP2004128550A (en) Scene classification apparatus for moving picture data
KR100717402B1 (en) Apparatus and method for determining genre of multimedia data
JP2005513663A (en) Family histogram based techniques for detection of commercial and other video content
US20130083965A1 (en) Apparatus and method for detecting object in image
US20060245625A1 (en) Data block detect by fingerprint
US20100246944A1 (en) Using a video processing and text extraction method to identify video segments of interest
CN113052169A (en) Video subtitle recognition method, device, medium, and electronic device
EP3251053B1 (en) Detecting of graphical objects to identify video demarcations
CN104853244A (en) Method and apparatus for managing audio visual, audio or visual content
US20200311898A1 (en) Method, apparatus and computer program product for storing images of a scene
US20110033115A1 (en) Method of detecting feature images
US20090180670A1 (en) Blocker image identification apparatus and method
US7734096B2 (en) Method and device for discriminating obscene video using time-based feature value
WO2004034708A1 (en) Method and apparatus for separately providing additional information on each object in digital broadcasting image
KR101672123B1 (en) Apparatus and method for generating caption file of edited video
KR101667011B1 (en) Apparatus and Method for detecting scene change of stereo-scopic image
CN102667770B (en) For area of computer aided explain multi-medium data method and apparatus
JP2003143546A (en) Method for processing football video
JP2000276478A (en) Method and device for detecting time-series data and recording medium where program thereof is recorded
JP4349004B2 (en) Television receiver detection apparatus and method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC. EPO FORM 1205A DATED 20-07-05

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP