WO2002017633A2 - Method and system for active modification of video content responsively to processes and data embedded in a video stream - Google Patents


Info

Publication number
WO2002017633A2
Authority
WO
WIPO (PCT)
Prior art keywords
media information
procedure
stream
information stream
video
Prior art date
Application number
PCT/EP2001/009634
Other languages
French (fr)
Other versions
WO2002017633A3 (en)
Inventor
Nevenka Dimitrova
Kavitha V. Devara
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to JP2002522196A priority Critical patent/JP2004507939A/en
Priority to KR1020027005030A priority patent/KR20020041828A/en
Priority to EP01974203A priority patent/EP1314313A2/en
Publication of WO2002017633A2 publication Critical patent/WO2002017633A2/en
Publication of WO2002017633A3 publication Critical patent/WO2002017633A3/en

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
                        • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
                            • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
                            • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
                                • H04N21/2389 Multiplex stream processing, e.g. multiplex stream encrypting
                    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                                • H04N21/4318 Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
                            • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
                                • H04N21/4331 Caching operations, e.g. of an advertisement for later insertion during playback
                            • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
                            • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
                                • H04N21/4385 Multiplex stream processing, e.g. multiplex stream decrypting
                            • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
                                • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
                        • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
                            • H04N21/4508 Management of client data or end-user data
                                • H04N21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
                    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
                        • H04N21/65 Transmission of management data between client and server
                            • H04N21/654 Transmission by server directed to the client
                                • H04N21/6543 Transmission by server directed to the client for forcing some client operations, e.g. recording
                    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
                        • H04N21/81 Monomedia components thereof
                            • H04N21/8166 Monomedia components thereof involving executable data, e.g. software
                                • H04N21/8193 Monomedia components thereof involving executable data, e.g. software dedicated tools, e.g. video decoder software or IPMP tool
                        • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring

Definitions

  • the invention relates to video systems in which video content is actively modified according to conditions at the delivery site, preferences of a user, or other conditions and more particularly to such systems where the procedures controlling modification are embedded in or otherwise synchronized with the video stream.
  • PCT Application WO 98/21891 for INHIBITION OF A TV PROGRAMME DISPLAY ACCORDING TO THE CONTENTS describes blanking out delimited portions of a video stream marked as containing objectionable subject matter.
  • Application GB 2 284 914 filed December 18, 1993 for DISCRETIONARY VIEWING CONTROL describes a system that places restrictions on television programming based on time of day, the identity of the program, the rating of the program, etc. based on conditions defined at the delivery point. Again, the result is to delete or inhibit the video signal when certain conditions are present.
  • US Patent No. 5,778,135 for REAL-TIME EDIT CONTROL FOR VIDEO PROGRAM MATERIAL describes a system in which video is segmented and each segment rated. An application edits out the segments that are rated above a selected level. This is essentially the same technique used in PCT Application WO 96/41438 for ENCODER APPARATUS AND DECODER APPARATUS FOR A TELEVISION SIGNAL.
  • Various techniques may be used to permit real time modification of a video stream. According to the invention, the way this modification is done increases the range of resulting modifications that are possible. It also increases the control the creator of the video stream has over the features provided at the receiving end.
  • These advantages are provided by associating with each video stream or file one or more software procedures that are executed by the display or other generator such as a copying station (e.g., storing station, forwarding station, recording station, broadcasting station, etc.).
  • the generator receives both the raw video stream and code defining one or more procedures to be implemented such as to modify the video stream in some way.
  • the generator is a television.
  • the television receives a video signal with embedded procedure code.
  • the television has an internal controller that separates the software data and the raw video data and executes the software data, which may then modify the video data.
  • the software data may be contained in the vertical blanking interval ("VBI") of an analog video signal or simply contained in a header file attached to the video file.
  • the internal controller may be programmed with an Application Program Interface ("API") that provides a set of functions the procedure may access to create various effects. This could be a Java®-like system or an enhancement to Java®.
  • the software data defines a procedure that is executed and which modifies the video data.
  • the procedure may be keyed to time or segment markers in the video data to allow the procedure to identify portions of the video data to be modified.
  • This API can provide a rich feature set or a lean feature set. It may also be written at a high level or a low level.
  • the API could provide a function to draw an object, such as a flat rectangle or graded ellipse of a specified color, over a specified area of the display during only a certain time interval of the video.
  • Such functions might take arguments from the procedure specifying the coordinates of the object, the size and shape, the color and the start and stop time segments.
  • Another example is the application of a specified filter to a portion of the screen. The filter mask may be supplied as an argument.
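The drawing functions described above can be sketched as follows. This is an illustrative sketch only: the class name, argument layout, and use of seconds as time units are assumptions, since the patent does not specify a concrete API.

```python
from dataclasses import dataclass

# Hypothetical API call record for an overlay drawn over a region of the
# display during a specified time interval of the video. All names here
# are illustrative, not taken from the patent.
@dataclass
class DrawRect:
    x: int          # left edge of the region, in pixels
    y: int          # top edge of the region, in pixels
    w: int          # region width
    h: int          # region height
    color: tuple    # (R, G, B) fill color
    t_start: float  # seconds into the video at which the overlay appears
    t_stop: float   # seconds into the video at which it disappears

    def active_at(self, t: float) -> bool:
        """True if the overlay should be drawn on the frame at time t."""
        return self.t_start <= t < self.t_stop

# A procedure shipped with the video might carry such a call, e.g. a
# black mask over part of the frame between 12.0 s and 18.5 s:
mask = DrawRect(x=100, y=50, w=200, h=120,
                color=(0, 0, 0), t_start=12.0, t_stop=18.5)
```

The filter-mask example in the text would follow the same pattern, with the mask supplied as an additional argument.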
  • the invention allows the video content generator to provide many features and options for distribution and use of the video content.
  • One result is that the features that are available are not limited to some set that was predefined in the delivery device or output device (e.g., television) as in the prior art.
  • both a large set of functions of a more integrated nature or primitive functions can provide the same degree of flexibility. Both may be provided.
  • the invention provides for the association of executable procedures, that modify the video data, with the video itself.
  • the association may be provided by supplying procedures for processing the video substantially synchronously with the presentation of the video to the display processing equipment that ultimately transforms the multiplexed, compressed, or coded signal into a video data stream.
  • Packaging the procedure code in the same or a related file may provide the association.
  • Other embodiments may create the association by supplying the code embedded in an interleave fashion in the video data stream whether analog or digital.
  • Video may be shipped with multiple language tracks one being selected according to a user profile accessed by the procedure.
  • the procedure decrypts the video using profile data and a password entered by the user.
  • the procedure applies a blur filter to portions of a frame during a scene of a movie to mask out frontal nudity.
  • the procedure provides a control console that allows a user to speed up the display of the video according to the user's preference entered on a console generated by the procedure.
  • the procedure provides a low resolution image and accepts data indicating payment authorization at which point it permits full resolution video to be shown.
  • the procedure recognizes portions of the video signal based on pattern recognition, the portions containing material to be censored out, and omits those portions by skipping frames to make the speed of playback very fast.
  • the procedure omits sound track segments, for example that represent expletives, indicated by markers in the video signal.
  • a previously-unknown technique is transmitted with the video, such as a procedure that responds to the user profile in some special way or gives the user certain choices.
  • the procedure provides a text overlay on the video or Flash® animation on top of the video.
  • the procedure retrieves commercials from a web site and displays the commercials at intervals during the video.
  • the procedure further reduces the number and duration of commercials by providing the user a vehicle for paying down the commercials by accepting a payment for watching the video, similar to shareware that displays a banner ad until it is registered.
  • the procedure controls reproduction rights so that the various license privileges that can be exercised by the user are controlled by a profile file on the machine.
  • the common feature of all the above examples is that a program that is associated with the file provides the features enjoyed, rather than requiring them to be present or otherwise available to the display or reproduction device.
  • the invention allows the creator or distributor of the video to control the display or other use of the video with great flexibility.
  • the procedure consists of commands that operate on separable portions of the video stream.
  • the execution environment is stateless so that any finite number of such portions will always be copied with the procedure(s) applicable to it/them.
  • a procedure might be executed to turn on application of the mask and, many frames later, a procedure might be executed to turn off generation of the mask.
  • the portion of the video between the turn-on and turn-off commands must not be divided lest the turn-on command not be activated ahead of the sensitive subject matter.
  • An alternative way of implementing the invention is to insure that every frame of the video contains its own state-generating procedure code. This environment would also be stateless. Then, any number of frames that are copied will contain suitable code for applying the correct attributes to the frame.
  • the command to apply the filter, and the filter's definition would precede each frame.
  • the environment is stateless between frames. This embodiment could be used with a broadcast model.
  • information about the video could be encoded with the procedure data.
  • the title, author, description, etc. could be incorporated in it so that, any copied video sequence could contain global information about the video file from which the segment came.
  • Such data need not be stored for each frame, but could be distributed over multiple frames.
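The per-frame stateless scheme above can be sketched as follows. The frame and command representations are hypothetical stand-ins for whatever container format is actually used; the point is only that every frame carries the complete procedure state it needs, so any contiguous run of copied frames remains self-describing.

```python
def attach_state(frames, filter_spec):
    """Prefix every frame with its own command and filter definition,
    so no state needs to persist between frames (a sketch; the dict
    layout is an assumption)."""
    return [{"cmd": "apply_filter", "filter": filter_spec, "frame": f}
            for f in frames]

def play(tagged_frames):
    """Render frames statelessly: each frame is processed using only
    the command data attached to that frame."""
    out = []
    for tf in tagged_frames:
        if tf["cmd"] == "apply_filter":
            out.append(("filtered", tf["frame"]))
        else:
            out.append(("plain", tf["frame"]))
    return out
```

Because each tagged frame is self-contained, any subset of frames copied out of the stream still plays back with the correct attributes, which is the property the text requires for the broadcast model.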
  • FIG. 1 is an illustration of a user-environment in which the invention may be used.
  • Fig. 2 illustrates an embodiment of the invention in which video data from a source is demultiplexed to extract data defining a procedure and then decoded and the procedure executed to modify the video responsively to a profile.
  • Fig. 3 illustrates an embodiment of the invention in which video data from a source is demultiplexed to extract data defining a procedure, the compressed file modified by the procedure executed responsively to a profile, and the modified compressed file decoded.
  • Fig. 4 illustrates an embodiment of the invention in which video data from a source is demultiplexed and data defining a procedure is obtained from independent source and in which the video file is decoded and the procedure executed to modify the video responsively to a profile.
  • Fig. 5 is a figurative representation of a video file to illustrate features of certain embodiments of the invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • an example of a physical infrastructure capable of supporting the functional features of the invention includes view system 100 with a computer 140 and various types of input and/or storage devices.
  • the latter include a keyboard 112, a remote control 110, a removable medium such as a floppy disk, optical disk, memory card, etc. 120, Philips Pronto®, programmable controller, voice recognition/activated controller, mouse, gesture-recognition controller, etc.
  • Data may be stored locally on a fixed disk drive 135.
  • Output devices may include a monitor or TV 130, speakers 131, and/or other output devices.
  • the computer 140 receives data 160 and/or video 170 from an external source which could be a broadcast transmission, a data store, the Internet, a network, a satellite transmission, a switched circuit transmission, or any other source of data or other signal.
  • the computer 140 executes procedures that may be stored on its data store 135 or embedded in the data 160 and/or video 170 received from an external source or embedded in files transferred to the computer in the form of a data file. The procedures modify the video either in compressed or decompressed form. After modification, the video may be stored on VTR 133 or transmitted by a radio transmitter 137 as a broadcast, or displayed on the TV or monitor 130.
  • the inputs and outputs shown are examples only.
  • the data 160 and video 170 can be transmitted by two different transmitters.
  • the data 160 can be distributed by multiple transmitters, whereas the corresponding video 170 is transmitted by a single transmitter.
  • the video is for example distributed nationally, whereas the data is distributed locally. This makes it possible to provide different procedures with the video in different regions.
  • the range of the transmitter that transmits the video 170 is larger than the range of each of the multiple transmitters that transmit the data 160.
  • the computer 140 receives a video file from some source which could be a cable, microwave, satellite, or other broadcast transmission 180, a computer such as a notebook 185, a network 190 such as the Internet, a data store 195, or any other source of analog and/or digital data. These may also include a smart mobile phone, PDA, etc.
  • the received data is a video stream.
  • the video stream is received by a demultiplexer 205 which separates the video stream into an active video procedure data stream and a raw video stream.
  • the former is applied to an active streams engine 225 and the latter to a decoder 210 (if necessary to decode a compressed video format).
  • the output of the decoder 210 is applied to a process 215 that checks a profile stored on the computer 140.
  • the profile stores data characterizing the audience. If there is a match between the profile and the current video, the procedure is applied in a process 225 responsively to profile data to generate a modified video stream. If the profile is such as not to warrant a modification of the video stream the original decompressed video is output.
  • the output stream is applied to an output device which may be any of a variety of different sinks.
  • the output may be a broadcast transmission 180, a computer 185, a TV or monitor 131, or a data store 195.
  • Output devices could also include a VTR as illustrated in Fig. 1 and the examples shown in Fig. 2 are merely illustrative examples.
  • the demultiplexer may receive an analog or digital signal.
  • An example of an analog signal is an NTSC signal from a television broadcast.
  • a common place to place data is in the VBI, in which case the demultiplexer may extract the data residing in the VBI from the raw video stream and apply it to the active streams engine 225.
  • the active streams engine simply runs the procedure applied to it.
  • the active video procedure may consist of more code than can be packaged in a single VBI, in which case the active streams engine 225 is programmed to acquire the entire procedure, the end of which may be indicated in the normal fashion, such as by an end-of-file marker or other delimiter indicating that the data preceding it represents a procedure to be executed.
  • any suitable protocol may be defined to accumulate a procedure in the memory of the computer before the video segment to which it must be applied is reached.
  • the procedure data may be packaged as a header or interleaved in the data file or any other suitable way. If it is streaming data, the procedure can be sent in a header file or sent in small parcels as the video is buffered so that playback can begin immediately without waiting for an entire procedure or set of procedures to be loaded, the procedure(s) being accumulated over time. The latter accumulation scheme assumes the video to which they will be applied is not loaded before the procedure(s) is/are loaded.
  • the procedure data can be distributed throughout the video file and executed by an interpreter running on the computer 140. (An interpreter is a program that executes instructions immediately upon receipt without precompiling, for example, like the command line of a text-based operating system shell such as MSDOS or a command mode of a database program like dBase III.)
  • the procedure may be executed responsively to profile data and to indicators in the video file.
  • the indicators in the video file or data stream can take various forms. Several different examples are illustrated in Fig. 5 which shows a file or streaming media data 501 with time advancing in the indicated direction.
  • An audio sequence Audi could serve as a marker in which case a sound classifier could be run over the audio track until some feature is detected.
  • an image Img1, Img2, or other signal fraction could be recognized to identify a part of the video stream. Even a subimage SI of a frame image 510 could be classified to trigger an event.
  • a marker such as M1, M2, and M3 could be written to the file.
  • In an analog file, such as NTSC, the marker could be placed in the VBI.
  • the time since the beginning of the data start point could be tracked and used to indicate portions of the video stream, such as time delimiters T1 and T2.
  • a procedure 500 may be embedded in the video stream prior to the appearance of the part of the stream to which it is applied. For example, the procedure 500 could be applied to the sequence demarcated by T1 and T2, but not to the one indicated at M3. (Note that time advances up the page.)
  • An indicator is not necessary if the instructions are executable immediately upon receipt.
  • One form of indicator is simply a place marker.
  • the marker could take the form of a watermark or an icon that is recognized in a portion of the video image or a datum multiplexed into the VBI.
  • the marker can be any suitable symbol and an indication of a temporal position.
  • the marker need not necessarily occupy a position in the stream that coincides with the application of the procedure, but it may.
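Applying a procedure only between two time delimiters, as in the T1/T2 example above, can be sketched as follows. The timestamped-frame representation and the half-open interval convention are assumptions for illustration.

```python
def apply_between(frames, t_on, t_off, transform):
    """Apply `transform` only to frames whose timestamps fall inside
    the segment delimited by markers at times t_on and t_off; frames
    outside the segment pass through unchanged."""
    return [(t, transform(f)) if t_on <= t < t_off else (t, f)
            for t, f in frames]
```

A marker need not be a timestamp: the same dispatch could key on a recognized watermark, icon, or VBI datum, with the comparison above replaced by a match against the recognized symbol.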
  • the decoder 210 may be a process that decompresses, decrypts, unpacks, unbundles or performs any other defined process that is required for access to the video data. The particulars of this process are not important to the practice of the invention.
  • the profile may contain simply an identification of the user, information about the preferences of a user or the user-group (such as a household), or any of a variety of data.
  • the profile could indicate that the user-group is a household with a very young child.
  • the active procedure could query the user before displaying highly violent or sexual subject matter and in the event of no response, mask or delete the potentially disagreeable subject matter.
  • the profile database may include subject matter preferences that the procedure uses to filter a set of selectable attributes. For example, suppose the video file contains many different videos, all aggregated such that a particular one can be selected. The profile could filter these and present only one, or several, to be selected for viewing.
  • the procedure(s) consist(s) of commands that operate on separable portions of the video stream.
  • the execution environment is stateless so that any finite number of such portions will always be copied with the procedure(s) applicable to it/them. No information is persisted between divisible portions of the execution environment unless the procedure attached is capable of handling it, or if the procedure is capable of generating it itself.
  • the demultiplexer continuously generates commands as they are received. The commands are executed instantly with the demultiplexing or keyed to markers or inherent indicators in the video stream.
  • the active procedure is applied to a compressed video stream.
  • Decoding 310 is done only after the active procedure has been applied to the compressed video stream.
  • the video is described as compressed, but it could be encrypted, bundled, or otherwise encoded.
  • profile data may be supplied to the active streams engine to make the procedure responsive to data in the profile.
  • the active procedure is transmitted or otherwise supplied to the active streams engine 425 in a parallel transmission.
  • a parallel transmission could be generated and the video modified according to it.
  • the synchronization could be insured by keying execution to markers or other indicia in the video stream.
  • the key advantage is the short lifetime of the procedure code.
  • the video is always updated according to procedures that are the most recent supplied by the source of the active procedure.
  • the active procedure data can be located at locations that are independent of the portions of the video to which they apply.
  • One requirement is that for a streaming source such as a TV broadcast or an Internet streaming file, the procedure must be loaded before it is needed.
  • the procedure can be broken up, but all of it must be accumulated in memory before it is required.
  • the procedure code can be dumped.
  • the code and the event that triggers the dumping can be encoded within the procedure itself.
  • the code defining procedures may be written at a high level, including elements that define complex predefined procedures, or at a low level, including elements that define small incremental procedures that must be assembled to perform useful actions.
  • the following are illustrative examples of the kinds of commands that can be executed by a suitable API to modify a media data stream.
  • Play block b0-b1: Play a series of video blocks from block b0 to block b1.
  • Draw line x1, y1, x2, y2, W, C: Draw a superimposed line between the indicated coordinates, with the indicated weight and color.
  • Draw rectangle x1, y1, x2, y2, W, C, F: Draw a superimposed rectangle between the indicated coordinates, with the indicated border weight, color, and fill.
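A toy interpreter for commands of this kind might look like the following. The command names and argument order follow the examples above; the tuple encoding, the overlay list, and the block-range semantics are assumptions made for illustration.

```python
def run(commands, overlay):
    """Execute a small command stream of the kind the API above
    describes. Playback commands return the sequence of block indices
    played; drawing commands append overlay records for the renderer."""
    played = []
    for cmd, *args in commands:
        if cmd == "play_block":
            b0, b1 = args
            played.extend(range(b0, b1 + 1))   # play blocks b0..b1
        elif cmd == "draw_line":
            x1, y1, x2, y2, weight, color = args
            overlay.append(("line", (x1, y1), (x2, y2), weight, color))
        elif cmd == "draw_rect":
            x1, y1, x2, y2, weight, color, fill = args
            overlay.append(("rect", (x1, y1), (x2, y2),
                            weight, color, fill))
    return played
```

Because each command is executed immediately upon receipt and no state survives between commands, this interpreter is consistent with the stateless execution environment described earlier.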

Abstract

Video, or other media data, are synchronized with procedure data streams that modify the video stream. When the video is played, reproduced, or rebroadcast, the video is modified by the procedures defined in the procedure data stream. In a stateless embodiment, the procedure stream includes commands that are executed immediately upon receipt by an interpreter. In a particular embodiment, the procedure data stream is incorporated directly in the media data stream and separated out by a demultiplexer. In a still more particular embodiment, the combined media/procedure data stream may be fractured into portions while still carrying code appropriate to the portion removed from the whole.

Description

Method and system for active modification of video content responsively to processes and data embedded in a video stream
BACKGROUND OF THE INVENTION
Field of the Invention
The invention relates to video systems in which video content is actively modified according to conditions at the delivery site, preferences of a user, or other conditions and more particularly to such systems where the procedures controlling modification are embedded in or otherwise synchronized with the video stream.
Background
Certain portions of video content may be considered objectionable and other portions unobjectionable. This has inspired some to propose that markers be incorporated in video data to indicate the portions that might be considered objectionable for some audiences. An application running at the delivery location recognizes markers in the video stream and selectively mutes or deletes passages responsively to them. For example, PCT Application WO 98/21891 published May 22, 1998 for INHIBITION OF A TV PROGRAMME DISPLAY ACCORDING TO THE CONTENTS, describes blanking out delimited portions of a video stream marked as containing objectionable subject matter.
Application GB 2 284 914 filed December 18, 1993 for DISCRETIONARY VIEWING CONTROL describes a system that places restrictions on television programming based on time of day, the identity of the program, the rating of the program, etc. based on conditions defined at the delivery point. Again, the result is to delete or inhibit the video signal when certain conditions are present.
Application WO 83/02208 published June 12, 1983 for METHOD AND APPARATUS FOR EDITING THE OUTPUT OF A TELEVISION SET, describes a system that filters content based on markers in the video and an application running on a television set top box. The markers inserted in the signal are graded according to the content. UK Application GB 2315175 filed July 10, 1997 for RESTRICTING ACCESS TO VIDEO MATERIAL describes a system in which video content is allowed to be transmitted or reproduced based on a characterization of its content. The filtering application is predefined at the nexus between transmission and recording or viewing.
US Patent No. 5,751,335 filed Feb. 23, 1996 for VIEWING RESTRICTING METHOD AND VIEWING RESTRICTING APPARATUS describes a system in which a TV program is muted if it exceeds an allowed rating.
US Patent No. 5,778,135 for REAL-TIME EDIT CONTROL FOR VIDEO PROGRAM MATERIAL describes a system in which video is segmented and each segment rated. An application edits out the segments that are rated above a selected level. This is essentially the same technique used in PCT Application WO 96/41438 for ENCODER APPARATUS AND DECODER APPARATUS FOR A TELEVISION SIGNAL.
The deletion of video content responsively to content indicators in the video stream is known. In prior art systems, parts of the video are simply blocked by inhibiting display of the video at defined intervals. Also, the applications that perform the blocking are predefined and implemented responsively to markers (i.e., "special codes"; note that in the specification and prior art the word "code" may refer to markers or indicators, which should be distinguished by context from "code" as used to refer to a procedure or process) in the video stream. Also, there are computer games and other kinds of software that play back video sequences selectively in accordance with an execution path resulting from a user interaction with the software. For example, there may be several alternative video sequences that can be played during the execution of a game depending on choices made by a user.
There exists a need for a mechanism that allows the creator of video content greater control over the features relating to the selective display of video content. Current technology limits such control to the degree of sophistication provided at an intermediate control or delivery point.
SUMMARY OF THE INVENTION
Various techniques may be used to permit real time modification of a video stream. According to the invention, the way this modification is done increases the range of resulting modifications that are possible. It also increases the control the creator of the video stream has over the features provided at the receiving end. These advantages are provided by associating with each video stream or file one or more software procedures that are executed by the display or other generator such as a copying station (e.g., storing station, forwarding station, recording station, broadcasting station, etc.). The generator receives both the raw video stream and code defining one or more procedures to be implemented such as to modify the video stream in some way. For example, in an embodiment, the generator is a television. According to this embodiment, the television receives a video signal with embedded procedure code. The television has an internal controller that separates the software data and the raw video data and executes the software data, which may then modify the video data. For example, the software data may be contained in the video blanking interval ("VBI") of an analog video signal or simply contained in a header file attached to the video file. In this embodiment the internal controller may be programmed with an Application Program Interface ("API") that provides a set of functions the procedure may access to create various effects. This could be a Java®-like system or an enhancement to Java®. The software data defines a procedure that is executed and which modifies the video data. The procedure may be keyed to time or segment markers in the video data to allow the procedure to identify portions of the video data to be modified.
This API can provide a rich feature set or a lean feature set. It may also be written at a high level or a low level. For example, the API could provide a function to draw an object such as a flat rectangle or graded ellipse of a specified color over a specified area of the display only a certain time interval of the video. Such functions might take arguments from the procedure specifying the coordinates of the object, the size and shape, the color and the start and stop time segments. Another example is the application of a specified filter to a portion of the screen. The filter mask may be supplied as an argument.
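As a hedged sketch of what such high-level API functions might look like, the following models a frame as a 2D list of grayscale pixels; the function names, signatures, and pixel model are invented for illustration.

```python
# Illustrative sketch of high-level API functions such a system might expose.
# Names and signatures are assumptions, not taken from any real API.

def draw_rectangle(frame, x1, y1, x2, y2, color):
    """Superimpose a filled rectangle of the given color over the frame."""
    for y in range(y1, y2):
        for x in range(x1, x2):
            frame[y][x] = color
    return frame

def apply_mask_filter(frame, mask, x1, y1, x2, y2):
    """Apply a per-pixel mask function over the specified region, e.g. a
    blur or blackout supplied as an argument by the procedure."""
    for y in range(y1, y2):
        for x in range(x1, x2):
            frame[y][x] = mask(frame[y][x])
    return frame

frame = [[128] * 8 for _ in range(8)]              # 8x8 mid-gray frame
draw_rectangle(frame, 2, 2, 5, 5, 255)             # 3x3 white box
apply_mask_filter(frame, lambda p: 0, 0, 0, 8, 1)  # blank the top row
```

In an actual system the coordinates, colors, and mask would arrive as arguments from the procedure stream, together with the start and stop times that scope each effect.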
By providing either a large set of functions, or primitive functions that can be accessed and implemented in a large variety of different ways, the invention allows the video content generator to provide many features and options for distribution and use of the video content. One result is that the available features are not limited to some set that was predefined in the delivery device or output device (e.g., television) as in the prior art. Note that either a large set of functions of a more integrated nature or a set of primitive functions can provide the same degree of flexibility. Both may be provided.
At an abstract level, the invention provides for the association of executable procedures, that modify the video data, with the video itself. The association may be provided by the supply of procedures for processing the video substantially synchronously with the presentation of the video on the display, by processing equipment that ultimately transforms the multiplexed, compressed, or coded signal into a video data stream. Packaging the procedure code in the same or a related file may provide the association. Other embodiments may create the association by supplying the code embedded in an interleaved fashion in the video data stream, whether analog or digital.
Note that we are using the terms "procedure" and "code" and other terms that refer to definitions of processes or potential processes broadly, to encompass both declarative definitions and procedural definitions. Thus, we do not intend by such wording to limit the invention to algorithms. The invention encompasses event-driven types of languages, object-oriented languages, etc.
With procedures defined for each video, the range of possible modifications, conditions, rules, criteria, and options is obviously too large to provide a comprehensive list. This is a great advantage. Also, these possibilities do not have to be known at the time the display (recording, broadcast, etc.) device is developed. Thus, video content can be delivered with its own feature upgrades. Enhancements to content are enabled without requiring software changes in the display (or other) device. Although the number and types of modifications are endless, the following are a few examples for purposes of illustration.
Video may be shipped with multiple language tracks, one being selected according to a user profile accessed by the procedure.
The procedure decrypts the video using profile data and a password entered by the user. The procedure applies a blur filter to portions of a frame during a scene of a movie to mask out frontal nudity.
The procedure provides a control console that allows a user to speed up the display of the video according to the user's preference entered on a console generated by the procedure. The procedure provides a low resolution image and accepts data indicating payment authorization at which point it permits full resolution video to be shown.
The procedure recognizes portions of the video signal based on pattern recognition, the portions containing material to be censored out, and omits those portions by skipping frames to make the speed of playback very fast. The procedure omits sound track segments, for example that represent expletives, indicated by markers in the video signal.
A previously-unknown technique is transmitted with the video, such as a procedure that responds to the user profile in some special way or gives the user certain choices. The procedure provides a text overlay on the video or Flash® animation on top of the video.
The procedure retrieves commercials from a web site and displays the commercials at intervals during the video. The procedure further reduces the number and duration of commercials by providing the user a vehicle for paying down the commercials by accepting a payment for watching the video, similar to shareware that displays a banner ad until it is registered.
The procedure controls reproduction rights so that the various license privileges that can be exercised by the user are controlled by a profile file on the machine. The common feature of all the above examples is that a program associated with the file provides the features enjoyed, rather than requiring them to be present or otherwise available to the display or reproduction device. The invention allows the creator or distributor of the video to control the display or other use of the video with great flexibility.
In an embodiment, the procedure consists of commands that operate on separable portions of the video stream. The execution environment is stateless so that any finite number of such portions will always be copied with the procedure(s) applicable to it/them. Thus, no information is persisted ("persisted" is a neologism meaning "made to persist") across divisible portions of the media stream unless the attached procedure is capable, if it does not find that information, of generating it itself. To insure that these separable portions are not themselves divided, the environment used to make copies of portions of the program must respect the indivisibility of the portions. One way to insure that the indivisibility is not violated is to block-encode each indivisible portion so that the block cannot be read without every bit of it. An example of an indivisible portion could be a video segment in which a part of the video image is filtered in a certain way, for example to mask parts of a nude scene. A procedure might be executed to turn on application of the mask, and many frames later a procedure might be executed to turn off generation of the mask. The portion of the video between the turn-on and turn-off commands must not be divided lest the turn-on command not be activated ahead of the sensitive subject matter. An alternative way of implementing the invention is to insure that every frame of the video contains its own state-generating procedure code.
This environment would also be stateless. Then, any number of frames that are copied will contain suitable code for applying the correct attributes to the frame. In the above example, where a filter was applied to a portion of each frame of a sequence, the command to apply the filter, and the filter's definition, would precede each frame. In this embodiment, the environment is stateless between frames. This embodiment could be used with a broadcast model. In addition to procedures, information about the video could be encoded with the procedure data. For example, the title, author, description, etc. could be incorporated in it so that any copied video sequence could contain global information about the video file from which the segment came. Such data need not be stored for each frame, but could be distributed over multiple frames.
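The block-encoding approach described above can be sketched as follows. The length-prefix-plus-checksum format is merely an assumed stand-in for a real block code; the point illustrated is only that an indivisible portion cannot be read unless every byte of it is present.

```python
# Assumed stand-in for a block code: a length prefix and CRC32 make a
# truncated or corrupted block unreadable, preserving indivisibility.
import struct
import zlib

def encode_block(payload: bytes) -> bytes:
    # 4-byte big-endian length, 4-byte CRC32, then the payload itself
    return struct.pack(">II", len(payload), zlib.crc32(payload)) + payload

def decode_block(block: bytes) -> bytes:
    """Return the payload, or raise if any byte is missing or corrupted."""
    length, crc = struct.unpack(">II", block[:8])
    payload = block[8:]
    if len(payload) != length or zlib.crc32(payload) != crc:
        raise ValueError("indivisible block is incomplete")
    return payload

# e.g. a segment spanning a turn-on command, masked frames, and a turn-off
segment = b"turn-on mask; masked frames; turn-off mask"
block = encode_block(segment)
```

A copying environment that moves whole blocks therefore cannot separate the turn-on command from the frames it protects.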
The invention will be described in connection with certain preferred embodiments, with reference to the following illustrative figures so that it may be more fully understood. With reference to the figures, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is an illustration of a user-environment in which the invention may be used.
Fig. 2 illustrates an embodiment of the invention in which video data from a source is demultiplexed to extract data defining a procedure and then decoded and the procedure executed to modify the video responsively to a profile. Fig. 3 illustrates an embodiment of the invention in which video data from a source is demultiplexed to extract data defining a procedure, the compressed file modified by the procedure executed responsively to a profile, and the modified compressed file decoded.
Fig. 4 illustrates an embodiment of the invention in which video data from a source is demultiplexed and data defining a procedure is obtained from independent source and in which the video file is decoded and the procedure executed to modify the video responsively to a profile.
Fig. 5 is a figurative representation of a video file to illustrate features of certain embodiments of the invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring to Fig. 1, an example of a physical infrastructure capable of supporting the functional features of the invention includes view system 100 with a computer 140 and various types of input and/or storage devices. The latter include a keyboard 112, a remote control 110, a removable medium such as a floppy disk, optical disk, memory card, etc. 120, Philips Pronto®, programmable controller, voice recognition/activated controller, mouse, gesture-recognition controller, etc. Data may be stored locally on a fixed disk drive 135. Output devices may include a monitor or TV 130, speakers 131, and/or other output devices. The computer 140 (again, any computation-capable device, as discussed in the summary of the invention section) receives data 160 and/or video 170 from an external source which could be a broadcast transmission, a data store, the Internet, a network, a satellite transmission, a switched circuit transmission, or any other source of data or other signal. Note that by the term "computer" we mean a set top box, embedded system, storage device with a controller, or any digital device capable of carrying out the functional requirements discussed herein. The computer 140 executes procedures that may be stored on its data store 135 or embedded in the data 160 and/or video 170 received from an external source or embedded in files transferred to the computer in the form of a data file. The procedures modify the video either in compressed or decompressed form. After modification, the video may be stored on VTR 133 or transmitted by a radio transmitter 137 as a broadcast, or displayed on the TV or monitor 130. The inputs and outputs shown are examples only.
In another embodiment of the invention, the data 160 and the video 170 can be transmitted by two different transmitters. Also, in a further embodiment, the data 160 can be distributed by multiple transmitters, whereas the corresponding video 170 is transmitted by a single transmitter. In this embodiment, the video is, for example, distributed nationally, whereas the data is distributed locally. This enables different procedures to be provided with the video in different regions. In this embodiment, the range of the transmitter that transmits the video 170 is larger than the range of each of the transmitters in the multitude of transmitters that transmit the data 160.
In the previous two embodiments, care has to be taken in the synchronization of the data 160 and the video 170, to ensure that procedures defined in the data are applied to the correct corresponding portions of the video stream.
Referring now also to Fig. 2, in an illustrative embodiment, the computer 140 receives a video file from some source which could be a cable, microwave, satellite, or other broadcast transmission 180, a computer such as a notebook 185, a network 190 such as the Internet, a data store 195, or any other source of analog and/or digital data. These may also include a smart mobile phone, PDA, etc. In the current embodiment, the received data is a video stream. The video stream is received by a demultiplexer 205 which separates the video stream into an active video procedure data stream and a raw video stream. The former is applied to an active streams engine 225 and the latter to a decoder 210 (if necessary to decode a compressed video format). The output of the decoder 210 is applied to a process 215 that checks a profile stored on the computer 140. The profile stores data characterizing the audience. If there is a match between the profile and the current video, the procedure is applied in a process 225 responsively to profile data to generate a modified video stream. If the profile is such as not to warrant a modification of the video stream the original decompressed video is output. The output stream is applied to an output device which may be any of a variety of different sinks. For example, the output may be a broadcast transmission 180, a computer 185, a TV or monitor 131, or a data store 195. Output devices could also include a VTR as illustrated in Fig. 1 and the examples shown in Fig. 2 are merely illustrative examples.
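The Fig. 2 flow just described can be sketched figuratively as follows; the tagged-stream layout and the "allow_mature" profile key are invented for illustration, and the decoding step is omitted.

```python
# Figurative sketch of the Fig. 2 flow: demultiplex, check the profile,
# and run the active procedure only when the profile warrants it.

def demultiplex(combined):
    """Split a combined stream of tagged items into procedure and video."""
    procedure = [item for tag, item in combined if tag == "proc"]
    video = [item for tag, item in combined if tag == "video"]
    return procedure, video

def run(combined, profile):
    procedure, video = demultiplex(combined)
    if not profile.get("allow_mature", True):
        for command in procedure:       # apply each procedure command
            video = command(video)
    return video  # otherwise the original decoded video is output unchanged

censor = lambda frames: [f.replace("nude", "***") for f in frames]
stream = [("proc", censor), ("video", "scene1"), ("video", "nude-scene")]
```

When the profile does not warrant modification, the original video passes through untouched, matching the pass-through branch of process 215.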
The demultiplexer may receive an analog or digital signal. An example of an analog signal is an NTSC signal from a television broadcast. In this case, a common place to put data is in the VBI, in which case the demultiplexer may extract the data residing in the VBI from the raw video stream and apply it to the active streams engine 225. The active streams engine simply runs the procedure applied to it. The active video procedure may consist of more code than can be packaged in a single VBI, in which case the active streams engine 225 is programmed to acquire an entire procedure, the end of which may be indicated in a normal fashion such as by an end of file marker or other delimiter indicating that data preceding the delimiter represents a procedure to be executed. Any suitable protocol may be defined to accumulate a procedure in the memory of the computer before the video segment to which it must be applied is reached. If the video data file is digital, the procedure data may be packaged as a header or interleaved in the data file or in any other suitable way. If it is streaming data, the procedure can be sent in a header file or sent in small parcels as the video is buffered so that playback can begin immediately without waiting for an entire procedure or set of procedures to be loaded, the procedure(s) being accumulated over time. The latter accumulation scheme assumes the video to which they will be applied is not loaded before the procedure(s) is/are loaded. Alternatively, the procedure data can be distributed throughout the video file and executed by an interpreter running on the computer 140. (An interpreter is a program that executes instructions immediately upon receipt without precompiling, for example, like the command line of a text-based operating system shell such as MSDOS or a command mode of a database program like dBase III.)
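The accumulation protocol suggested above might be sketched as follows, with an assumed end-of-procedure delimiter standing in for whatever delimiter or end-of-file marker a real protocol would define.

```python
# Sketch of the accumulation protocol: procedure code arriving in parcels
# (e.g. one per VBI field) is gathered until an end delimiter is seen,
# then executed as a whole. The delimiter value is an assumption.

END = b"\x00END\x00"

class ProcedureAccumulator:
    def __init__(self, execute):
        self.buffer = b""
        self.execute = execute   # callback invoked with a whole procedure

    def feed(self, parcel: bytes) -> None:
        """Called once per parcel extracted by the demultiplexer."""
        self.buffer += parcel
        while END in self.buffer:
            procedure, self.buffer = self.buffer.split(END, 1)
            self.execute(procedure)

ran = []
acc = ProcedureAccumulator(ran.append)
acc.feed(b"draw line 0 0 10")                       # partial: nothing runs
acc.feed(b" 10 2 red" + END + b"play b0-b1" + END)  # completes two procedures
```

Nothing executes until a procedure is complete, which is what guarantees the procedure is in memory before the video segment to which it applies is reached.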
Once the procedure is accumulated in memory, it may be executed responsively to profile data and to indicators in the video file. Referring to Fig. 5, the indicators in the video file or data stream can take various forms. Several different examples are illustrated in Fig. 5, which shows a file or streaming media data 501 with time advancing in the indicated direction. An audio sequence Aud1 could serve as a marker, in which case a sound classifier could be run over the audio track until some feature is detected. Similarly, an image Img1, Img2 or other signal fraction could be recognized to identify a part of the video stream. Even a subimage S1 of a frame image 510 could be classified to trigger an event.
A marker such as M1, M2, and M3 could be written to the file. In an analog file, such as NTSC, the marker could be placed in the VBI. The time since the beginning of the data start point could be tracked and used to indicate portions of the video stream, such as time delimiters T1 and T2. Again, a procedure 500 may be embedded in the video stream prior to the appearance of the part of the stream to which it is applied. For example the procedure 500 could be applied to the sequence demarcated by T1 and T2, but not to one indicated at M3. (Note that time is going up the page.)
Note that indicators are not necessary if the instructions are executable immediately upon receipt. One form of indicator is simply a place marker. In an analog stream, the marker could take the form of a watermark or an icon that is recognized in a portion of the video image or a datum multiplexed into the VBI. In a digital stream, the marker can be any suitable symbol and an indication of a temporal position. Of course in digital embodiments, the marker need not necessarily occupy a position in the stream that coincides with the application of the procedure, but it may. The decoder 210 may be a process that decompresses, decrypts, unpacks, unbundles or performs any other defined process that is required for access to the video data. The particulars of this process are not important to the practice of the invention.
The profile may contain simply an identification of the user, information about the preferences of a user or the user-group (such as a household), or any of a variety of data. For example, the profile could indicate that the user-group is a household with a very young child. The active procedure could query the user before displaying highly violent or sexual subject matter and in the event of no response, mask or delete the potentially disagreeable subject matter. The profile database may include subject matter preferences that the procedure uses to filter a set of selectable attributes. For example, suppose the video file contained many different video files all aggregated such that a particular one can be seen. The profile could filter these and present only one, or multiple ones, to be selected for viewing.
As discussed above, in an embodiment, the procedure(s) consist(s) of commands that operate on separable portions of the video stream. The execution environment is stateless so that any finite number of such portions will always be copied with the procedure(s) applicable to it/them. No information is persisted between divisible portions of the execution environment unless the attached procedure is capable of handling it, or the procedure is capable of generating it itself. In the frame-by-frame stateless embodiment described in the summary section above, the demultiplexer continuously generates commands as they are received. The commands are executed instantly with the demultiplexing or keyed to markers or inherent indicators in the video stream.
Referring now to Fig. 3, in an alternative embodiment, the active procedure is applied to a compressed video stream. Decoding 310 is only done after the active procedure is applied to the raw video stream. In this example, the video is described as compressed, but it could be encrypted, bundled, or otherwise encoded. Again, although not shown, profile data may be supplied to the active streams engine to make the procedure responsive to data in the profile. In Fig. 4, the active procedure is transmitted or otherwise supplied to the active streams engine 425 in a parallel transmission. For example, in a broadcast environment, a parallel transmission could be generated and the video modified according to it. Again, the synchronization could be insured by keying execution to markers or other indicia in the video stream. In this example, the key advantage is the short lifetime of the procedure code. The video is always updated according to procedures that are the most recent supplied by the source of the active procedure. In an environment where the statelessness requirement is not important, the active procedure data can be located at locations that are independent of the portions of the video to which they apply. One requirement, however, is that for a streaming source such as a TV broadcast or an Internet streaming file, the procedure must be loaded before it is needed. The procedure can be broken up, but all of it must be accumulated in memory before it is required. Then, the procedure code can be dumped. Preferably the code and the event that triggers the dumping can be encoded within the procedure itself.
The code defining procedures may be at a high level, including elements that define complex predefined procedures, or at a low level, including elements that define small incremental procedures that must be assembled to perform useful actions. The following are illustrative examples of the kinds of commands that can be executed by a suitable API to modify a media data stream.
Play block b0-b1: Play a series of video blocks from block b0 to block b1.
Draw line x1, y1, x2, y2, W, C: Draw a superimposed line between the indicated coordinates, with the indicated weight and color.
Draw rectangle x1, y1, x2, y2, W, C, F: Draw a superimposed rectangle between the indicated coordinates, with the indicated border weight, color, and fill.
Apply filter (c11, c12, c13, c21, c22, c23, c31, c32, c33), x1, y1, x2, y2: Apply a filter defined by the specified matrix to the specified region.
Include video segment at path://filename.vid: Define an alternate stream and halt the video to insert the alternate stream.
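A minimal sketch of an interpreter for commands of this kind follows; the grammar is simplified (coordinates as plain integers, a toy model of blocks and frames) and only two of the commands are implemented, purely to illustrate the dispatch idea.

```python
# Toy interpreter for the textual commands above; grammar simplified, only
# "Play block" and "Draw rectangle" implemented, and the stream modeled as
# a list of named blocks plus one character-grid frame.

def play_block(state, b0, b1):
    state["played"].extend(state["blocks"][int(b0):int(b1) + 1])

def draw_rectangle(state, x1, y1, x2, y2, w, c, f):
    # border weight w and fill flag f are accepted but ignored in this toy
    for y in range(int(y1), int(y2)):
        for x in range(int(x1), int(x2)):
            state["frame"][y][x] = c

DISPATCH = {"Play block": play_block, "Draw rectangle": draw_rectangle}

def execute(state, command: str):
    for name, fn in DISPATCH.items():
        if command.startswith(name):
            args = command[len(name):].replace(",", " ").split()
            return fn(state, *args)
    raise ValueError("unknown command: " + command)

state = {"blocks": ["b0", "b1", "b2"], "played": [],
         "frame": [[" "] * 4 for _ in range(4)]}
execute(state, "Play block 0 1")
execute(state, "Draw rectangle 1, 1, 3, 3, 1, X, fill")
```

Each command leaves no state behind other than its effect on the stream model, consistent with the stateless embodiments described above.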
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims

CLAIMS:
1. A method of modifying a media information stream, comprising the steps of: incorporating, in said media information stream (501), a definition of a procedure (500) to be applied to modify a portion of said media information stream; implementing said procedure at a point of playback, recording, or retransmission (140, 133, 130, 137, 135) of said media information stream.
2. A method as in claim 1, wherein said procedure includes multiple steps and each step is included in said media information stream adjacent a portion of said media information stream to which said each step is applicable.
3. A method as in claim 1, wherein said step of implementing includes modifying a media information portion of said media information stream.
4. A method as in claim 1, wherein said step of implementing includes providing a command to an interpreter on a computer at said point of playback, recording, or retransmission.
5. A method as in claim 1, wherein said step of implementing includes inputting a command to an interpreter, said interpreter being programmed to execute said command immediately upon said input, said interpreter being executed continuously on a computer at said point of playback, recording, or retransmission.
6. A method as in claim 5, wherein an execution environment of said interpreter is stateless in that no variables are persisted from one command to the next.
7. A method as in claim 5, wherein an execution environment of said interpreter is stateless in that no variables are persisted from one set of commands preceding a video segment, to which said one set is applied to modify it, to a next set of commands.
8. A method of modifying a media information stream, comprising the steps of: synchronizing at least one command, which points to a procedure and resides in a data stream, with a media information stream; invoking said procedure responsively to said command at a point of playback in said media information stream such that said procedure modifies said media information stream; said synchronization being effective to insure that said procedure operates on a specified portion of said media information stream.
9. A media information stream, comprising streaming video data with tokens (500) located at various points in said streaming video data, said tokens indicating commands such that when said media information stream is applied to an interpreter, said interpreter is enabled to modify said streaming video data responsively to said commands.
10. A media information stream as in claim 9, wherein said interpreter is stateless such that each contiguous set of commands is executed beginning in the same virtual machine state as every other contiguous set of commands.
11. A media information stream as in claim 9, wherein said modifying includes applying a filter to at least one frame of said streaming video data.
12. A media information stream as in claim 9, wherein said streaming video data also includes markers and said commands are executable responsively to said markers.
13. A media information stream as in claim 9, wherein said commands are responsive to features in said streaming video data.
14. A broadcast system, comprising: a first transmitter (137) effective to output a media information stream; a second transmitter (137) effective to output a procedure data stream synchronized with said media information stream; said procedure data stream including data indicating at least one procedure, which, when executed is effective to modify said media information stream; a synchronization of said media information stream and said procedure data stream being such as to insure that procedures defined in said procedure data stream are applied to specified portions of said media information stream.
15. A system as in claim 14, wherein the second transmitter comprises a matrix of transmitters; each of the transmitters of the matrix having a range that is smaller than the range of the first transmitter.
16. A system as in claim 15, wherein each of the transmitters of the matrix transmits different data.
17. A system as in claim 14, wherein an initial state of a state machine corresponding to any series of commands in said procedure data stream and defining a procedure to be applied to a contiguous portion of said media information stream is the same state.
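Claims 14-16 describe a wide-range transmitter carrying the media stream while a matrix of shorter-range transmitters each carries different procedure data, so receivers in different areas see different modifications. A minimal sketch of that selection, with entirely hypothetical positions and payloads:

```python
# Hypothetical sketch of claims 14-16: the media stream is broadcast
# wide-range, while each short-range transmitter in the matrix carries a
# different, locally relevant procedure stream. A receiver effectively
# picks up the procedure data of the nearest matrix transmitter.

def procedure_for(location, matrix):
    """Return the procedure data of the nearest short-range transmitter.

    `location` and each transmitter's `pos` are 1-D coordinates here
    purely for illustration.
    """
    tx = min(matrix, key=lambda t: abs(t["pos"] - location))
    return tx["data"]

matrix = [
    {"pos": 0,  "data": "overlay:north_region_ad"},
    {"pos": 50, "data": "overlay:south_region_ad"},
]
print(procedure_for(10, matrix))   # overlay:north_region_ad
print(procedure_for(45, matrix))   # overlay:south_region_ad
```

The design choice is that localization comes from transmitter range rather than from addressing: the same media stream is modified differently per area without the receiver needing to know its own location.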
18. A device for receiving and modifying a media information stream, comprising: a demultiplexer (205, 305, 405) with an input for receiving a combined data stream and for outputting a media information stream and a procedure data stream; a controller (225, 325, 425) with an input and an output; said controller being programmed to receive said media information stream and said procedure data stream and to modify said media information stream responsively to said procedure data stream.
19. A device as in claim 18, wherein said procedure data stream includes multiple commands and each command is included in said combined data stream adjacent a portion representing the portion of said media information stream to which said each command is applicable, whereby said procedure data stream is synchronized with said media information stream.
20. A device as in claim 18, wherein said controller is programmed to generate an interpreter process that executes commands in said procedure data stream to realize a state machine whose cycles are synchronized responsively to a structure of said combined data stream.
21. A device as in claim 18, wherein said controller is programmed to realize an interpreter process that is executed continuously such that commands in said procedure data stream are executed immediately upon being received by said controller.
22. A device as in claim 21, wherein an execution environment of said interpreter is stateless in that no variables are persisted from one command to the next.
23. A device as in claim 21, wherein an execution environment of said interpreter is stateless in that no variables are persisted from one set of commands, preceding a video segment that said one set is applied to modify, to a next set of commands.
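Claims 18-23 describe a receiver whose demultiplexer separates a combined stream into media and procedure substreams, with a controller that executes each command immediately on receipt against the adjacent media portion. A minimal sketch under those assumptions; the packet tags, the `brighten` command, and all names are illustrative only:

```python
# Hypothetical sketch of claims 18, 19, and 21: procedure packets arrive
# adjacent to the media portion they apply to, so demultiplexing and
# immediate execution keep the two streams synchronized by construction.

def run_device(combined):
    """Demultiplex a combined stream of ('proc', cmd) / ('media', frame)
    packets and modify the media responsively to the procedure data.

    Each procedure packet applies to the media packet that follows it in
    the combined stream (claim 19), is executed as soon as it is received
    (claim 21), and leaves no state behind afterwards (claim 22).
    """
    output = []
    current = None                    # command adjacent to the next media portion
    for tag, payload in combined:
        if tag == "proc":
            current = payload
        else:
            frame = payload
            if current == "brighten":
                frame = [min(255, px + 50) for px in frame]
            output.append(frame)
            current = None            # stateless between segments
    return output

combined = [
    ("proc", "brighten"),
    ("media", [100, 200, 250]),
    ("media", [5, 5, 5]),             # no adjacent command: unchanged
]
print(run_device(combined))           # [[150, 250, 255], [5, 5, 5]]
```

Synchronization here needs no timestamps: adjacency in the combined stream is the synchronization, which is the point of claim 19.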
PCT/EP2001/009634 2000-08-21 2001-08-13 Method and system for active modification of video content responsively to processes and data embedded in a video stream WO2002017633A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2002522196A JP2004507939A (en) 2000-08-21 2001-08-13 Method and system for actively modifying video content in response to processes and data embedded in a video stream
KR1020027005030A KR20020041828A (en) 2000-08-21 2001-08-13 Method and system for active modification of video content responsively to processes and data embedded in a video stream
EP01974203A EP1314313A2 (en) 2000-08-21 2001-08-13 Method and system for active modification of video content responsively to processes and data embedded in a video stream

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US64318600A 2000-08-21 2000-08-21
US09/643,186 2000-08-21

Publications (2)

Publication Number Publication Date
WO2002017633A2 true WO2002017633A2 (en) 2002-02-28
WO2002017633A3 WO2002017633A3 (en) 2002-06-27

Family

ID=24579721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2001/009634 WO2002017633A2 (en) 2000-08-21 2001-08-13 Method and system for active modification of video content responsively to processes and data embedded in a video stream

Country Status (5)

Country Link
EP (1) EP1314313A2 (en)
JP (1) JP2004507939A (en)
KR (1) KR20020041828A (en)
CN (1) CN1394441A (en)
WO (1) WO2002017633A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006515722A (en) * 2002-10-10 2006-06-01 トムソン ライセンシング Display method without interruption of a television program having a hidden program segment
WO2006090159A1 (en) * 2005-02-24 2006-08-31 I-Zone Tv Limited Interactive television
WO2007072959A1 (en) * 2005-12-19 2007-06-28 Matsushita Electric Industrial Co., Ltd. Broadcast receiving apparatus
EP1761060A3 (en) * 2005-09-06 2008-09-03 Electronics and Telecommunications Research Institute Transmission system, receiving terminal, and method for controlling data broadcasting contents
US7657057B2 (en) 2000-09-11 2010-02-02 Digimarc Corporation Watermark encoding and decoding
US7803998B2 (en) 2005-12-21 2010-09-28 Pioneer Hi-Bred International, Inc. Methods and compositions for modifying flower development
US10542321B2 (en) 2010-04-01 2020-01-21 Saturn Licensing Llc Receiver and system using an electronic questionnaire for advanced broadcast services

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101027249B1 (en) * 2003-02-21 2011-04-06 파나소닉 주식회사 Recording medium, playback device, recording method, and playback method
CN108287651B (en) 2012-05-09 2021-04-13 苹果公司 Method and apparatus for providing haptic feedback for operations performed in a user interface
DE112013002409T5 (en) 2012-05-09 2015-02-26 Apple Inc. Apparatus, method and graphical user interface for displaying additional information in response to a user contact
CN102970610B (en) * 2012-11-26 2015-07-08 东莞宇龙通信科技有限公司 Intelligent displaying method and electronic equipment
US9632664B2 (en) 2015-03-08 2017-04-25 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US9639184B2 (en) 2015-03-19 2017-05-02 Apple Inc. Touch input cursor manipulation
US9860451B2 (en) * 2015-06-07 2018-01-02 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US9880735B2 (en) 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5848934A (en) * 1995-08-31 1998-12-15 U.S. Philips Corporation Interactive entertainment attribute setting
WO1999026415A1 (en) * 1997-11-13 1999-05-27 Scidel Technologies Ltd. Method and system for personalizing images inserted into a video stream
US5990972A (en) * 1996-10-22 1999-11-23 Lucent Technologies, Inc. System and method for displaying a video menu
EP1021036A2 (en) * 1997-03-11 2000-07-19 Actv, Inc. A digital interactive system for providing full interactivity with live programming events

Also Published As

Publication number Publication date
KR20020041828A (en) 2002-06-03
EP1314313A2 (en) 2003-05-28
JP2004507939A (en) 2004-03-11
CN1394441A (en) 2003-01-29
WO2002017633A3 (en) 2002-06-27

Similar Documents

Publication Publication Date Title
US20200162787A1 (en) Multimedia content navigation and playback
US6889383B1 (en) Delivery of navigation data for playback of audio and video content
US7530084B2 (en) Method and apparatus for synchronizing dynamic graphics
CA2425741C (en) Methods and apparatus for continuous control and protection of media content
KR100618923B1 (en) Information processing apparatus, method, and computer-readable medium
CN1875630B (en) Content distribution server and content distribution method
US8208794B2 (en) Reproducing apparatus, reproducing method, program, and program storage medium
US20060031870A1 (en) Apparatus, system, and method for filtering objectionable portions of a multimedia presentation
US20100169906A1 (en) User-Annotated Video Markup
WO2002017633A2 (en) Method and system for active modification of video content responsively to processes and data embedded in a video stream
KR20040079437A (en) Alternative advertising
WO2007072327A2 (en) Script synchronization by watermarking
US20100275226A1 (en) Server apparatus, trick reproduction restriction method, and reception apparatus
JP2007282048A (en) Method and device for processing content, method and device for generating change information, control program, and recording medium
JP2001359069A (en) Information processing unit and its method, as well as program code and storage medium
JP4392880B2 (en) Authentication apparatus, control method therefor, and storage medium
KR100781907B1 (en) Apparatus for and method of presenting a scene
KR101971181B1 (en) Method and system for protecting copyright of videos by sharing profit of advertisement
JP2008154124A (en) Server apparatus and digital content distribution system
JP4878495B2 (en) Broadcast receiving apparatus and control method thereof
KR101034758B1 (en) Method for Providing Initial Behavior of Multimedia Application Format Content and System therefor
JPWO2005112454A1 (en) Metadata conversion device, metadata conversion method, and metadata conversion system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CN JP KR

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2002 522196

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1020027005030

Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 1020027005030

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 018032052

Country of ref document: CN

AK Designated states

Kind code of ref document: A3

Designated state(s): CN JP KR

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

WWE Wipo information: entry into national phase

Ref document number: 2001974203

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2001974203

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2001974203

Country of ref document: EP