US20140085450A1 - Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream - Google Patents

Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream

Info

Publication number
US20140085450A1
US20140085450A1 (Application US 14/093,210)
Authority
US
United States
Prior art keywords
video stream
people
virtual
virtual content
movie theater
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/093,210
Inventor
Mitchell M. Rohde
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quantum Signal AI LLC
Original Assignee
Quantum Signal LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 11/460,981 (published as US20070030343A1)
Application filed by Quantum Signal LLC
Priority to US 14/093,210
Publication of US20140085450A1
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41415 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/812 Monomedia components thereof involving advertisement data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay


Abstract

A video stream of people within a venue like a movie theater is received. The people within the video stream are analyzed. Based on analysis of the people within the video stream, virtual content is overlaid onto the video stream. The video stream, with the virtual content overlaid thereon, is then displayed onto a screen within the venue. As such, the virtual content and one or more of the people within the venue can appear to be interacting with one another as if the virtual content were real and present within the venue.

Description

    RELATED APPLICATIONS
  • The present patent application is a divisional of the previously filed patent application entitled “Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream,” filed on Feb. 16, 2010, and assigned Ser. No. 12/706,206, which is a continuation-in-part of the previously filed patent application entitled “Interactive, video-based content for theaters,” filed on Jul. 29, 2006, and assigned Ser. No. 11/460,981, which claims the benefit of the previously filed provisional patent application having the same title, filed on Aug. 6, 2005, and assigned Ser. No. 60/705,746. The content of all the above-referenced patent applications is hereby incorporated into the present patent application by reference.
  • BACKGROUND
  • Patrons of a movie theater typically arrive some time before the show time of the movie for which they bought tickets. During this time, they may buy concessions and then settle into their seats, waiting for the movie to start. Movie theaters have tried to engage their customers during this time, by showing advertisements on the screen, and so on. However, many customers tune out these advertisements, reducing their effectiveness. Furthermore, younger patrons in particular can become bored and start doing things that the movie theaters would prefer they not do, such as causing problems with other patrons, raising their voices too much, and so on.
  • SUMMARY
  • The present invention overlays virtual content onto a video stream of the people within a movie theater, based on an analysis of the people within the video stream. In one embodiment, a video stream of the people within a movie theater is received. A processor of a computing device analyzes the people within the video stream, and overlays virtual content onto the video stream based on this analysis. The video stream, with the virtual content overlaid thereon, is displayed on a screen within the movie theater. For instance, the virtual content and one or more of the people may appear to be interacting with one another, as if the virtual content were real and present within the movie theater.
  • The virtual content may include advertisements, such as logos of businesses. Because of the interactive nature of the virtual content, the patrons within the movie theater are less likely to tune out the virtual content, increasing the effectiveness of the advertisements. The virtual content may also engage patrons who would otherwise become bored, reducing the likelihood that the patrons start partaking in conduct that the movie theaters would prefer they not do. Still other advantages, aspects, and embodiments of the invention will become apparent by reading the detailed description that follows, and by referring to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The drawings referenced herein form a part of the specification. Features shown in the drawing are meant as illustrative of only some embodiments of the invention, and not of all embodiments of the invention, unless explicitly indicated, and implications to the contrary are otherwise not to be made.
  • FIG. 1 is a diagram of a movie theater, according to an embodiment of the invention.
  • FIGS. 2-6 are diagrams of examples of virtual content that may be overlaid onto a video stream of people within a movie theater, according to an embodiment of the invention.
  • FIG. 7 is a diagram of a system, according to an embodiment of the invention.
  • FIG. 8 is a flowchart of a method, according to an embodiment of the invention.
  • FIG. 9 is a flowchart of a method, according to another embodiment of the invention.
  • FIG. 10 is a diagram of an example of a first video stream integrated within a second video stream, according to an embodiment of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized, and logical, mechanical, and other changes may be made without departing from the spirit or scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
  • FIG. 1 shows a representative movie theater 100, according to an embodiment of the invention. The movie theater 100 is more generally a venue. A number of people 102 are seated within the movie theater 100 towards a screen 106. A projector 104 projects a video stream onto the screen 106, for viewing by the people 102. A video camera 108 records or generates a video stream of the people 102.
  • In general, the video stream of the people 102 recorded or generated by the video camera 108 is analyzed, and virtual content is overlaid onto the video stream based on this analysis. The projector 104 then displays the video stream of the people 102, within which the virtual content has been overlaid, onto the screen 106. This process occurs in real time or in near-real time.
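  • By way of illustration only, and not as a description of any particular implementation, the following sketch shows one way such a real-time capture-analyze-overlay-display loop could be wired together in Python with OpenCV. The camera index, the window standing in for the projector 104 and screen 106, and the simple frame-differencing analysis are all assumptions made for the example.

    import cv2

    def analyze_people(frame, reference):
        # Crude audience analysis: foreground mask from differencing against a
        # reference frame of the (nearly empty) auditorium.
        diff = cv2.absdiff(frame, reference)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
        return mask

    def overlay_virtual_content(frame, mask):
        # Placeholder overlay: tint regions where people are detected and add a
        # caption; real content would be games, characters, or advertisements.
        frame[mask > 0] = (0.5 * frame[mask > 0] + (0, 64, 0)).astype(frame.dtype)
        cv2.putText(frame, "Sponsored message here", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        return frame

    cap = cv2.VideoCapture(0)              # stands in for video camera 108
    ok, reference = cap.read()             # reference frame of the auditorium
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        mask = analyze_people(frame, reference)
        out = overlay_virtual_content(frame, mask)
        cv2.imshow("screen", out)          # stands in for projector 104 / screen 106
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()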
  • There may be more than one video camera 108. For instance, more than one video camera 108 may be used to provide for better coverage of the people 102 within the theater 100, as well as different types of coverage of the people 102 within the theater 100. As examples of the latter, stereo and time-of-flight video cameras may be employed.
  • Different examples of such virtual content, according to different embodiments of the invention, are now described. The present invention, however, is not limited to these examples. Other embodiments of the invention may employ other types of virtual content, in addition to and/or in lieu of those described herein. In some embodiments, the virtual content is overlaid so that it appears one or more of the people within the movie theater are interacting with the virtual content as if the virtual content were real and present within the theater.
  • FIG. 2 shows an example of virtual content overlaid onto a video stream 200 of the people 102 in a movie theater, according to an embodiment of the invention. The video stream 200 is displayed on the screen 106. The video stream 200 is of the people 102 seated in the movie theater.
  • A virtual object 202 has been overlaid onto the video stream 200. That is, the virtual object 202 does not actually exist in the movie theater, but rather is overlaid onto the video stream 200 in FIG. 2. The virtual object 202 is a moving object, and has motion to approximate or mirror that of a real physical object, like an inflated beach ball.
  • When the virtual object 202 is first overlaid onto the video stream 200, it may move as if it had dropped from the ceiling of the movie theater. The video stream 200 is analyzed to detect which person is close to the virtual object 202, and to detect motion of this person. The motion of the virtual object 202 as overlaid onto the video stream 200 is then changed as if the virtual object 202 were real, and this person were interacting with the virtual object 202.
  • For example, as specifically depicted in FIG. 2, the person 204 is raising his or her hands to hit the virtual object 202. As such, the motion of the virtual object 202 as overlaid onto the video stream 200 will change so that it appears the object 202 has bounced off or has been hit by the person 204. In this respect, the virtual object 202 and the person 204 appear to be interacting with one another, as if the virtual object 202 were real and present within the movie theater.
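  • A minimal sketch of how this bouncing interaction could be approximated is shown below. It assumes the audience analysis step produces a per-pixel motion mask, and it models the virtual object 202 as a circle with position, velocity, and gravity; none of these particulars are mandated by the embodiment.

    import numpy as np

    class VirtualBall:
        def __init__(self, x, y, radius=40):
            self.pos = np.array([x, y], dtype=float)
            self.vel = np.array([0.0, 4.0])   # starts by "dropping from the ceiling"
            self.radius = radius

        def step(self, motion_mask, gravity=0.3):
            h, w = motion_mask.shape
            # Sample the motion mask just below the ball; strong motion there is
            # treated as a nearby person raising his or her hands to hit it.
            y = int(min(self.pos[1] + self.radius, h - 1))
            x0 = int(np.clip(self.pos[0] - self.radius, 0, w - 2))
            x1 = int(np.clip(self.pos[0] + self.radius, x0 + 1, w))
            hit = motion_mask[y, x0:x1].mean() > 64
            if hit and self.vel[1] > 0:
                self.vel[1] = -abs(self.vel[1]) * 0.9        # bounce back upward
                self.vel[0] += np.random.uniform(-2.0, 2.0)  # small sideways kick
            self.vel[1] += gravity
            self.pos += self.vel
            self.pos[0] = np.clip(self.pos[0], self.radius, w - self.radius)
            return tuple(self.pos.astype(int))               # where to draw the ball

  • Each frame, step() would be called with the latest motion mask, and the returned coordinates used to draw the ball, and any logo on it, onto the outgoing video stream 200.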
  • The virtual object 202 may have a logo of a business, or an advertisement, on it. Therefore, while the people 102 are having fun playing with a virtual beach ball, for instance (i.e., interacting with the virtual content), they are more likely to continue watching the video stream 200 displayed on the screen 106, and thus more likely to view the logo or the advertisement, instead of not concentrating on the screen 106. The invention thus advantageously entertains the people 102 while they are waiting for a movie to start, while potentially providing increased advertising revenue to the movie theater.
  • FIG. 3 shows an example of virtual content overlaid onto a video stream 300 of the people 102 in a movie theater, according to a second embodiment of the invention. The video stream 300 is displayed on the screen 106. The video stream 300 is of the people 102 seated in the movie theater.
  • A virtual object 302 has been overlaid onto the video stream 300. That is, the virtual object 302 does not actually exist in the movie theater, but rather is overlaid onto the video stream 300 in FIG. 3. The virtual object 302 is a ribbon or a rainbow that starts from the top of the video stream 300 and lengthens and extends downward. The video stream 300 is analyzed to detect which person is close to the virtual object 302, and to detect motion of this person to see if he or she is trying to catch the object 302. If this person does not appear to be trying to catch the virtual object 302, then the object 302 continues to lengthen and extend downward, towards the bottom of the video stream 300.
  • For example, as specifically depicted in FIG. 3, the person 304 is raising his or her hands so that it appears the person 304 has caught the virtual object 302. Once the person 304 has caught the virtual object 302, the virtual object 302 may disappear, and words like “good job” or “nice catch” may be virtually displayed on the video stream 300 near the person 304. In this respect, the virtual object 302 and the person 304 appear to be interacting with one another, as if the virtual object 302 were real and present within the movie theater.
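  • One hedged way to implement the descending ribbon and the catch test is sketched below. It again assumes a motion mask from the analysis step, and treats sustained motion in a small window around the ribbon's lower tip as a catch; the window size and threshold are illustrative values.

    def update_ribbon(tip_y, motion_mask, x, speed=3, window=25, threshold=64):
        # Returns (new_tip_y, caught) for a ribbon hanging down at column x.
        h, w = motion_mask.shape
        y0, y1 = max(0, tip_y - window), min(h, tip_y + window)
        x0, x1 = max(0, x - window), min(w, x + window)
        region = motion_mask[y0:y1, x0:x1]
        caught = region.size > 0 and region.mean() > threshold
        if caught:
            return tip_y, True                       # stop lengthening; show "nice catch"
        return min(tip_y + speed, h - 1), False      # keep lengthening downward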
  • The virtual object 302 may also have a logo of a business, or an advertisement, on it. Therefore, while the people 102 are having fun catching virtual ribbons or rainbows, for instance (i.e., interacting with the virtual content), they are more likely to continue watching the video stream 300 displayed on the screen 106, and thus more likely to view the logo or the advertisement, instead of not concentrating on the screen. The invention thus advantageously entertains the people 102 while they are waiting for a movie to start, while potentially providing increased advertising revenue to the movie theater.
  • The embodiments of FIGS. 2 and 3, among other embodiments of the invention, are examples of games. In these games, the people within the video stream are analyzed, and virtual content overlaid onto the video stream, to result in one or more of the people playing games in relation to the virtual content. In FIG. 2, the game is to hit a virtual beach ball, whereas in FIG. 3, the game is to catch a virtual ribbon or rainbow.
  • FIG. 4 shows an example of virtual content overlaid onto a video stream 400 of the people 102 in a movie theater, according to a third embodiment of the invention. The video stream 400 is displayed on the screen 106. The video stream 400 is of the people 102 seated in the movie theater.
  • A virtual object 402 has been overlaid onto the video stream 400. That is, the virtual object 402 does not actually exist in the movie theater, but rather is overlaid onto the video stream 400 in FIG. 4. The virtual object 402 is a divider, which logically divides the people 102 into two groups, a left group and a right group.
  • Virtual text 404 also is overlaid onto the video stream 400. The text 404 is a trivia question or a poll question. The people 102 are requested to wave their hands when the choice they want to select is virtually displayed on the video stream 400 projected onto the screen 106. After each choice is virtually displayed, the motion of the people within the video stream 400 is detected. In the example specifically depicted in FIG. 4, the people 102 have been logically divided into groups on either side of the virtual object 402, and are asked via the virtual text 404 to wave their hands when the correct answer to a movie trivia question is shown.
  • Once all the choices have been virtually displayed, in the case of a trivia question, it is determined which choice each group of the people 102 selected by virtue of their detected motion. The correct choice may then be virtually displayed, along with which group or groups of the people 102, if any, selected the correct choice. There may be a number of such trivia questions. As such, the groups of the people 102 are playing a trivia game against each other.
  • In the case of a poll, once all the choices have been virtually displayed, the top choice selected by each group of the people 102 by virtue of their detected motion is determined. The top choice for each group may then be virtually displayed. For instance, virtual text may be overlaid onto the video stream 400 that says “you guys prefer soft drink A, while you guys prefer soft drink B,” and so on. There may be a number of such poll questions.
  • In the embodiment represented by FIG. 4, then, the people 102 within the video stream 400 are analyzed, and the virtual content 402 and 404 overlaid onto the video stream 400, to result in one or more of the people 102 answering a question. Analyzing the people 102 within the video stream 400 in this embodiment encompasses logically dividing the people 102 into a number of groups and detecting motion of the people 102 within each group. The virtual content is ultimately overlaid onto the video stream 400 based on the motions of the people 102 within the groups—such as which group answered which trivia questions correctly, and so on.
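  • The group-based answer detection can be approximated as sketched below, under the assumption that the virtual object 402 corresponds to a known divider column in the image and that motion energy (the sum of the motion mask on each side) is accumulated while each choice is displayed. The data layout and thresholds are illustrative only.

    import numpy as np

    def motion_energy_by_group(motion_mask, divider_x):
        # Split the frame at the virtual divider 402 and sum motion on each side.
        left = float(motion_mask[:, :divider_x].sum())
        right = float(motion_mask[:, divider_x:].sum())
        return left, right

    def top_choice_per_group(energy_per_choice):
        # energy_per_choice maps each displayed choice to the (left, right)
        # motion energy accumulated while that choice was on the screen 106.
        left_pick = max(energy_per_choice, key=lambda c: energy_per_choice[c][0])
        right_pick = max(energy_per_choice, key=lambda c: energy_per_choice[c][1])
        return left_pick, right_pick

    # Example input: {"soft drink A": (1.2e6, 3.4e5), "soft drink B": (2.0e5, 2.8e6)}
    # -> the left group prefers soft drink A, the right group prefers soft drink B.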
  • The embodiment of FIG. 4 may also be a game that is played by the people 102 before the movie starts. The virtual object 402, the virtual text 404, and/or other virtual objects may include logos of businesses, or advertisements. Therefore, while the people 102 are having fun answering poll or trivia questions, for instance (i.e., interacting with the virtual content), they are more likely to continue watching the video stream 400 displayed on the screen 106, and thus more likely to view the logos or the advertisements. The invention thus advantageously entertains the people 102 while they are waiting for a movie to start, while potentially providing increased advertising revenue to the theater.
  • FIG. 5 shows an example of virtual content overlaid onto a video stream 500 of the people 102 in a movie theater, according to a fourth embodiment of the invention. The video stream 500 is displayed on the screen 106. The video stream 500 is of the people 102 seated in the movie theater.
  • A virtual character 504 has been overlaid onto an empty seat 502 in the video stream 500. The virtual character 504 does not actually exist and is not present in the movie theater, but rather is overlaid onto the video stream 500 in FIG. 5. In the example of FIG. 5, for instance, the virtual character 504 is a one-eyed alien, such as a Cyclops. Thus, the virtual character 504 appears to be sitting in the empty seat 502 as if the character 504 were real and present within the movie theater. Analyzing the video stream 500 therefore includes locating an empty seat within the movie theater onto which to overlay the virtual character 504.
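  • One possible, purely illustrative way of locating an empty seat is to compare the live frame against a stored image of the empty auditorium over a fixed list of seat regions, as in the sketch below. The seat coordinates, the reference image, and the threshold are all assumptions of the example, not requirements of the embodiment.

    import cv2
    import numpy as np

    def find_empty_seat(frame, empty_reference, seat_boxes, threshold=12.0):
        # Returns the first seat box (x, y, w, h) whose appearance still matches
        # the empty auditorium, i.e. a seat nobody appears to be sitting in.
        gray_now = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray_ref = cv2.cvtColor(empty_reference, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in seat_boxes:
            now = gray_now[y:y + h, x:x + w].astype(np.float32)
            ref = gray_ref[y:y + h, x:x + w].astype(np.float32)
            if np.abs(now - ref).mean() < threshold:
                return (x, y, w, h)   # candidate seat for the virtual character 504
        return None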
  • The virtual character 504 may be overlaid in conjunction with an advertisement. For example, the virtual text 506 may be a teaser advertisement associated with a movie to be released in the future. As a way to increase interest in the movie, the virtual character 504 is overlaid onto the video stream 500. The invention thus advantageously entertains the people 102 while they are waiting for a movie to start, and increases interest in the advertisement with which the virtual text 506 is associated, by overlaying the virtual character 504 onto the video stream 500.
  • FIG. 6 shows an example of virtual content overlaid onto a video stream 600 of the people 102 in a movie theater, according to a fifth embodiment of the invention. The video stream 600 is displayed on the screen 106. The video stream 600 is of the people 102 seated in the movie theater.
  • A virtual object 604 has been overlaid onto the video stream 600. The virtual object 604 is a large arrow, which draws or calls attention to an actual and real given person 602 seated in the movie theater. Analyzing the video stream 600 therefore includes locating and selecting a person, such as randomly, within the movie theater. Virtual text 606 may also be overlaid onto the video stream 600, to describe the person selected, such as “smart guy!” in FIG. 6.
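  • A rough sketch of selecting a person at random and overlaying the arrow and caption is given below. The HOG pedestrian detector is only a stand-in for whatever person-detection technique is actually used (it is tuned for standing people, not seated audiences), and the caption text is taken from the example of FIG. 6.

    import cv2
    import random

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def highlight_random_person(frame, caption="smart guy!"):
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) == 0:
            return frame
        x, y, w, h = random.choice(list(boxes))    # pick one detected person 602
        tip = (int(x + w // 2), int(max(0, y - 10)))
        tail = (int(x + w // 2), int(max(0, y - 120)))
        cv2.arrowedLine(frame, tail, tip, (0, 0, 255), 6, tipLength=0.3)   # object 604
        cv2.putText(frame, caption, (int(max(0, x - 40)), int(max(30, y - 130))),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)         # text 606
        return frame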
  • The invention thus advantageously entertains the people 102 while they are waiting for a movie to start. If there is additional text overlaid onto the video stream 600 associated with an advertisement or a logo of a business, the virtual object 604 and the virtual text 606 increase the likelihood that the people 102 will view and see the advertisement or logo. That is, the virtual object 604 is intended to draw the interest of the people 102 to the screen 106 even before the movie starts.
  • FIG. 7 shows a system 700, according to an embodiment of the invention. The system 700 includes the video camera 108 and a computing device 704. The system 700 can also include one or more lights 712 to illuminate the people within the movie theater or other venue, and the projector 104. The video camera 108 generates a video stream 702 of the people within the movie theater or other venue.
  • The computing device 704 receives the video stream 702. The computing device 704 includes at least a processor 706 and a computer-readable storage medium 708, such as semiconductor memory and/or a hard disk drive. The computing device 704 can and typically does include other components. The computer-readable storage medium 708 stores a computer program 710 that is executed by the processor 706.
  • The computer program 710, when executed by the processor 706, analyzes the people within the video stream 702, and based on this analysis, overlays virtual content onto the video stream 702, to result in a video stream 702′ that has virtual content overlaid thereon. Examples of such virtual content have been described above. The computer program 710 transmits the video stream 702′ to the projector 104, which displays the video stream 702′ on a screen within the movie theater or other venue.
  • FIG. 8 shows a method 800, according to an embodiment of the invention. The method 800 can be performed as a result of execution of the computer program 710 stored on the computer-readable storage medium 708, by the processor 706. The video stream 702 of the people within a movie theater or other venue is received (802), as generated or recorded by the video camera 108.
  • The people within the video stream 702 are analyzed (804). Such analysis is performed using appropriate image processing and/or computer vision techniques, as can be appreciated by those of ordinary skill within the art. For instance, the locations of the people within the video stream 702 may be determined, the motion of the people within the stream 702 may be detected, the outlines or contours of the people within the stream 702 may be detected, and so on. As another example, various body parts of the people, such as their faces, hands, and other parts, may be detected and tracked within the video stream 702.
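  • The sketch below illustrates, with assumed and interchangeable techniques, the kinds of analysis listed above: frame differencing for motion, contour extraction for outlines, and a Haar cascade for face detection. It is an example of the category of processing, not a statement of the specific algorithms used by any embodiment.

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def analyze_frame(prev_gray, frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Motion of the people: difference against the previous frame.
        motion = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255,
                               cv2.THRESH_BINARY)[1]

        # Outlines or contours of the moving regions.
        contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

        # Locations of faces, which can then be tracked from frame to frame.
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        return gray, motion, contours, faces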
  • Virtual content is then overlaid onto the video stream 702, based on the analysis of the people that has been performed (806). Static or animated virtual content, such as borders, graphics, and so on, may be synthesized based on the location, motion, and/or action of the people within the video stream 702, as can be appreciated by those of ordinary skill within the art. The video stream 702 may be used in whole or in part with the overlaid content. The resulting video stream 702′, with the virtual content overlaid thereon, is then displayed, or caused to be displayed, on a screen within the movie theater or other venue (808).
  • It is noted that, although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. Other applications and uses of embodiments of the invention, besides those described herein, are amenable to at least some embodiments. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is manifestly intended that this invention be limited only by the claims and equivalents thereof.
  • For example, FIG. 9 shows a method 900, according to another embodiment of the invention. Like the method 800 of FIG. 8, the method 900 can be performed as a result of execution of the computer program 710 stored on the computer-readable storage medium 708, by the processor 706. The (first) video stream 702 of the people within a movie theater or other venue is received (902), as generated or recorded by the video camera 108.
  • The people within the (first) video stream 702 are analyzed (904). Such analysis is performed using appropriate image processing and/or computer vision techniques, as can be appreciated by those of ordinary skill within the art. For instance, the locations of the people within the video stream 702 may be determined, the motion of the people within the stream 702 may be detected, the outlines or contours of the people within the stream 702 may be detected, and so on. As another example, various body parts of the people, such as their faces, hands, and other parts, may be detected and tracked within the video stream 702.
  • A portion of the (first) video stream 702 is integrated within another (second) video stream, based on the analysis of the people that has been performed (906). For example, at least a part of one person within the (first) video stream 702 may be integrated within the second video stream. The second video stream, with the portion of the (first) video stream 702 integrated therein, is then displayed, or caused to be displayed, on a screen within the movie theater or other venue (908).
  • FIG. 10 shows an example of a (second) video stream 1000 with a portion of a (first) video stream 1002 integrated therein, according to an embodiment of the invention. The video stream 1000 with the portion of the video stream 1002 integrated therein is displayed on the screen 106. The portion of the video stream 1002 is the head of a person seated in the movie theater in which the screen 106 is located. By comparison, the video stream 1000 is a promotional trailer for a movie.
  • Therefore, the head of a person seated in the movie theater is transposed onto the body 1004 within the promotional trailer for a movie. The purpose is to increase the audience's attention to the promotional trailer, by substituting the head of a person seated in the movie theater for the head of the actor within the promotional trailer. This may be done to comedic effect, as well. In the example of FIG. 10, for instance, the body 1004 is that of a bodybuilder, whereas the audience member within the video stream 1002 whose head is transposed onto the body 1004 may not be a bodybuilder at all.
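  • As a hedged sketch only, the head transposition could be performed by detecting a face in the audience stream, cropping a slightly enlarged head region, and blending it onto a known head position in the trailer frame, for example with OpenCV's seamless cloning. The face detector, padding, and fixed head position are assumptions of the example, not requirements of the embodiment.

    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def transpose_head(audience_frame, trailer_frame, head_center, head_size=120):
        # head_center: (x, y) in the trailer frame where the body 1004's head sits.
        gray = cv2.cvtColor(audience_frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            return trailer_frame
        x, y, w, h = faces[0]
        pad = int(0.3 * h)                       # include some hair and chin
        y0, x0 = max(0, y - pad), max(0, x - pad // 2)
        head = audience_frame[y0:y + h + pad // 2, x0:x + w + pad // 2]
        head = cv2.resize(head, (head_size, head_size))
        mask = 255 * np.ones(head.shape[:2], dtype=np.uint8)
        # Poisson blending pastes the audience member's head onto the body 1004.
        return cv2.seamlessClone(head, trailer_frame, mask, head_center,
                                 cv2.NORMAL_CLONE)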
  • In general, then, this example shows how in one embodiment, a portion of the video stream of the people within a venue may be integrated with another video stream, such as that of a promotional trailer for a movie. The portion of the video stream of the people within a venue may be a static image in one embodiment. As depicted in the example of FIG. 10, the head of a member of the audience in a movie theater is transposed onto the body of an actor within a promotional trailer for a movie.
  • This embodiment of course encompasses other examples as well. As just one example, the promotional trailer for a movie may involve the primary actors sitting in a room with a number of secondary actors, known as “extras,” sitting in the background. Some members of the audience within the (first) video stream may be displayed within the (second) video stream of the promotional trailer on the screen within the movie theater, in addition to and/or in lieu of the extras originally present within the promotional trailer. Embodiments of the invention thus include this, and other exemplary scenarios, as well, as encompassed by the claims.

Claims (2)

I claim:
1. A method comprising:
receiving a first video stream of a plurality of people within a venue;
analyzing, by a processor of a computing device, the people within the first video stream;
integrating, by the processor of the computing device, a portion of the first video stream within a second video stream, based on analysis of the people within the first video stream; and,
displaying the second video stream, with the portion of the first stream integrated therein, onto a screen within the venue.
2. The method of claim 1, wherein the portion of the first video stream comprises at least a part of one person within the first video stream.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/093,210 US20140085450A1 (en) 2005-08-06 2013-11-29 Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US70574605P 2005-08-06 2005-08-06
US11/460,981 US20070030343A1 (en) 2005-08-06 2006-07-29 Interactive, video-based content for theaters
US12/706,206 US8625845B2 (en) 2005-08-06 2010-02-16 Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream
US14/093,210 US20140085450A1 (en) 2005-08-06 2013-11-29 Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/706,206 Division US8625845B2 (en) 2005-08-06 2010-02-16 Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream

Publications (1)

Publication Number Publication Date
US20140085450A1 true US20140085450A1 (en) 2014-03-27

Family

ID=42231180

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/706,206 Active 2028-08-03 US8625845B2 (en) 2005-08-06 2010-02-16 Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream
US14/093,210 Abandoned US20140085450A1 (en) 2005-08-06 2013-11-29 Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/706,206 Active 2028-08-03 US8625845B2 (en) 2005-08-06 2010-02-16 Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream

Country Status (1)

Country Link
US (2) US8625845B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9510148B2 (en) 2009-03-03 2016-11-29 Mobilitie, Llc System and method for wireless communication to permit audience participation
US20130229578A1 (en) * 2012-03-05 2013-09-05 Russell Benton Myers On-screen Additions to Movie Presentations
US10078917B1 (en) * 2015-06-26 2018-09-18 Lucasfilm Entertainment Company Ltd. Augmented reality simulation
US10484824B1 (en) 2015-06-26 2019-11-19 Lucasfilm Entertainment Company Ltd. Content presentation and layering across multiple devices
US10997630B2 (en) * 2018-12-20 2021-05-04 Rovi Guides, Inc. Systems and methods for inserting contextual advertisements into a virtual environment
CN110446092B (en) * 2019-07-25 2023-06-20 北京拉近众博科技有限公司 Virtual auditorium generation method, system, device and medium for sports game

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6400374B2 (en) * 1996-09-18 2002-06-04 Eyematic Interfaces, Inc. Video superposition system and method
US5959717A (en) * 1997-12-12 1999-09-28 Chaum; Jerry Motion picture copy prevention, monitoring, and interactivity system
US6095650A (en) * 1998-09-22 2000-08-01 Virtual Visual Devices, Llc Interactive eyewear selection system
EP1136869A1 (en) * 2000-03-17 2001-09-26 Kabushiki Kaisha TOPCON Eyeglass frame selecting system
JP4432246B2 (en) * 2000-09-29 2010-03-17 ソニー株式会社 Audience status determination device, playback output control system, audience status determination method, playback output control method, recording medium
US7259747B2 (en) 2001-06-05 2007-08-21 Reactrix Systems, Inc. Interactive video display system
US8035612B2 (en) 2002-05-28 2011-10-11 Intellectual Ventures Holding 67 Llc Self-contained interactive video display system
US7027101B1 (en) * 2002-05-13 2006-04-11 Microsoft Corporation Selectively overlaying a user interface atop a video signal
US7710391B2 (en) 2002-05-28 2010-05-04 Matthew Bell Processing an image utilizing a spatially varying pattern
US7348963B2 (en) 2002-05-28 2008-03-25 Reactrix Systems, Inc. Interactive video display system
US7227976B1 (en) * 2002-07-08 2007-06-05 Videomining Corporation Method and system for real-time facial image enhancement
AU2003301043A1 (en) 2002-12-13 2004-07-09 Reactrix Systems Interactive directed light/sound system
US20040194128A1 (en) * 2003-03-28 2004-09-30 Eastman Kodak Company Method for providing digital cinema content based upon audience metrics
EP1621010A1 (en) 2003-05-02 2006-02-01 Allan Robert Staker Interactive system and method for video compositing
US20050011964A1 (en) 2003-07-16 2005-01-20 Greenlee Garrett M. System and method for controlling the temperature of an open-air area
WO2005041579A2 (en) 2003-10-24 2005-05-06 Reactrix Systems, Inc. Method and system for processing captured image information in an interactive video display system
EP1943841A2 (en) 2004-11-04 2008-07-16 Megamedia, LLC Apparatus and methods for encoding data for video compositing
US8081822B1 (en) 2005-05-31 2011-12-20 Intellectual Ventures Holding 67 Llc System and method for sensing a feature of an object in an interactive video display
US8098277B1 (en) 2005-12-02 2012-01-17 Intellectual Ventures Holding 67 Llc Systems and methods for communication between a reactive video system and a mobile communication device
JP5430572B2 (en) 2007-09-14 2014-03-05 インテレクチュアル ベンチャーズ ホールディング 67 エルエルシー Gesture-based user interaction processing
US8159682B2 (en) 2007-11-12 2012-04-17 Intellectual Ventures Holding 67 Llc Lens system
US8135724B2 (en) 2007-11-29 2012-03-13 Sony Corporation Digital media recasting
US20100039500A1 (en) 2008-02-15 2010-02-18 Matthew Bell Self-Contained 3D Vision System Utilizing Stereo Camera and Patterned Illuminator
US8259163B2 (en) 2008-03-07 2012-09-04 Intellectual Ventures Holding 67 Llc Display with built in 3D sensing
US8595218B2 (en) 2008-06-12 2013-11-26 Intellectual Ventures Holding 67 Llc Interactive display management systems and methods
US8824861B2 (en) 2008-07-01 2014-09-02 Yoostar Entertainment Group, Inc. Interactive systems and methods for video compositing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5830065A (en) * 1992-05-22 1998-11-03 Sitrick; David H. User image integration into audiovisual presentation system and methodology
US6466275B1 (en) * 1999-04-16 2002-10-15 Sportvision, Inc. Enhancing a video of an event at a remote location using data acquired at the event
US20050151743A1 (en) * 2000-11-27 2005-07-14 Sitrick David H. Image tracking and substitution system and methodology for audio-visual presentations
US20050207622A1 (en) * 2004-03-16 2005-09-22 Haupt Gordon T Interactive system for recognition analysis of multiple streams of video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9471954B2 (en) 2015-03-16 2016-10-18 International Business Machines Corporation Video sequence assembly
US10334217B2 (en) 2015-03-16 2019-06-25 International Business Machines Corporation Video sequence assembly

Also Published As

Publication number Publication date
US20100142928A1 (en) 2010-06-10
US8625845B2 (en) 2014-01-07

Similar Documents

Publication Publication Date Title
US20140085450A1 (en) Overlaying virtual content onto video stream of people within venue based on analysis of the people within the video stream
US20200344505A1 (en) Scheme for determining the locations and timing of advertisements and other insertions in media
US11918908B2 (en) Overlaying content within live streaming video
US9538049B2 (en) Scheme for determining the locations and timing of advertisements and other insertions in media
US11087135B2 (en) Virtual trading card and augmented reality movie system
ES2358889T3 (en) Post-production visual alterations
US8331760B2 (en) Adaptive video zoom
US8730354B2 (en) Overlay video content on a mobile device
US20120017236A1 (en) Supplemental video content on a mobile device
US20130183021A1 (en) Supplemental content on a mobile device
EP2523192B1 (en) Dynamic replacement of cinematic stage props in program content
US10467809B2 (en) Methods and systems for presenting a video stream within a persistent virtual reality world
Jain et al. Inferring artistic intention in comic art through viewer gaze
US20220020220A1 (en) Apparatus, system, and method of providing a three dimensional virtual local presence
US20210383579A1 (en) Systems and methods for enhancing live audience experience on electronic device
Maynard et al. Unpaid advertising: A case of Wilson the volleyball in Cast Away
US20110141359A1 (en) In-Program Trigger of Video Content
Fortunato The television framing methods of the national basketball association: An agenda‐setting application
US11694230B2 (en) Apparatus, system, and method of providing a three dimensional virtual local presence
NZ575492A (en) Active advertising method

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION