WO1993004748A1 - Video game with interactive audiovisual dialogue - Google Patents


Info

Publication number
WO1993004748A1
Authority
WO
WIPO (PCT)
Prior art keywords
character
talking
voice
characters
verbal
Prior art date
Application number
PCT/US1992/006030
Other languages
French (fr)
Inventor
Robert Macandrew Best
Original Assignee
Best Robert M
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Best Robert M filed Critical Best Robert M
Publication of WO1993004748A1 publication Critical patent/WO1993004748A1/en

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G09B7/04Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying a further explanation
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45Controlling the progress of the video game
    • A63F13/47Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/822Strategy games; Role-playing games
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/6063Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6072Methods for processing data by generating or executing the game program for sound processing of an input signal, e.g. pitch and rhythm extraction, voice recognition
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/63Methods for processing data by generating or executing the game program for controlling the execution of the game in time
    • A63F2300/632Methods for processing data by generating or executing the game program for controlling the execution of the game in time by branching, e.g. choosing one of several possible story developments at a given point in time
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This invention consists of video game methods of providing simulated voice dialog between a human game player (17) and two or more talking animated characters (12, 22) in different scenes (11, 19) which alternate on a video screen to give an illusion that the actions in both scenes are happening simultaneously, even though separated in space or time. The characters talk with each other through a two-way radio (31) or telephone or through an opening in a wall such as a window or door. Each player holds a controller (28) that displays two or more variable phrases (15, 20) for a human player to say to a character or for a character to say (14) or actions for a character to do. The player (17) responds by pressing a button (16) next to a selected phrase. An animated character then acts or vocally responds (18) as if it had been spoken to by the human player or by one of the characters. Human game players are thus given an illusion of active participation in dialog with these characters.

Description

VIDEO GAME WITH INTERACTIVE AUDIOVISUAL DIALOGUE
Technical Field
This invention relates to video games, animated cartoons, and picture/sound synchronization.
Background Art
Video games have some of the characteristics of motion picture film animation. In film terminology the editing together of several shots in a sequence to have a desired effect is called montage. There are several kinds of montage. Parallel montage means alternately cross-cutting between two shots or scenes to provide an illusion of simultaneity. For example, in a chase scene the montage alternates between shots of pursuer and pursued. This illusion of simultaneity is important if characters in different scenes are talking with each other on a telephone, two-way radio, or through a door, or if one character is distantly influenced by a character in another scene, either by hearing what the other character says or by watching what the other character does in the other scene. For example, a character in one scene may be watching another character through a window or on a television monitor or the like and thus be influenced by what the other character says or does.
In the video game art different scenes often alternate. For example, when a character goes through a door, a new scene may appear on the screen. It is also well known for video characters to talk to each other. It is well known for human players to input choices using any of a variety of input devices such as push buttons, rotatable knobs, pressure sensitive membrane, proximity sensitive pads or screen overlay, light pen, light sensitive gun, joy stick, mouse, track ball, moving a cursor or crosshairs or scrolling through highlighted options, icons, speech recognition, etc.
In the prior art, each choice by the human can be immediately followed by a synthesized voice or digitized voice recording that speaks the words selected by the human player, so the human will quickly adjust to the fact that the spoken words he hears for his side of the dialog are initiated by his fingers rather than his vocal cords.
The characters in prior-art video games and computer games, especially role-playing games, are of two types: player-controlled characters (or player characters) and non-player characters. A player-controlled character is a human player's animated counterpart and does and says what the human player chooses to have him do and say. Non-player characters are not directly controlled by a human player, but can be indirectly influenced by a human player, either by responding to an action selected by the human player or by responding to what a player- controlled character does or says.
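The distinction above between player-controlled and non-player characters can be sketched in code. This is an illustrative model only, not an implementation from the patent; all class and attribute names are hypothetical.

```python
# Sketch of the two character types described above: a player-controlled
# character says what the human selects, while a non-player character is
# only indirectly influenced, reacting to what it "hears" another
# character say. All names here are illustrative.

class Character:
    def __init__(self, name, player_controlled):
        self.name = name
        self.player_controlled = player_controlled
        self.last_line = None

    def say(self, line):
        # Record and return the spoken line, tagged with the speaker.
        self.last_line = line
        return f"{self.name}: {line}"


class NonPlayerCharacter(Character):
    def __init__(self, name, reactions):
        super().__init__(name, player_controlled=False)
        self.reactions = reactions  # maps heard phrases to canned responses

    def react_to(self, heard):
        # Respond indirectly to what a player-controlled character said.
        return self.say(self.reactions.get(heard, "..."))


hero = Character("Hero", player_controlled=True)
guard = NonPlayerCharacter("Guard", {"Open the gate.": "Who goes there?"})
spoken = hero.say("Open the gate.")       # the human player's selection
reaction = guard.react_to(hero.last_line)  # NPC responds to what it heard
```

Here the non-player character never receives input directly from the human; it only responds to the player character's last line, mirroring the indirect influence described above.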
Drawing Figures
FIG. 1 illustrates two scenes of an animated cartoon talking game with two characters in one scene and one character in a second scene talking with each other through time using a radio-like device. One or more human players (hands shown pushing buttons) select some of what the characters say.
FIG. 2 illustrates the scene following FIG. 1 in which one of the animated characters responds to danger by giving a verbal command to an off-screen character, a command actually selected by a human player (hand shown pushing button).
FIG. 3 illustrates two scenes in which an animated character acts in response to the verbal command from FIG. 2 followed by a scene illustrating the effect of the character's action, that is a surprised dinosaur.
FIG. 4 illustrates dialog between three characters in various scenes.
Description of Preferred Embodiments
This invention is a video game method that simulates voice dialog between a human game player and two or more animated characters that appear on a video screen in different scenes that are set at different locations. For example, one character may be in a field with wild animals, while a second character is inside a building. When an animated character in one scene talks to a second character in a different scene who then talks back, the animated picture alternates between the two scenes to give an illusion that the actions in both scenes are happening simultaneously. A wall or equivalent separation device may be shown on the video screen to emphasize this scene separation. The characters, being separated in space in different scenes, are not able in this example to talk with each other face to face. Instead they are shown talking with each other through a voice communication apparatus such as a telephone or two-way radio or through an opening in a wall such as a window or door or tube or through an intermediary person such as a human game player. Human players may participate in dialog between scenes either by directly controlling the words a character says or indirectly by playing the role of a third off-screen character who talks with the on-screen characters. Each player has a hand-held controller that displays two or more phrases or actions. A player responds by pressing a button next to a selected phrase or action or by moving a cursor. An animated character then acts or verbally responds as if it had been spoken to by the human player or by one of the other characters. Speech recognition is not required. Human game players are thus given an illusion of active participation in dialog and adventures with these characters.
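The loop just described (display phrases, read a button press, voice the selection, then cut to the responding character's scene) can be sketched as follows. This is a minimal illustration under assumed data; the phrase lists echo the FIG. 1 example, and the function names are not from the patent. Speech synthesis and animation are stubbed out.

```python
# Illustrative sketch of the simulated-dialog loop: each turn shows a
# menu of alternative phrases on the controller, reads a button press,
# "voices" the selected phrase, and cuts to the other scene for the
# next turn, simulating parallel montage. All contents are assumed.

DIALOG = [
    {"scene": "dinosaur field", "speaker": "character 12",
     "choices": ["Beam us back.", "Send help.", "We'll hide."]},
    {"scene": "time machine", "speaker": "character 22",
     "choices": ["I can move you sideways.", "Hold on.", "Stand by."]},
]

def play_turn(turn, button_index):
    """Voice the phrase selected by the pressed button in this turn's
    scene, and return a record of what happened."""
    phrase = turn["choices"][button_index]
    return {"scene": turn["scene"],
            "speaker": turn["speaker"],
            "said": phrase}

def run_dialog(button_presses):
    # The scene changes on every turn, so the displayed picture
    # alternates between the two locations as the dialog proceeds.
    return [play_turn(turn, press)
            for turn, press in zip(DIALOG, button_presses)]

transcript = run_dialog([0, 0])  # player presses the first button twice
```

Note that no speech recognition is involved: the player's side of the conversation is entirely button-driven, as the description emphasizes.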
Referring to FIG. 1, in one embodiment of this invention a video game system displays on a TV or video screen an animated picture sequence to one or more human game players. Each human player holds a hand-held controller 28 having about three push buttons 16 next to a liquid-crystal display. Every few seconds alternative words, phrases, sentences or other verbal expressions 15 are displayed on controller 28 to give the human player a selection of things to "say" to animated characters or for the characters to say to each other or to human players. If a player's controller is lacking a display, equivalent verbal expressions may be displayed on a TV or video screen and be manually selected by the human player pressing one or more push-buttons. Each time an animated character talks, his or her or its mouth should make appropriate speaking movements that are lip-synchronized with corresponding voice sounds.
As scene 11 begins, the video game system displays two animated characters 12 and 13 and nearby dinosaurs. Character 12 is talking into a hand-held radio-like device 31 to a third off-screen character 22 in another scene 19 to be subsequently displayed. Character 12 speaks the words "The dinosaurs have seen us." into radio-like device 31. Controller 28 then displays to the human player three sentences 15 of alternative words that character 12 can say next. Human player's hand 17 is shown pressing button 16 to select the words "Beam us back." The game system then generates the selected words in the voice sounds 14 of character 12 (indicated as a cartoon voice balloon for purposes of illustration). The game system next displays scene 19 in which character 22 is an operator of a time-travel machine 32 which can communicate by voice and through time with character 12 in the dinosaur scene. Character 22 can watch characters 12 and 13 on video monitors or the like. Character 22 responds to the words "Beam us back" with voice sounds 18 in the distinctive voice of character 22. Controller 28 then displays three new alternative sentences 20 that character 22 can say next. The human player presses the button to select the words "I can move you sideways." The game system then generates voice sounds 21 of character 22 saying the selected words.
Referring to FIG. 2, the display changes back to the dinosaur scene 11 in which characters 12 and 13 are in acute danger and character 13 is ready to speak. The controller then displays three alternative sentences 23 she can say. The human player selects the words "Move us quick." The game system then generates voice sounds 24 of character 13. Referring to FIG. 3, the game system next displays time-machine scene 19 again in which character 22 responds to the FIG. 2 request with voice sounds 25. The game system then changes back to scene 11 in which the dinosaur is shown with a surprised expression on its face because characters 12 and 13 have just disappeared as a result of the action by character 22 in the other scene 19.
Referring to FIG. 4 which illustrates a different story line, the game system displays a sequence of three scenes in which a human player 17 acts as an intermediary between character 22 in the time-machine scene 19 and character 12 in a tent scene 34. In the first scene 11 of this sequence, the characters sense danger and character 13 requests help from character 22 in time-machine scene 19. The controller 28 then displays three alternative sentences for character 12 to say in tent scene 34. The human player 17 selects "I don't see it." which character 12 then speaks as voice balloon 30. The game system then displays dinosaur scene 11 again where the danger has become acute for character 13 who speaks the words in balloon 33.
In these examples, characters 12, 13 and 22 are player-controlled characters that the human player or players control. One or two or three human players may play the roles of the three animated characters. When a human player presses a button 16 of controller 28 the game system may generate voice sounds speaking the selected sentence or it may perform an action specified on controller 28. A button 16 selects a simulated verbal response to the previous words spoken by an animated character and also selects the new dialog sequence including the new alternative sentences that correspond to the selected simulated verbal response that was shown on controller 28. The selected dialog sequence that results includes the face and voice of the animated character speaking words which are responsive to the human player's selected verbal response.
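The dual role of a button press described above — it both selects the simulated verbal response and chooses the next dialog sequence with its own alternative sentences — amounts to traversing a branching dialog tree. The sketch below illustrates that structure; the tree contents and identifiers are hypothetical, loosely echoing the FIG. 1 example.

```python
# Sketch of branching dialog selection: each node holds a spoken prompt
# and the alternative responses shown on the controller; pressing a
# button yields the voiced response AND the id of the next dialog
# sequence. The tree contents are illustrative assumptions.

DIALOG_TREE = {
    "start": {
        "prompt": "The dinosaurs have seen us.",
        "choices": {
            0: ("Beam us back.", "operator_reply"),
            1: ("We'll hide in the cave.", "cave_branch"),
        },
    },
    "operator_reply": {
        "prompt": "I can move you sideways.",
        "choices": {},
    },
    "cave_branch": {
        "prompt": "Stay quiet in there.",
        "choices": {},
    },
}

def press_button(node_id, button):
    """Return the voiced response for the pressed button and the id of
    the next dialog sequence it selects."""
    phrase, next_id = DIALOG_TREE[node_id]["choices"][button]
    return phrase, next_id

phrase, next_id = press_button("start", 0)
# The responding character's next prompt comes from the chosen branch:
reply = DIALOG_TREE[next_id]["prompt"]
```

Each branch thus carries the responsive face-and-voice sequence for the choice that led to it, which is how a single button press steers both sides of the conversation.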
If a voice communication apparatus is shown, a telephone or two-way radio or other voice communication apparatus may be substituted for radio-like device 31. A communication apparatus need not be shown. Various conversations between characters in two or more different scenes may also occur by characters speaking to each other through openings in a door, wall, roof, floor, car body or equivalent. The picture sequence need not explicitly show a wall or other scene separation device. Each animated character can be an animated cartoon, digitized live action, analog live action, a sprite or the like, or a composite thereof, and can be player controlled or not player controlled. The word "scene" has been used herein to mean a sequence of video frames showing substantially the same location. The details and background of a scene may change as in scrolling, panning, scaling, rotation and zoom while remaining the same scene.
The time-travel story is given here only by way of example and may be replaced by other game stories that use parallel montage scenes. For example, a chase scene may parallel a phone-the-police scene in which a human game player talks with the good guy who is driving the car chasing the bad guy and also talks with another good guy left behind who is phoning the police from a phone booth. The police dispatcher scene may then alternate with the phone booth scene with the human player selecting words to say in this dialog. The police dispatcher scene may then alternate with the patrol car scene with a human player again selecting some of the dialog. By playing the role of an off-screen character the human player may act as an intermediary between the two parallel scenes. Another example is an accident scene in parallel with a going-for-help scene. These scenes are separated in space but are parallel in time. Again a human game player may act as an intermediary or as two player-controlled characters that have dialog in their respective scenes with the victim in the accident scene and with the character who is going for help. Two-way or three-way dialog may be combined with parallel montage in any combination. For example, character 12 may speak to character 13 who may speak to character 22 in another scene who may speak to character 12 or 13 in the first scene, with a human playing the role of any of the three characters by manually selecting words to say intermingled with selecting actions for a character to do. Likewise, character 13 may speak to character 22 in another scene who may speak to character 12 in the first scene who may speak to character 13. A human may also play the role of an off-screen character who speaks to character 22 who speaks to character 12 in another scene who speaks to character 13. Or a human player may speak to character 12 who speaks to character 13 who speaks to character 22 in another scene. 
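The parallel-montage scheduling that underlies all of these story examples — two threads separated in space but parallel in time, cross-cut shot by shot so both seem simultaneous — can be shown in a few lines. The shot lists below are illustrative stand-ins for the chase and phone-booth scenes.

```python
# Sketch of parallel montage: interleave the shots of two scenes that
# are separated in space but parallel in time, alternating A, B, A, B,
# to give the illusion of simultaneity. Shot contents are illustrative.

def cross_cut(scene_a, scene_b):
    """Interleave shots from two parallel scenes into one montage."""
    montage = []
    for shot_a, shot_b in zip(scene_a, scene_b):
        montage.extend([shot_a, shot_b])
    return montage

chase = ["car swerves", "bad guy looks back"]
phone_booth = ["dials police", "describes the car"]
sequence = cross_cut(chase, phone_booth)
```

A human player acting as intermediary would select dialog in both threads; the montage order above is what makes the two threads read as one simultaneous event.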
Human players may input choices on controller 28 using any of a variety of input devices such as push buttons, rotatable knobs, pressure sensitive membrane, proximity sensitive pads or screen overlay, light pen, light sensitive gun, joy stick, mouse, track ball, moving a cursor or crosshairs or scrolling through highlighted options, icons, speech recognition, etc.
Although I have described the preferred embodiments of my invention with a degree of particularity, it is understood that the present disclosure has been made only by way of example and that equivalent steps and components may be substituted and design details changed without departing from the spirit and scope of my invention.

Claims

1. A video game method of simulating voice dialog between a human player of the game and at least two talking animated characters in substantially different scenes and having different voices, comprising the steps of: generating an animated picture of a first scene showing a first character; generating first voice sounds in the voice of said first character; displaying to a human player a plurality of alternative verbal expressions responding to said first voice sounds; receiving from said human player an indication of which verbal expression is selected by the player from said plurality of verbal expressions; generating an animated picture of a second talking character in a second scene substantially different from said first scene; and generating second voice sounds in the voice of said second talking character expressing or responding to said selected verbal expression.
2. The method of claim 1 wherein said plurality of alternative verbal expressions suggest alternative actions for one of said characters to perform.
3. The method of claim 1 wherein said first character is shown in a dangerous situation and said second character responds to said first voice sounds by performing an action that helps the first character to move out of the dangerous situation.
4. The method of claim 1 wherein said plurality of alternative verbal expressions suggest alternative ways that said second character can help said first character.
5. The method of claim 4 wherein said first voice sounds of said first character requests help from said second character.
6. The method of claim 4 wherein said second voice sounds of said second character offers help to said first character.
7. The method of claim 1 wherein said animated picture shows said first and second characters talking to a third character whose third voice sounds are responsive to the voice sounds of the first and second characters.
8. The method of claim 1 wherein said first and second characters are shown talking to each other on a telephone or other communication device.
9. The method of claim 8 wherein said first and second characters are shown talking to each other through time.
10. A video game method of simulating voice dialog between a human player of the game and at least two talking animated characters in substantially different scenes and having different voices, comprising the steps of: generating an animated picture of a first scene showing a first character talking to a second character shown later in a second substantially different scene; displaying to a human player a first plurality of alternative verbal expressions; receiving from said human player an indication of which first verbal expression is selected by the player from said first plurality of verbal expressions; generating first voice sounds in the voice of said first talking character expressing or responding to said first selected verbal expression; generating an animated picture of said second character in said second scene; displaying to a human player a second plurality of alternative verbal expressions responding to said first voice sounds; receiving from a human player an indication of which second verbal expression is selected by the player from said second plurality of verbal expressions; and generating second voice sounds in the voice of said second talking character expressing or responding to said second selected verbal expression.
AMENDED CLAIMS
[received by the International Bureau on 23 November 1992 (23.11.92); original claims 1-10 replaced by amended claims 1-10 (3 pages)]
1. A method of simulating voice conversations between at least two talking animated characters and a human, comprising the steps of:
generating animated pictures of a first talking character in a first scene;
generating first voice sounds in the voice of said first talking character;
displaying a plurality of alternative verbal expressions responding to said first voice sounds;
receiving an input signal selecting a verbal expression from said plurality of alternative verbal expressions;
generating animated pictures of a second talking character in a second scene substantially different from said first scene; and
generating second voice sounds in the voice of said second talking character expressing or responding to said selected verbal expression.
2. The method of claim 1, wherein said plurality of alternative verbal expressions suggest alternative actions for one of said characters to perform.
3. The method of claim 1, wherein said first character is shown in a dangerous situation and said second character responds to said first voice sounds by performing an action that helps the first character to move out of the dangerous situation.
4. The method of claim 1, wherein said plurality of alternative verbal expressions suggest alternative ways that said second character can help said first character.
5. The method of claim 4, wherein said first voice sounds of said first character requests help from said second character.
6. The method of claim 4, wherein said second voice sounds of said second character offers help to said first character.
7. The method of claim 1, wherein said animated pictures show said first and second characters talking to a third character whose third voice sounds are responsive to the voice sounds of the first and second characters.
8. The method of claim 1, wherein said first and second characters are shown talking to each other on a telephone or other communication device.
9. The method of claim 8, wherein said first and second characters are shown talking to each other through time.
10. A method of simulating voice conversations between at least two talking animated characters and a human, comprising the steps of:
generating animated pictures of a first scene showing a first character talking to a second character shown later in a second substantially different scene;
displaying a first plurality of alternative verbal expressions;
receiving an input signal selecting a first verbal expression from said first plurality of alternative verbal expressions;
generating first voice sounds in the voice of said first talking character expressing or responding to said selected first verbal expression;
generating animated pictures of said second talking character in said second scene;
displaying a second plurality of alternative verbal expressions responding to said first voice sounds;
receiving an input signal selecting a second verbal expression from said second plurality of alternative verbal expressions; and
generating second voice sounds in the voice of said second talking character expressing or responding to said selected second verbal expression.
PCT/US1992/006030 1991-09-09 1992-07-16 Video game with interactive audiovisual dialogue WO1993004748A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US75635691A 1991-09-09 1991-09-09
US756,356 1991-09-09

Publications (1)

Publication Number Publication Date
WO1993004748A1 true WO1993004748A1 (en) 1993-03-18

Family

ID=25043116

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1992/006030 WO1993004748A1 (en) 1991-09-09 1992-07-16 Video game with interactive audiovisual dialogue

Country Status (3)

Country Link
JP (1) JPH05111579A (en)
AU (1) AU2393092A (en)
WO (1) WO1993004748A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0016314A1 (en) * 1979-02-05 1980-10-01 Best, Robert MacAndrew Method and apparatus for voice dialogue between a video picture and a human
US4846693A (en) * 1987-01-08 1989-07-11 Smith Engineering Video based instructional and entertainment system using animated figure
WO1992008531A1 (en) * 1990-11-14 1992-05-29 Best Robert M Talking video games


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8317611B2 (en) * 1992-05-22 2012-11-27 Bassilic Technologies Llc Image integration, mapping and linking system and methodology
US8905843B2 (en) 1992-05-22 2014-12-09 Bassilic Technologies Llc Image integration, mapping and linking system and methodology
EP0747881A2 (en) * 1995-06-05 1996-12-11 AT&T IPM Corp. System and method for voice controlled video screen display
EP0747881A3 (en) * 1995-06-05 1998-03-04 AT&T IPM Corp. System and method for voice controlled video screen display
US5890123A (en) * 1995-06-05 1999-03-30 Lucent Technologies, Inc. System and method for voice controlled video screen display
US9135954B2 (en) 2000-11-27 2015-09-15 Bassilic Technologies Llc Image tracking and substitution system and methodology for audio-visual presentations
CN111790153A (en) * 2019-10-18 2020-10-20 厦门雅基软件有限公司 Object display method and device, electronic equipment and computer-readable storage medium

Also Published As

Publication number Publication date
AU2393092A (en) 1993-04-05
JPH05111579A (en) 1993-05-07

Similar Documents

Publication Publication Date Title
US5393070A (en) Talking video games with parallel montage
US5393072A (en) Talking video games with vocal conflict
US5358259A (en) Talking video games
US5393073A (en) Talking video games
US5393071A (en) Talking video games with cooperative action
US4333152A (en) TV Movies that talk back
US4569026A (en) TV Movies that talk back
US4445187A (en) Video games with voice dialog
EP1262955B1 (en) System and method for menu-driven voice contol of characters in a game environment
Tekin et al. Ways of spectating: Unravelling spectator participation in Kinect play
EP0659018A2 (en) An animated electronic meeting place
US20090191519A1 (en) Online and computer-based interactive immersive system for language training, entertainment and social networking
WO1993004748A1 (en) Video game with interactive audiovisual dialogue
AU2021221475A1 (en) System and method for performance in a virtual reality environment
US6982716B2 (en) User interface for interactive video productions
Nakatsu et al. Interactive movie system with multi-person participation and anytime interaction capabilities
JP2021087128A (en) Class system, viewing terminal, information processing method, and program
JPH05228260A (en) Method for dialogic video game by joint operation
KR200179150Y1 (en) System for chatting on a network
US20220040581A1 (en) Communication with in-game characters
US20220360827A1 (en) Content distribution system, content distribution method, and content distribution program
JPH05293252A (en) Dialog video game method utilizing sound contrast
US20240009559A1 (en) Communication with in-game characters
JP2001296877A (en) Program executing device which conducts voice conversation and its program
JPH09139928A (en) Multi-spot video conference system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR CA CH CS DE DK ES FI GB HU KP KR LK LU MG MN MW NL NO PL RO RU SD SE

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU MC NL SE BF BJ CF CG CI CM GA GN ML MR SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WA Withdrawal of international application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA