WO1998046323A1 - Computer games having optically acquired images which are combined with computer generated graphics and images - Google Patents


Info

Publication number: WO1998046323A1
Authority: WO (WIPO PCT)
Prior art keywords: image, computer, images, game, scene
Application number: PCT/US1997/006234
Other languages: French (fr)
Inventors: John Ellenby, Thomas Ellenby, Peter Ellenby
Original assignee: Criticom Corporation
Application filed by Criticom Corporation
Priority to PCT/US1997/006234 (WO1998046323A1) and AU28020/97A (AU2802097A)
Publication of WO1998046323A1


Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/213 — Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F13/65 — Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655 — Generating or modifying game content automatically from real world data by importing photos, e.g. of the player
    • A63F2300/1087 — Features characterized by input arrangements for converting player-generated signals into game device control signals, comprising photodetecting means, e.g. a camera
    • A63F2300/695 — Involving elements of the real world in the game world: imported photos, e.g. of the player
    • A63F2300/8082 — Features specially adapted for executing a specific type of game: virtual reality

Description of Preferred Embodiments

In each of the preferred embodiments of the invention there is an apparatus for, and a method of, providing a computer game which responds to and interacts with the immediate environment. It will be appreciated that each of the embodiments described includes both an apparatus and a method, and that the apparatus and method of one preferred embodiment may differ from those of another embodiment.
Figure 1 shows an image of a clouded sky. A user of the invention could address such a scene by pointing the camera up toward the clouds. We call the image of the sky and clouds "an image of a real scene". By "real scene" we simply mean some scene as it may appear to a person looking about one's environment. Scenes formed by artists on various media, such as paintings or cartoons, are not considered "real scenes", as objects within those scenes may be purely fictional. Clouds are real objects which can be found in one's environment, and the image of Figure 1 is therefore an "image of a real scene".
The image has a frame 1 which defines the extent of the image field. Certain areas of the image represent clouds, for example area 2, and certain other areas represent clear sky, for example area 4. An apparatus of the invention having a camera addressing the real scene containing clouds is useful for forming an electronic image thereof. The image is comprised of picture elements, or "pixels", of various intensity levels. Dark regions in the clouds' shadows 5 may appear as low intensity areas. The image artifact that appears as small squares has been exaggerated in this image for discussion; the small squares 3, or pixels, are unit image areas of uniform intensity.
We can now devise a game scheme which involves the real image. This scheme may include actions to be taken if some condition is met. An example of a logic step involving the condition may be: if the pixel is dark, then the condition is met; if the pixel is light, then the condition is not met. Real images of clouds, however, consist of continuous tones, and the meanings of "dark" and "light" become ambiguous. We therefore apply a processing step to the optically acquired continuous-tone image: in a computer routine, an intensity threshold is applied at each pixel, and each pixel is then represented with either a "1" (dark) or a "0" (light).
Figure 2, having an extent 21 similar to Figure 1, has a black region 23 or a white region 22 for every pixel. The threshold level can be changed as desired to increase either the dark areas or the light areas; Figure 3 shows the image having been processed with a higher threshold, resulting in more white area 32 within the image field. At every point in the image field the condition is either met or not: at position 33 the condition is not met; at position 34 the condition is met. If we superimpose the optically acquired image with a computer generated image having certain objects therein to form a composite image, each of the objects will have certain positions associated with it. We can then ask, by way of a computer program, whether a computer generated object has a position which is coincident with a pixel of the real image which meets the condition. The computer generated object can then be made to respond to the result of the condition test; for example, if the condition is met then the computer generated object is removed from the composite image.
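By way of illustration only, the thresholding and condition test just described might be sketched as follows in Python with NumPy; the function names, the threshold value, and the object fields are assumptions for illustration, not part of the disclosure:

    import numpy as np

    def binarize(gray_image, threshold=128):
        # Reduce a continuous-tone image to the 1 (dark) / 0 (light) array
        # of Figures 2 and 3; whether raising the threshold grows the dark
        # or the light area depends on the sense of the comparison chosen.
        return (gray_image < threshold).astype(np.uint8)

    def condition_met(binary_image, x, y):
        # The game-scheme condition test: is the real-scene pixel dark?
        return binary_image[y, x] == 1

    def cull_objects(objects, binary_image):
        # Remove any computer generated object whose position coincides
        # with a pixel that meets the condition.
        return [ob for ob in objects
                if not condition_met(binary_image, ob.x, ob.y)]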
Figure 4 shows an image field 44 containing computer generated imagery including: an object 41 which represents a stork; an object which represents a sack having a baby therein; and an object 43 which is a combination of the two. The game scheme assigns attributes to the various objects; the array of possible attributes is large and it is not possible to define them all here. Sacks which are not attached to storks "fall", or advance from the top of the image field toward the bottom, and the computer "launches" sacks into the image field near the top at various controlled intervals. A stork controlled by a user can be manipulated to fly towards a sack. We make a game rule where the stork is allowed to fly in light areas of the real scene but not in dark areas: for our game, storks must fly in the clouds but avoid clear skies.
Various game schemes may have various rules. Figure 5 illustrates a game scheme as it shows how the computer generated image may behave within the computer logic domain. A flight path 53 of the stork 52 is shown in the figure. The stork must remain within the "clear" areas 56 of the image field 57 until it reaches an intercept point 54 where the sack 51 can be captured. If the stork "hits" or flies into a dark area 55 (where the condition is met), then the stork suffers some consequence (action): it disappears, dies, blows up, et cetera. The object of the game is for the player, represented by the stork character, to navigate through the clouded sky and capture the sack before it hits the bottom of the image field, where presumably something disastrous happens to the baby. If the stork captures the sack 43, the baby is "saved".
In this way, the real scene has been processed according to design rules consistent with a game scheme such that computer generated objects are responsive to certain features (intensity patterns) of the real scene. The image of Figure 5 is not one suitable for presentation to a user, but one which reminds us that "clear" areas and "prohibited" areas exist in the computer's logic domain. The image presented to a user appears as the image in Figure 6.
There, an image field 62 contains a composite image having an optically acquired image of a real scene, formed by an electronic camera, combined with computer generated objects such as storks and sacks 61. The composite image is a "live" image in that as the real clouds change, the image changes accordingly. This is easily accomplished with simple video type cameras such as a common camcorder. Note that game schemes should be developed in anticipation of real scenes which change. The computer should process the images in "real time" to effect the condition test of Figure 3, determine the appropriate response of the computer generated image, and combine the two images to form a composite image to be displayed. If the device is pointed in a different direction, different cloud patterns are imaged and a different pattern of black and white regions results after the processing step. This has strong implications regarding the computer generated objects and how they might respond to movement of the device. Although the above example is spectacular in that it incorporates into a game scheme images of real scenes from the user's environment, i.e. the sky and clouds about the user, a second and more remarkable feature can be understood by considering the following description. As a sack approaches the bottom of an image field, a user can "buy time" by panning the device downward.
The particular scene being addressed by the device changes in response to the direction in which the user points the device. In the image field, the sack would appear to rise back toward the top, because the sack references its fall rate to a location (pixel) in the real scene. Regardless of the up and down pointing motions of the device, the sack falls at a constant rate with respect to a point in the real scene. This offers more opportunity for the stork to capture and save the sack. The panning does not go without limits: eventually the user reaches the limits of the sky. When the device is pointing horizontally and the horizon comes within the field of view of the camera, the baby necessarily ends the journey if it is not captured before encountering the horizon.
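A minimal sketch of referencing the fall to the real scene rather than to the screen follows; the attitude-derived camera elevation input and all names are assumptions for illustration:

    class Sack:
        def __init__(self, scene_y, fall_rate):
            self.scene_y = scene_y       # height anchored to the real scene
            self.fall_rate = fall_rate   # constant with respect to the scene

        def update(self):
            self.scene_y -= self.fall_rate

        def screen_row(self, camera_elevation_px, frame_height):
            # Row 0 is the top of the image. Panning the device down
            # lowers camera_elevation_px, which moves this scene point
            # toward the top of the image: the sack appears to rise.
            return frame_height // 2 - (self.scene_y - camera_elevation_px)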
Alternatively, the stork can be made to always appear in the center of the composite image, similar to the "boresighted" crosshairs seen in the image field of simple cameras. To fly the stork, the user points the device in various directions so that the center of the image field (always containing the stork) remains in light image areas. The sack would then track across the composite image field as left and right panning motions provide. Since sacks can be made to appear in various sizes, we can arrange them to appear to a user to be at various distances: sacks appearing as large objects represent sacks which are nearer to the user than small sacks. Since objects in real life appear to fall faster when they are near than when they are far, we can make a game having quickly falling near sacks and slowly falling distant sacks. By varying the number of sacks and their fall rates, the game can be made to accommodate various levels of skill.
The images of Figures 2 and 3 are optically acquired images of real scenes having been processed into some desirable format according to a scheme set forth by a game designer. A great plurality of games is possible, each having its own objectives, and the number of possible ways to process an optically acquired image is limitless; it is not practical to attempt to describe them all here. However, for illustration, a simple process of "binarization" or "intensity thresholding" has been chosen to show how optically acquired images can be processed and manipulated to cooperate with computer generated images controlled by a user. The foregoing examples are very specific, and are used only to illustrate how certain features of the invention interact with each other and with the user. The invention does not concern a flying stork game, but a game having computer generated images which interact with optically acquired images of real scenes from a user's environment, which in turn interact with user actions including input and commands. This first embodiment was specifically designed to illustrate two important features of the invention: firstly, that a computer game can be designed to be responsive to optically acquired images of real scenes from the user's environment; and secondly, that the pointing direction of the device dictates which real scene is being addressed, and therefore the game can be responsive to dynamic, user controlled, real scenes about the user's environment.
Within the same scheme, many other games are possible. A computer generated target jet plane can fly about the sky in random patterns while a user controlled jet tries to shoot at the target plane. The clouds could conceal the target jet and user jet from view in the composite image at various cloud locations; this compares to the previous example, where the clouds might disqualify a player by causing his stork to "die". The interaction of computer generated imagery and optically acquired imagery can be embodied in many ways. In a second example, the computer contemplates the image of a real scene and makes some determinations regarding features of the scene.
Figure 7 shows a line drawing of common objects which may be found in a real scene. Trees 73 along a sidewalk next to a multilane roadway 74, next to a row of buildings 72, make up the real scene and the image field 71. A camera pointing at a real scene containing the objects described can acquire images which may be analyzed for certain features. Over some period of time, the pixels in the scene may change due to activity or motion in the real world; for example, if the roadway contains traffic, the pixels associated with some regions of the image will change as cars pass through the image field. Figure 8 shows an image field 81 representing the scene where the buildings, trees and sidewalk are unchanged, but where the centers of the roadway lanes 82 have frequently changing pixel data (passing cars). Detecting movement in live images is not a new concept. With knowledge of where there is apt to be a great probability of movement, a game scheme can provide for the computer to select a few image positions which correspond to those locations. Figure 9 shows three "motion" positions 92 in the image field 91 which have been selected by a computer processing routine designed to choose locations associated with high frequency movement. It is further possible for this particular game scheme to probe the locations from time to time and present the following condition test: "has the pixel changed (color or intensity) since the last test?". A change means there is presently movement within the image at that test location.
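One plausible way to select the motion probes and run the condition test is sketched below with NumPy; the change tolerance and probe count are illustrative assumptions:

    import numpy as np

    def select_motion_probes(frames, num_probes=3, tol=10):
        # Pick the image positions that changed most often over a
        # sequence of grayscale frames, e.g. the lane centers of Figure 9.
        stack = np.stack(frames).astype(np.int16)
        change_counts = (np.abs(np.diff(stack, axis=0)) > tol).sum(axis=0)
        order = np.argsort(change_counts.ravel())[::-1][:num_probes]
        width = change_counts.shape[1]
        return [divmod(int(i), width) for i in order]   # (row, col) positions

    def movement_at(probe, previous_frame, current_frame, tol=10):
        # The condition test of the text: has the pixel changed since the last test?
        row, col = probe
        return abs(int(current_frame[row, col]) - int(previous_frame[row, col])) > tol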
In Figure 10, a player is represented by a frog and is tasked with the assignment of jumping across busy lanes of traffic. The frog can remain safely at a location until movement in the real scene is detected there, or until the user, by controller (joystick), causes the frog to jump, as indicated by the arrow 103, to the next point 101. Experts will recall a well known video game sometimes called "Frogger" where a similar strategy is called upon; such recollection is very useful for distinguishing the present invention from the art. The scenes in "Frogger" are completely generated by computers: if Frogger is "pointed" in a new direction there is no response in the game, as there is no communication between the game device and the environment it is in.
Figure 11 is an illustration of how a composite image of a frog game might look in devices of the invention. An image field 111 contains the composite image comprised of: a real scene having trees, buildings, roadways and traffic; and a computer generated image including, as objects, computer generated frogs 112, hopping frogs, and squished frogs 113. The game scheme may also provide computer generated images having objects which do not respond to user movements but may randomly attack an icon (frog) representing a user.
Figure 12 is an image field 121 which illustrates another hazard in a frog game. Computer generated objects may be responsive to: 1) user commands, such as a "hop" command; 2) objects in the image of the real scene, such as moving cars; or 3) other computer generated objects appearing from time to time, like a "road shark".
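A single update step combining these three response sources might be sketched as follows; the frog's fields, the probe list, and the road sharks are hypothetical names, not the patent's:

    def update_frog(frog, hop_pressed, prev_frame, cur_frame, probes, sharks, tol=10):
        if hop_pressed:                      # 1) user command
            frog.lane += 1                   # hop to the next probe point
        row, col = probes[frog.lane]
        changed = abs(int(cur_frame[row, col]) - int(prev_frame[row, col])) > tol
        if changed:                          # 2) motion in the real scene (a passing car)
            frog.alive = False
        if any(shark.lane == frog.lane for shark in sharks):
            frog.alive = False               # 3) another computer generated object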
Still another example is shown in Figure 13. An image field 131 contains an image of a real scene having buildings 133, sidewalks 134, a stop sign 135, and people 132. By measuring its position and attitude, a computer could identify the exact scene being addressed and could recall from a previously programmed memory a model of important surfaces in the scene. Because the people are highly mobile, it is unlikely that a model could include information about them; the recalled model may only have information regarding some features of the real scene. Since sidewalks and stop signs tend to remain fixed over long periods of time, we could expect the model to include representations thereof. In Figure 14, a model of the scene being addressed is presented. It contains lines representing edges 142 of objects, for example a stop sign 143. Although edge detection image processing techniques could be applied to images of real scenes to produce similar models, we use the example here of a model recalled from memory based on device position and attitude measurements. Recall that we do not actually present to a user an image of the model shown in Figure 14; the computer uses it to affect computer generated objects which interact with the scene. We say the model exists in the game logic domain. Now we suggest a scheme where computer generated objects interact with the real scene via a model which describes the real scene. Surfaces of the model are accurately located and correspond to surfaces in the real scene, where the angles of those surfaces can be known with respect to the pointing direction of the device. A computer generated golfer 152 can "approach" a real scene and play a golf shot off the objects therein.
Figure 16 shows how a computer generated golfer 162 may look in a composite image field 161 containing a real scene. When the golfer 172 "hits" the ball 173 by way of a user input to the computer, the ball takes a trajectory 176 which responds 175 to a model surface (not shown, but having a corresponding surface 174 in the real image) known by the computer and which fairly represents the scene being addressed.
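The bounce itself reduces to reflecting the ball's velocity about the normal of the known model surface; a minimal sketch, in which the restitution (energy loss) factor is an illustrative assumption:

    import numpy as np

    def reflect(velocity, surface_normal, restitution=0.7):
        # v' = v - 2 (v . n) n, scaled by an energy-loss factor; the
        # surface normal comes from the prerecorded scene model.
        n = surface_normal / np.linalg.norm(surface_normal)
        return restitution * (velocity - 2.0 * np.dot(velocity, n) * n)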
A game scheme can also be arranged to include more than one player; each player, being remotely located with respect to the others, could have a unique perspective of the playing field. Further, a game scheme is provided in which one player may attack another player via the player's game icon or the player's actual position in the real world.
Figure 18 shows a scene of a city as it may appear to the first user. The image field 181 contains: 1) optically acquired imagery representing buildings 184; 2) computer generated imagery 183 and 182 representing players; and 3) combinations thereof 185, where real buildings (optically acquired) appear on fire (computer generated). Figure 18 is a hand drawn cartoon where the buildings do not appear to be an image of a real scene; it is intended for illustration and comparison with Figure 19. A true composite image which accurately reflects the way the image may appear to a game user is presented as Figure 20.
Figure 19 shows the same scene as Figure 18 from the point of view of the second player. The optically acquired portion of the composite image containing the buildings 194 and 195 naturally appears in a different perspective compared to Figure 18; the computer generated objects 192 and 193 are shifted in perspective as compared to the first player's display, as prescribed by the game computers, which track the positions of the objects and the players to allow for proper perspective under the game scheme. As the events of the game are executed, both players see the same events from different perspectives. If the lizard (183 in Figure 18 and 193 in Figure 19) turns his head 90 degrees to the lizard's right side, then the lizard of Figure 18 would be looking approximately into the direction of the first player's camera (west), while the lizard of Figure 19 would be looking towards the right edge of the composite image field (west again, in agreement between the images at both systems).
The first player may have the ability to turn the head of the lizard as a game command. In this case, the first user's computer must alert either a host computer or the other player's computer of the instruction, so as to allow the second player's computer to respond to the instruction in a fashion which corresponds to the first player's computer. Game schemes may be created where certain instructions are "privileged" and are not shared with the other player but still may affect the game; a condition where a player is low on weapons or fuel may be kept secret as part of the game strategy. Game designers will undoubtedly find many such arrangements. It is important here to realize that communication can exist between two systems at various locations about an environment, which allows two players to address common scenes and play a common game. An action, for example a fire, within a scene is shared by both systems, albeit in different perspectives 185 and 195.
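The required communication might be sketched as follows; the event names, fields, and JSON encoding are illustrative assumptions, the text requiring only that non-privileged instructions reach the host or peer computer:

    import json

    def encode_event(event_type, payload, privileged=False):
        # Privileged instructions (e.g. "low on fuel") are kept local.
        if privileged:
            return None
        return json.dumps({"type": event_type, "payload": payload})

    def apply_event(world, message):
        event = json.loads(message)
        if event["type"] == "turn_head":
            # Stored as a world-frame bearing so each player's computer
            # can render the head turn in that player's own perspective.
            world.lizard_heading_deg = event["payload"]["heading_deg"]

    # First player's system: the lizard turns to face west.
    message = encode_event("turn_head", {"heading_deg": 270.0})
    # ...message is sent over the link; the second system calls apply_event.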
Figure 20 shows what the composite image field 201 may look like to the first player. The opponent's player icon, the swordsman 202, is threatening the icon representing the first player, the lizard 203, as a building burns in the background 204.
Finally, in Figure 21, a composite image field 221 contains a jet 222, which can be flown via user command input, and a monster 223. Bullets 224 can be fired by user command from the moving jet in accordance with common rules of motion. Bullets may strike real objects, such as buildings, and appear to injure those buildings in the composite images. The real building 225 may be completely intact in the real world but appear to be burning in the composite image; the area 227 in the composite image where the top of the real building should appear has been replaced with computer generated sky and fire, and a computer generated building top 226 has been added to the composite image to appear in the hand of the monster.
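Such replacement is a masked composite; a minimal sketch, assuming the game scheme supplies a boolean mask marking where computer generated pixels are to be shown:

    import numpy as np

    def composite(real, generated, mask):
        # Where `mask` is True the computer generated pixel (fire, sky,
        # the building top in the monster's hand) is shown; elsewhere
        # the optically acquired pixel shows through.
        return np.where(mask[..., None], generated, real)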
The useful hardware to be assembled to perform the functions described includes: an electronic camera; a computer; and a display, each being commonly available equipment without special features. The camera has a lens axis which defines the camera pointing direction; it can receive optical input and form an image onto an image detector, where it is converted to an electronic signal which represents the image. The computer has memory and the other sub-systems generally associated with computers, and is in communication with the camera, from which it can receive signals representing images.
The computer is then operable for: processing the electrical signal according to a game scheme to extract information relating to features of the real scene; generating an image; combining that image with the image of the real scene to form a composite image; and transmitting the composite image to a display. It is important to note that when the language "generate an image" is used, it is implicit that the word "signal" follows "image": the computer always handles information in digital form, and when it "generates an image" it really produces a digital signal which represents an image when it is played to a device which converts such signals to optical image patterns. The display is in communication with the computer and usable for receiving the composite image signal and presenting it as optical output to the user, where the image is aligned with the camera pointing direction to provide a direct direction correspondence with the real scene, giving the user the appearance of looking directly at the scene.
The primary steps to be performed to realize the functions described include: addressing a real scene and forming an image thereof; processing the real scene to extract feature information therefrom; generating an image with a computer in accordance with the features extracted; combining the images according to a game scheme to form a composite image; and displaying the composite image aligned to the real scene.
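By way of illustration, these steps might be arranged into a per-frame loop such as the following sketch, in which camera, display, and the game-specific helpers are hypothetical stand-ins for the commonly available equipment described above:

    import numpy as np

    def game_loop(camera, display, game):
        while game.running:
            frame = camera.capture()                       # address the scene, acquire an image
            features = game.extract_features(frame)        # process the image for content
            overlay, mask = game.generate_image(features)  # computer generated image and where it goes
            shown = np.where(mask[..., None], overlay, frame)  # combine into a composite image
            display.show(shown)                            # display aligned with the real scene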
Again, the invention does not concern merely flying stork games, jet fighter games, frogging games, golf games, or monster games in particular, but games having computer generated images which interact with optically acquired images of real scenes from a user's environment in general. The various embodiments were specifically designed to illustrate important features of the invention including: firstly, that a computer game can be designed to be responsive to optically acquired images of real scenes from the user's environment; and secondly, that the pointing direction of the device dictates which real scene is being addressed, and therefore that the game can be responsive to dynamic, user controlled, real scenes about the user's environment.

Abstract

Computer games designed to interact with real world environments (Figs. 1, refs. 2-5, and 13, refs. 132-134). A camera captures an optical image (Figs. 1 and 13) of some scene, which is processed (Figs. 2-3 and 14) for particular image features. The results of the processing are integrated into a game scheme, and the image is combined with computer generated imagery (Figs. 4-6, refs. 41-43 and 51-52; Figs. 15-16, refs. 152 and 162) to form a composite image (Figs. 6 and 17). The augmented real scene is then sent to a display, where the composite image is displayed (Figs. 6 and 17) aligned with the real scene. Actions taken by the user drive both the computer generated imagery and the real scene imagery.

Description

Title: "Computer Games having Optically Acquired Images which are Combined with Computer Generated Graphics and Images"
Specification for a PCT patent application
Background

The field of the invention generally concerns computer games and particularly computer games having optically acquired images which are combined with computer generated images, either of which may be responsive to the other or to some user actions.
Computer games, sometimes referred to as "video games", typically have displayed images or image series having objects and features therein which can be manipulated via a player's input. By way of some control mechanism, for example a joystick, a player interacts with and controls images displayed at a monitor. Games can be presented as scenarios having characters and objects taken from real life and fantasy worlds. A player typically performs some series of tasks by manipulating a character or an object of the scene. A popular game called "Street Fighter" pits human or pseudo-human characters against each other in a street fight.
A very simple example of a computer game is known as "PONG". A player, in control of a computer generated paddle, tries to "hit" a computer generated moving ball. User input from a joystick directs the motion of the paddle to effect a "hit". We say that the computer generated image is responsive to user actions. Furthermore, the ball is responsive to the location of the paddle in the image field. If the ball is incident on the paddle, then it is deflected therefrom; if the ball advances past the paddle, it continues its course out of bounds of the image field. Therefore, elements of the computer generated image can be responsive to certain conditions or features of the image itself as well as user inputs.
The game Street Fighter is spectacular in that the scenes greatly resemble real world scenes, including backgrounds which move with realistic perspective. The background of the fictional scene is generally made to resemble some known locale or geographical region. The background not only provides a realistic scene, but is also functional in some instances: game characters can sometimes act on objects of the background. As a reward in the "Street Fighter" game, a player is allowed to bash the hell out of a car after having successfully bashed the hell out of an opponent. The background is entirely comprised of computer generated imagery. Although the background may resemble a known real scene, the game background has no relationship to the actual location of the game device. For a machine typically located at a downtown drugstore, there is no interaction between the game images and the game's real environment. The entire image series, including all objects and features thereof, is contained in computer memory and is recalled at the appropriate time in a game scheme.
Computer games are generally comprised of: a computer operable for executing logic routines arranged into some game scheme and for generating video images in accordance therewith; tactile user input devices such as joysticks, track balls, control buttons, et cetera; and a graphical user interface or display monitor. The realism of the game can depend on how the display is arranged to present images to the user. Basic systems may use a simple cathode ray tube (CRT) display, while advanced "Virtual Reality" systems may employ surround sound and video to enhance the feel of the user's environment as it may relate to the game. Virtual Reality (VR) refers to an environment where the user is "submersed" in a display. VR schemes are very useful to increase the realism of video game environments. The systems provide a new feeling to video games, as a player may be surrounded by images relating to the game. In this way, it is possible to have an opponent sneak up behind a player while the player is not looking in a particular direction. A player who physically turns his head or body around to face a different direction faces a different part of the game scene. Therefore, the game scheme incorporates the sense of direction with respect to the game user's true environment into the images presented. Players using simple display devices look into them without the possibility of "turning around" to see what may be behind them. Viewing angles for simple monitors may be limited to a few degrees of solid angle, but could be as high as 4π steradians for VR systems. "Looking around" becomes an important player activity in virtual reality games.
Although VR schemes provide basic interaction between a player and his real environment, that interaction is limited to the sense of direction. Moreover, like the previous games, the entire image catalog is recorded in memory or is generated according to rules of the game scheme designed to provide dynamic perspectives of particular scenes. The true scenes in the environment of the game are of no consequence to the game being played.
While the systems and inventions of the prior art are designed to achieve particular goals and objectives, some of those being no less than remarkable, these inventions have limitations which prevent their use in new ways now possible. Previous inventions are not used and cannot be used to realize the advantages and objectives of the present invention.

Summary
Comes now, John, Thomas and Peter Ellenby with an invention of computer games including devices for and methods of providing games which interact with a user's environment. The present invention is concerned with the next bold step into completely new imaging techniques we call "Augmented Reality". Augmented Reality (AR) refers to computer generated imagery which interacts with "live" video images of real scenes, the computer generated imagery being the "augmentation" and the real scenes providing the "reality". Computer games employing AR techniques combine images of real scenes with computer generated images. The computer generated images have characters and objects therein which are responsive to a player's input as well as being responsive to features of the images in accordance with some game scheme. It is a primary function of the invention to provide computer games which interact with a user's environment. In contrast to methods and devices of the art, the present systems involve imagery relating to real objects existing in the vicinity of the user. A fundamental difference between the computer games of the present invention and those of the art can be found when considering their behavior and response with regard to scenes related to their actual environment.
An optically acquired image herein refers to a "live" image of a real scene. "Live" means the image of the scene is updated in a short period of time, such that a user appears to be looking at the real scene as the scene exists at all times it is being addressed. The system is said to respond in "real time". Game devices of the invention are equipped with an electronic camera operable for addressing a scene and producing an electronic signal representing an image thereof. Electronic cameras are typically comprised of a lens, having an axis which defines the camera's pointing direction, and an image detector. By pointing the lens toward a scene, an optical input is converted to an image signal. It is desirable for devices of the invention that the image signal be in a format which is processable by a computer.
Optically acquired images are processed by the computer for content. Certain image features, such as color, intensity, motion, or many others, are detected and used to form elements of a game scheme. Optically acquired images can be processed in many ways to extract various types of information relating to the scene being addressed. It is important for the game concepts to extract information relating to the scene and to provide a game response which relates to or is based on that information. A computer generated image herein refers to images or portions of images generated by a computer, either wholly synthesized or "clip art" recalled from computer memory. In many embodiments, computer generated images of the invention include a single object which, when presented sequentially as an image series, appears to exhibit object motion. The computer is operable for generating and playing the series of computer generated images to form video which moves in real time.
Optically acquired images and computer generated images can be processed such that they are combined, overlaid, or superimposed together to form a composite image. Various image processing routines can be employed to effect the combination of optically acquired images and computer generated images, as is well known in the imaging arts. Composite images are displayed to the game user on a display which is aligned to the optical axis of the camera such that there is direction correspondence between the real scene and the displayed image. This gives the user the feeling of "looking through" the device at the real scene.
User actions, including input and commands, can be made to affect both types of images. For example, if the user pans the camera left, then the scene being addressed changes and so does the optically acquired image. In addition, the user may employ tactile inputs to drive certain image activity. An object in the computer generated imagery can be responsive to user direction via a joystick. Objects in the computer generated image domain may also be responsive to features of the optically acquired image. For example, a game scheme may call for any computer generated image object which is superimposed onto a red image pixel to be removed from the composite image.
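The red-pixel rule just mentioned might be sketched as follows; what counts as "red", and the object fields, are illustrative choices:

    import numpy as np

    def on_red_pixel(image_rgb, x, y):
        # An object superimposed on a "red" real-scene pixel is removed
        # from the composite image; the channel limits are illustrative.
        r, g, b = image_rgb[y, x]
        return r > 150 and g < 80 and b < 80

    def cull_on_red(objects, image_rgb):
        # objects: hypothetical sprites with .x and .y positions.
        return [ob for ob in objects if not on_red_pixel(image_rgb, ob.x, ob.y)]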
In advanced versions where processing of the optically acquired image is not sufficient for obtaining all of the desired information regarding the real scene, the devices can rely on other methods of realizing such information. For example, certain features of a particular scene can be pre-recorded into a data store. When the computer identifies that the particular scene is being addressed by the game device, the computer can supply the recorded information to the game logic routine where it is used to control the game scheme. One way in which a computer identifies or recognizes a scene being addressed is by measuring the position and attitude of the camera. Details of this method are set forth in detail in the parent application from which this application depends.
In further advanced versions, a plurality of players and game systems may be in communication with one another. A single scene may be addressed by more than one user from various locations, each having its own perspective of the particular scene. Computer generated imagery in one player's displayed composite image can be made to correspond directly to computer generated imagery in a second player's displayed composite image, but in the proper perspective relating to that user. In addition, the game scheme may incorporate into the game strategy the user's true position as well as a user's position as represented by a character's position within the scene. Whereas basic concepts have been presented, apparatus and methods have been devised which serve those concepts. Accordingly, there is provided: a computer game apparatus operable for combining images of real scenes with images generated by a computer to form a composite image, the composite image being responsive to features of the images of real scenes, the apparatus comprising: a camera; a computer; and a display, the camera having a lens axis which defines a camera pointing direction and being operable for: receiving an optical input, converting that input to an electrical signal processable by a computer, and transmitting that electrical signal to the computer; the computer having memory, a CPU, input/output means, et cetera, and being operable for: receiving the electrical signal, processing the electrical signal according to a game scheme to extract information relating to features of the real scene, generating an image, combining that image with the image of the real scene to form a composite image, and transmitting the composite image to the display; the display having a substantially planar surface and associated normal direction and being operable for receiving the transmission of the composite image and presenting it as optical output, the display aligned to the camera pointing direction to provide a direct direction correspondence with the real scene.
and: a computer game method of combining images of real scenes with images generated by a computer, the images generated by the computer being responsive to features of the images of real scenes, comprising the steps of: a) addressing a scene; b) acquiring an optical input; c) converting the optical input to an electronic signal; d) processing the electronic signal; e) forming a computer generated image; f) manipulating the computer generated image; g) combining the optically acquired image with the computer generated image to form a composite image; and h) displaying the composite image; said addressing a scene step including pointing a camera at a scene; said acquiring an optical input step including receiving an optical input at a camera lens and forming an image of the scene being addressed onto a detector plane; said converting the optical input to an electronic signal step including detecting the light intensity and color in a spatial relationship and producing an electronic signal which corresponds thereto; said processing the electronic signal step including detecting features of the image signal according to a predetermined rule set; said forming a computer generated image step including forming an image according to some predetermined game scheme rule set; said manipulating a computer generated image step including manipulating the computer generated image in accordance with the game scheme rule set; said combining step including forming a composite image comprised of information from the optically acquired image and the computer generated image; and said displaying step including presenting the composite image aligned to the real scene.
It is a primary object of the invention to provide computer games which interact with a user's environment. It is further an object to provide vision system devices operable in a game mode. It is an object of the invention to provide systems of processing an optically acquired image in combination with a computer generated image in accordance with a game scheme or strategy. It is still further an object of the invention to provide games having augmented real images. It is still further an object of the invention to provide advanced computer game schemes allowing features of images and user inputs to control image activity.
A better understanding can be had with reference to the detailed description of Preferred Embodiments and with reference to the appended drawings. These embodiments represent particular ways to realize the invention and are not inclusive of all ways possible. Therefore, there may exist embodiments that do not deviate from the spirit and scope of this disclosure as set forth by the claims, but do not appear here as specific examples. It will be appreciated that a great plurality of alternate versions are possible.
Brief Description of the Drawings
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims and drawings where: Figure 1 is an image of a real scene of clouds taken from a game user's environment;
Figure 2 is the image of Figure 1 processed into a binary intensity level array; Figure 3 is an image similarly processed to a different binary intensity level array; Figure 4 shows an image field containing certain computer generated objects;
Figure 5 illustrates interaction between a processed optically acquired image and computer generated objects;
Figure 6 is a composite image containing optically acquired imagery and computer generated objects combined together as it may appear to a user; Figure 7 is a line drawing example of a second real scene;
Figure 8 shows an image feature of the real scene which is processed to form the basis of a game scheme; Figure 9 shows a computer logic domain result of considering activity in the scene and applying a methodology in accordance therewith;
Figure 10 shows computer generated objects and their actions within the real scene; Figure 11 shows a composite image constructed according to a particular game scheme as it may be presented to a game user;
Figure 12 shows additional interaction in a composite image between computer generated objects and objects of the real scene;
Figure 13 shows still another example of a certain real scene; Figure 14 shows a wireframe model of that scene which is known to a computer of the device;
Figure 15 shows a computer generated object within the wireframe model of the scene;
Figure 16 shows how a composite image of the game may look to a user; Figure 17 shows interaction of computer generated images and optically acquired images within a composite image;
Figure 18 shows a cartoon drawing depicting still another example of a real scene having computer generated characters therein;
Figure 19 shows a second perspective of the scene of Figure 18 as it may be viewed from a second game user in a different location than the first user;
Figure 20 shows a composite image of the present example formed with an optically acquired image and computer generated imagery;
Figure 21 illustrates a few sophisticated interactions which may take place between an optically acquired image of a real scene and computer generated imagery.
Preferred Embodiments of the Invention
In accordance with each of the preferred embodiments of the invention, there is an apparatus for and method of providing a computer game which responds to and interacts with the immediate environment. It will be appreciated that each of the embodiments described includes both an apparatus and a method, and that the apparatus and method of one preferred embodiment may be different from the apparatus and method of another embodiment.

In a first preferred embodiment, which has been constructed for its simplicity and is believed to best illustrate some of the basic concepts of the present invention, an image of a real scene which contains simple features is considered. Figure 1 shows an image of a clouded sky. A user of the invention could address such a scene by pointing the camera up toward the clouds. We call the image of the sky and clouds "an image of a real scene". By "real scene" we simply mean some scene as it may appear to a person looking about one's environment. By comparison, scenes formed by artists on various media, such as paintings or cartoons for example, are not considered "real scenes", as objects within those scenes may be purely fictional. Clouds are real objects which can be found in one's environment, and the image of Figure 1 is therefore an "image of a real scene". The image has a frame 1 which defines the extent of the image field. Certain areas of the image represent clouds, for example area 2; and certain other areas of the image represent clear sky, for example area 4. An apparatus of the invention having a camera addressing the real scene containing clouds is useful for forming an electronic image thereof. The image is comprised of picture elements or "pixels" of various intensity levels. Dark regions in the clouds' shadows 5 may appear as low intensity areas. The image artifact that appears as small squares has been exaggerated in this image for discussion. The small squares 3, or pixels, are unit image areas of uniform intensity.
Now considering the image of Figure 1, it is desirable to construct a game scheme which involves the real image. Features of the real image, for example features relating to intensity, can be incorporated into the scheme. This scheme may include actions to be taken if some condition is met. An example of a logic step involving the condition may be: if the pixel is dark, then the condition is met; if the pixel is light, then the condition is not met. Real images of clouds consist of continuous tones, and the meaning of "dark" and "light" becomes ambiguous. To resolve this ambiguity, we apply a processing step to the optically acquired continuous tone image. In a computer routine, an intensity threshold is applied at each pixel, and each pixel is then represented with either a "1" (dark) or a "0" (light). The image of Figure 2, having an extent 21 similar to that of Figure 1, has a black region 23 or a white region 22 at every pixel. The threshold level can be changed as desired to enlarge either the dark areas or the light areas. Figure 3 shows the image having been processed with a higher threshold, resulting in more white area 32 within the image field.
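By way of illustration only, a minimal sketch of such a binarization routine follows, assuming the optically acquired frame arrives as an 8-bit grayscale array; the function name and threshold value are ours and are not part of any described implementation.

    import numpy as np

    def binarize(frame: np.ndarray, threshold: int = 128) -> np.ndarray:
        """Reduce a continuous tone image to binary intensity levels.

        Each pixel becomes "1" (dark, condition met) or "0" (light,
        condition not met); moving the threshold trades dark area for
        light area, as between Figures 2 and 3.
        """
        return (frame < threshold).astype(np.uint8)

    # An 8-bit grayscale frame such as a camera might supply
    frame = np.random.randint(0, 256, size=(120, 128), dtype=np.uint8)
    mask = binarize(frame)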
By selecting any position within the image, one can ask if the condition is met or not. For example: at position 33 we say the condition is not met; at position 34 the condition is met. If we superimpose the optically acquired image with a computer generated image having certain objects therein to form a composite image, each of the objects will have associated with them certain positions. We can then ask by way of a computer program if the computer generated object has a position which is coincident with a pixel of the real image which meets the condition. The computer generated object can then be made to respond to the result of the condition test. For example, if the condition is met then the computer generated object is removed from the composite image. One will appreciate that the possibilities are numerous but that this simple example is useful for presentation here.
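A sketch of that coincidence test, again with illustrative names and positions of our own choosing, might look as follows.

    import numpy as np

    def condition_met(mask: np.ndarray, row: int, col: int) -> bool:
        """True where the processed real image is dark at the given position."""
        return bool(mask[row, col])

    # Hypothetical game response: generated objects coincident with a
    # dark pixel are removed from the composite image.
    mask = np.zeros((120, 128), dtype=np.uint8)
    mask[60:, :] = 1                        # suppose the lower half is dark
    objects = [(10, 42), (90, 42)]          # (row, col) of generated objects
    objects = [p for p in objects if not condition_met(mask, *p)]
    # only (10, 42) survives; (90, 42) sat on a dark pixel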
Figure 4 shows an image field 44 containing computer generated imagery including: an object 41 which represents a stork; an object which represents a sack having a baby therein; and an object 43 which is a combination of the two. To further illustrate how a game strategy can be constructed, we assign attributes to the various objects. As usual, the array of possible attributes is large and it is not possible to define them all here. Sacks which are not attached to storks "fall", or advance from the top of an image field toward the bottom. The computer "launches" sacks into the image field near the top at various controlled intervals. As a sack falls toward the bottom of the image field, a stork controlled by a user can be manipulated to fly towards the sack. We make a game rule where the stork is allowed to fly in light areas of the real scene but not in dark areas. For our game, storks must fly in the clouds but avoid clear skies. Various game schemes may have various rules.
Figure 5 illustrates a game scheme as it shows how the computer generated image may behave within the computer logic domain. A flight path 53 of the stork 52 is shown in the figure. According to the game scheme, the stork must remain within the "clear" areas 56 of the image field 57 until it reaches an intercept point 54 where the sack 51 can be captured. If the stork "hits" or flies into a dark area 55 (where the condition is met), then the stork suffers some consequence (action); it disappears, dies, blows up, et cetera. The object of the game is for the player, represented by the stork character, to navigate through the clouded sky and capture the sack before it hits the bottom of the image field, where presumably something disastrous happens to the baby. If the stork captures the sack 43, the baby is "saved". The real scene has been processed according to some design rules consistent with a game scheme such that computer generated objects are responsive to certain features (intensity patterns) of the real scene.
The image of Figure 5 is not suitable for presentation to a user; rather, it reminds us that "clear" areas and "prohibited" areas exist in the computer's logic domain. The image presented to a user appears as the image in Figure 6. In Figure 6, an image field 62 contains a composite image having an optically acquired image of a real scene, formed by an electronic camera, combined with computer generated objects such as storks and sacks 61. The composite image is a "live" image in that as the real clouds change, the image changes accordingly. This is easily accomplished with simple video type cameras such as a common camcorder. Note that game schemes should be developed in anticipation of real scenes which change. The computer should process the images in "real time" to apply the condition test of Figure 3, determine the appropriate response of the computer generated image, and combine the two images to form a composite image to be displayed. If the device is pointed in a different direction, different cloud patterns would be imaged and a different pattern of black and white regions will result after the processing step. This has strong implications regarding the computer generated objects and how they might respond to movement of the device.

Although the above example is spectacular in that it incorporates into a game scheme images of real scenes from the user's environment, i.e. the sky and clouds about the user, a second and more remarkable feature can be understood by considering the following description. As a sack approaches the bottom of an image field, a user can "buy time" by panning the device downward. In this way, the particular scene being addressed by the device changes in response to the direction in which the user points the device. In the image field, the sack would appear to rise back toward the top, since the sack references its fall rate to a location (pixel) in the real scene. Regardless of the up and down pointing motions of the device, the sack falls at a constant rate with respect to a point in the real scene. This offers more opportunity for the stork to capture and save the sack. The panning does not go without limits. Eventually, the user reaches the limits of the sky when the device is pointing horizontally and the horizon comes within the field of view of the camera; the baby's journey necessarily ends if it is not captured before encountering the horizon.

In simple examples, the stork can be made to always appear in the center of the composite image. This is similar to "boresighted" crosshairs seen in the image field of simple cameras. To advance or "fly" the stork through the real scene as specified by the rules, the user could point the device in various directions so that the center of the image field (always containing the stork) remains in light image areas. The sack would track across the composite image field as left and right panning motions provide.
Better schemes may allow the sack and the stork to "lock" onto references in the real scene. The stork can then be advanced with respect to its reference via a joystick type controller. The sack would be locked onto a vertical reference line of the real scene as the sack advances downward. Panning of the device over large angles may cause computer generated objects which are locked to certain references in the real scene to pass outside the field limits of the displayed composite image. Locations of these computer generated objects which have been panned out of the limits of the displayed image are not necessarily lost. It is easy for a computer routine to be arranged to count pixels which pass an image field edge in the panning process. A panning motion in the opposite direction could then re-acquire the computer generated objects left "locked" onto a feature of the real scene. Consider the object 61 in Figure 6 which is about 10 pixels from the left edge of the image field 62 which is 128 pixels wide. If the width of the image field corresponds to 10 degrees of field-of-view, then a pan to the right of 20 degrees would necessarily mean that 256 pixels pass either edge. The magnitude of this action is easily accounted for in simple computer routines which operate on images. If the device is panned 20 degrees right and the object 61 is referenced to appear to remain with the cloud it is shown next to, then neither the cloud nor the object would appear in the new live image. The new image would simply contain the clouds which were 20 degrees away from the clouds shown in the figure as observed from the user's point of view. A return pan, 20 degrees left, could then re-acquire the object as the computer "knows" approximately where it left the object. Even if the clouds change pattern, the number of pixels to relocate the object could be easily tracked.
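This bookkeeping can be sketched as follows, using the figures just given (128 pixels spanning 10 degrees); the routine and its names are ours, offered only to show the arithmetic.

    FIELD_WIDTH_PX = 128                     # displayed image field width
    FIELD_WIDTH_DEG = 10.0                   # camera horizontal field of view
    PX_PER_DEG = FIELD_WIDTH_PX / FIELD_WIDTH_DEG   # 12.8 pixels per degree

    def pan(locked_objects, degrees_right):
        """Shift each locked object's column as the device pans right.

        Columns outside [0, FIELD_WIDTH_PX) are simply off screen; the
        positions are retained so a return pan re-acquires the objects.
        """
        shift = degrees_right * PX_PER_DEG
        return [(row, col - shift) for (row, col) in locked_objects]

    locked = [(40, 10.0)]          # object 61, about 10 px from the left edge
    locked = pan(locked, 20.0)     # 20 degree right pan: column becomes -246
    locked = pan(locked, -20.0)    # return pan re-acquires it at column 10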
This pixel bookkeeping is remarkable, as an extension of the same concept allows the game field-of-regard or playing region to extend over an entire 360 degrees of left-right panning. Since up and down directions could be similarly tracked, the game's field of regard is a solid angle of 4π steradians. A user could be completely "surrounded" by dropping sacks. As the device is panned over any pointing directions, various positions may have falling sacks. In this way the stork could rescue one sack, pan 30 degrees and find a second sack falling, all while a third sack is falling behind the user at 180 degrees. Admittedly, one might look quite silly panning a viewing device back and forth across the skies in search of falling sacks.
Now, as sacks can be made to appear in various sizes we can arrange them to appear to a user to be at various distances therefrom. Sacks appearing as large objects would represent sacks which were nearer to the user than small sacks. Since objects in real life appear to fall faster when they are near than when they are far, we can make a game having quickly falling near sacks and slowly falling distant sacks. By varying the number of sacks and their fall rates, the game can be made to accommodate various levels of skill.
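This depth cue can be expressed with a simple inverse-distance scaling, sketched below under our own assumed units; a real game scheme could of course tune these relations freely.

    def apparent_size(base_size_px: float, distance: float) -> float:
        """Nearer sacks are drawn larger; apparent size scales as 1/distance."""
        return base_size_px / distance

    def apparent_fall_rate(base_rate_px: float, distance: float) -> float:
        """On-screen fall rate also scales as 1/distance, so near sacks
        cross the image field faster than distant ones."""
        return base_rate_px / distance

    # A sack at unit distance falls twice as fast on screen as one twice as far.
    near, far = apparent_fall_rate(8.0, 1.0), apparent_fall_rate(8.0, 2.0)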
The images of Figures 2 and 3 are optically acquired images of real scenes having been processed into some desirable format according to a scheme set forth by a game designer. A great plurality of games is possible, each having its own objectives, and the number of possible ways to process an optically acquired image is limitless; it is not practical to attempt to describe them all here. However, for illustration, a simple process of "binarization" or "intensity thresholding" has been chosen to show how optically acquired images can be processed and manipulated to cooperate with computer generated images, including computer generated images controlled by a user. The foregoing examples are very specific. It will be appreciated that they are used only to illustrate how certain features of the invention interact with each other and with the user. The invention does not concern a flying stork game, but a game having computer generated images which interact with optically acquired images of real scenes from a user's environment, which in turn interact with user actions including input and commands. This first embodiment was specifically designed to illustrate two important features of the invention: firstly, that a computer game can be designed to be responsive to optically acquired images of real scenes from the user's environment; and secondly, that the pointing direction of the device dictates which real scene is being addressed, and therefore the game can be responsive to the dynamic, user controlled real scenes about the user's environment.
As a bit of imagination can be used to specify new attributes of certain computer generated objects and of objects in the real scene, one can easily formulate a great plurality of new game schemes which rely on the same principles taught here. For example, a computer generated target jet plane can fly about the sky in random patterns while a user controlled jet tries to shoot at the target plane. The clouds could conceal the target jet and user jet from view in the composite image at various cloud locations. This compares to the previous example where the clouds might disqualify a player by causing his stork to "die". The interaction of computer generated imagery and optically acquired imagery can be embodied in many ways.

In a second preferred embodiment, which illustrates a more advanced image processing step, the computer contemplates the image of a real scene and makes some determinations regarding features of the scene. This is a more sophisticated version of image processing than was used in the previous example, where simple intensity provided the feature of the image to which objects could respond. Figure 7 shows a line drawing of common objects which may be found in a real scene. Trees 73 along a sidewalk next to a multilane roadway 74 next to a row of buildings 72 make up the real scene and the image field 71. A camera pointing at a real scene containing the objects described can acquire images which may be analyzed for certain features. Over some period of time, the pixels in the scene may change due to activity or motion in the real world. For example, if the roadway contains traffic, the pixels associated with some regions of the image will change as cars pass through the image field. Figure 8 shows an image field 81 representing the scene where the buildings, trees and sidewalk are unchanged, but where the centers of the roadway lanes 82 have frequently changing pixel data (passing cars). Detecting movement in live images is not a new concept.

Now, with knowledge of where there is apt to be a high probability of movement, a game scheme can provide for the computer to select a few image positions which correspond to those locations. Figure 9 shows three "motion" positions 92 in the image field 91 which have been selected by a computer processing routine designed to choose locations associated with high frequency movement. It is further possible for this particular game scheme to probe the locations from time to time and present the following condition test: "has the pixel changed (color or intensity) since the last test?". A change means there is presently movement within the image at that test location. No change in the pixel means that there is no movement at that point. One can appreciate that this routine is more complicated than intensity threshold processes but well within the "real time" capabilities of fast computers. With the real scene being addressed by the device and an optically acquired image having been processed as described, the computer is ready to provide computer generated imagery having objects which are designed to respond to the results of the condition tests. In Figure 10 a computer generated frog 102 is caused to appear at a "motion" position within the image. With each video frame, typically 1/30th or 1/60th of a second, the computer could determine if there has been movement in the real scene at the location of the frog.
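One way to sketch such a probe test is frame differencing at the selected positions, shown below with probe coordinates and a change tolerance that are purely illustrative.

    import numpy as np

    def movement_at(prev_frame, curr_frame, probes, tolerance=12):
        """For each probe point, report whether the pixel has changed in
        intensity since the last test, indicating movement there."""
        moved = {}
        for (row, col) in probes:
            delta = abs(int(curr_frame[row, col]) - int(prev_frame[row, col]))
            moved[(row, col)] = delta > tolerance
        return moved

    # "Motion" positions such as the lane centers chosen by the routine
    probes = [(64, 30), (64, 64), (64, 98)]
    prev = np.random.randint(0, 256, (120, 128), dtype=np.uint8)
    curr = np.random.randint(0, 256, (120, 128), dtype=np.uint8)
    moved = movement_at(prev, curr, probes)
    # A frog resting at a probe point is safe only while its entry is False.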
In accordance with a certain game scheme, the frog can remain safely at the location until movement in the real scene is detected or until the user, by controller (joystick), causes the frog to jump as indicated by the arrow 103 to the next point 101. In this way, a player is represented by a frog and is tasked with the assignment of jumping across busy lanes of traffic. Experts will recall a well known video game sometimes called "Frogger" where a similar strategy is called upon. Such recollection is very useful for distinguishing the present invention from the art. The scenes in "Frogger" are completely generated by computers. If "Frogger" is "pointed" in a new direction there is no response in the game, as there is no communication between the game device and the environment it is in.

Line drawings have been used here for simplicity and clarity. In these drawings it may be difficult to appreciate which portions of the composite image correspond to the "optically acquired image" and which parts correspond to the computer generated imagery. Therefore, to make this distinction more clear, the following figures have been provided. Figure 11 is an illustration of how a composite image of a frog game might look in devices of the invention. An image field 111 contains the composite image comprised of: a real scene having trees, buildings, roadways and traffic; and a computer generated image including the objects: computer generated frogs 112, hopping frogs, and squished frogs 113. In addition, the game scheme may provide computer generated images having objects which do not respond to user movements but may randomly attack an icon (frog) representing a user. Figure 12 is an image field 121 which illustrates another hazard in a frog game. Not only does a user face the possibility of being run down by a moving car 124, but also of being attacked by a randomly appearing "road shark" 122. Thus, computer generated objects may be responsive to: 1) user commands such as a "hop" command; 2) objects in the image of the real scene such as moving cars; or 3) other computer generated objects appearing from time to time, like a "road shark".
The preceding examples were devised to show how an optically acquired image of a real scene may be processed by a computer routine to extract certain features which then can be manipulated to form a game scheme. They are simple examples to show that a computer may interrogate the optically acquired image to gather information regarding the scene. There are other methods of deducing information regarding the scene being addressed. One very sophisticated method involves determining the position and pointing attitude of the device to learn which scene is being addressed by the device. A vision system located on Alcatraz Island having a camera pointing west would necessarily be pointing at the Golden Gate Bridge in San Francisco. Details of such systems can be learned from the parent application cited above.
To illustrate how those systems can be used to produce games, consider the image of Figure 13. An image field 131 contains an image of a real scene having buildings 133, sidewalks 134, a stop sign 135, and people 132. With very accurate position and attitude determining means, a computer could identify the exact scene being addressed and could recall from a previously programmed memory a model of important surfaces in the scene. As the people are highly mobile and it is unlikely that a model could include information about these people, the recalled model may only have information regarding some features of the real scene. In particular, as sidewalks and stop signs tend to remain fixed over long periods of time, we could expect the model to include representations thereof. In Figure 14, a model of the scene being addressed is presented. It contains lines representing edges 142 of objects, for example a stop sign 143. Although edge detection image processing techniques could be applied to images of real scenes to produce similar models, we use the example here of a model recalled from memory based on device position and attitude measurements. Recall that we do not actually present to a user an image of the model shown in Figure 14; rather, the computer uses it to affect computer generated objects which interact with the scene. We say the model exists in the game logic domain. Now we suggest a scheme where computer generated objects interact with the real scene via a model which describes the real scene. Surfaces of the model are accurately located and correspond to surfaces in the real scene, where the angles of those surfaces can be known with respect to the pointing direction of the device. A computer generated golfer 152 can "approach" a real scene and play a golf shot off the objects therein. Of course, a real golf ball should not be hit towards a real building. The model, although it regulates the actions of a computer generated ball, is not useful for presentation in the composite image. In fact, it is desirable to omit the model from the composite image presented to the user. Figure 16 shows how a computer generated golfer 162 may look in a composite image field 161 containing a real scene. When the golfer 172 "hits" the ball 173 by way of a user input to the computer, the ball takes a trajectory 176 which responds 175 to a model surface, not shown but having a corresponding surface 174 in the real image, known to the computer and which fairly represents the scene being addressed.
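For the trajectory response, the standard vector reflection about the model surface's normal suffices; the sketch below, with an assumed restitution factor, is ours and shows only the idea.

    import numpy as np

    def bounce(velocity, surface_normal, restitution=0.7):
        """Reflect the ball's velocity off a surface of the recalled model.

        The model itself never appears in the composite image; only the
        ball's response to it is displayed.
        """
        n = surface_normal / np.linalg.norm(surface_normal)
        return restitution * (velocity - 2.0 * np.dot(velocity, n) * n)

    v = np.array([4.0, -3.0, 0.0])       # ball heading down and to the right
    wall = np.array([0.0, 1.0, 0.0])     # upward-facing model surface normal
    v_after = bounce(v, wall)            # rebounds upward, losing some energy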
In consideration of the examples above, one will appreciate many great advantages which the invention provides to computer games with regard to realism. Features of the game respond to the game user's real environment. Even the most sophisticated Virtual Reality games have minimal or no consideration of a user's surroundings.
In yet another preferred embodiment, we present some advanced interaction between game users and their environments. In particular, a game scheme can be arranged to include more than one player; each player, being remotely located with respect to the others, could have a unique perspective of the playing field. Further, a game scheme is provided in which one player may attack another player via the player's game icon or the player's actual position in the real world.
Consider a first player who is located at a position five miles west of a city downtown. A second player is located five miles south of the same city. Each player has a game system allowing that player to view the real city scene as it appears from that player's location. The game system is capable of generating composite images having computer generated objects combined with images of real scenes. Each player's vision system computer is in communication with the other's computer, or with a neutral host computer which is operable for dispensing the game.

Figure 18 shows a scene of the city as it may appear to the first user. The image field 181 contains: 1) optically acquired imagery representing buildings 184; 2) computer generated imagery 183 and 182 representing players; and 3) combinations thereof 185, where real buildings (optically acquired) appear on fire (computer generated). Although Figure 18 is a hand drawn cartoon where the buildings do not appear to be an image of a real scene, it is intended for illustration and comparison with Figure 19. A true composite image which accurately reflects the way the image may appear to a game user is presented as Figure 20. Figure 19 shows the same scene as Figure 18 from the point of view of the second player. The optically acquired portion of the composite image containing the buildings 194 and 195 naturally appears in a different perspective compared to Figure 18; the computer generated objects 192 and 193 are shifted in perspective as compared to the first player's display, as prescribed by the game computers, which track the positions of the objects and the players to allow for proper perspective as prescribed in the game scheme.

As the events of the game are executed, both players see the same event from different perspectives. For example, if the lizard (183 in Figure 18 and 193 in Figure 19) turns his head 90 degrees to the lizard's right side, then the lizard of Figure 18 would be looking approximately into the direction of the first player's camera (west); the lizard of Figure 19 would be looking towards the right edge of the composite image field (west again, in agreement with the images at both systems). If it is the first player who is represented by the lizard, then the first player may have the ability to turn the head of the lizard as a game command. Upon such instruction by the first user, the first user's computer must alert either a host computer or the other player's computer of the instruction so as to allow that second player's computer to respond to the instruction in a fashion which corresponds to the first player's computer. Game schemes may be created where certain instructions are "privileged" and are not shared with the other player but which still may affect the game. A condition where a player is low on weapons or fuel may be kept secret as part of the game strategy. Game designers will undoubtedly find many such arrangements. It is important here to realize that communication can exist between two systems at various locations about an environment which allows two players to address common scenes and play a common game. An action, for example a fire, within a scene is shared by both systems, albeit in different perspectives 185 and 195. Figure 20 shows what the composite image field 201 may look like to the first player. The opponent's player icon, the swordsman 202, is threatening the icon representing the first player, the lizard 203, as a building burns in the background 204.
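The shared-perspective bookkeeping reduces to expressing one shared world position in each player's own camera frame. A toy two-dimensional sketch follows; real systems would use full position and attitude measurements, and all coordinates and names here are invented for illustration.

    import numpy as np

    def to_view(world_point, player_position, pan_deg):
        """Express a shared world point in one player's camera frame."""
        theta = np.radians(pan_deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        return rot @ (np.asarray(world_point) - np.asarray(player_position))

    lizard = (0.0, 0.0)                      # one position shared by both systems
    p1 = to_view(lizard, (-5.0, 0.0), 0.0)   # first player, five miles west
    p2 = to_view(lizard, (0.0, -5.0), -90.0) # second player, five miles south
    # Each system displays the same object, in that player's own perspective.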
It may be possible through game strategy and rules to learn of an opponent's actual location in the real world: not the location of the character icon, which is readily available to both players, but the actual location of the user five miles from the city. In this case, the characters might be able to attack either of those positions and conquer the enemy. To the attacking player, the scene would show the character acting on a certain building known to contain the opponent; but to the player being attacked, the opponent's character could fill the image field and appear to be directly attacking the player's person. It is now easy to appreciate the spectacular realism possible with systems of the present invention.
It is worthwhile to note some further advanced interactions between objects of a real scene, computer generated objects, and actions of a user. In Figure 21 a composite image field 221 contains a jet 222, which can be flown via user command input, and a monster 223. Bullets 224 can be fired by user command from the moving jet in accordance with common rules of motion. Bullets may strike real objects such as buildings and perhaps damage those buildings as they appear in the composite images. The real building 225 may be completely intact in the real world but appear to be burning in the composite image. The area 227 in the composite image where the top of the real building should appear has been replaced with computer generated sky and fire. Similarly, a computer generated building top 226 has been added to the composite image to appear in the hand of the monster. These manipulations are easily realized via image processing techniques such as "morphing" and other cut and paste techniques known in the graphics and imaging arts.
Now, considering the above examples, the useful hardware to be assembled to perform the functions described includes: an electronic camera; a computer; and a display, each being commonly available equipment without special features. The camera has a lens axis which defines the camera pointing direction.
The camera can receive optical input and form an image onto an image detector where it is converted to an electronic signal which represents the image.
The computer has memory and other sub-systems generally associated with computers and is in communication with the camera, from which it receives signals representing images. The computer is then operable for: processing the electrical signal according to a game scheme to extract information relating to features of the real scene, generating an image, combining that image with the image of the real scene to form a composite image, and transmitting the composite image to a display. It is important to note that when the language "generate an image" is used, it is implicit that the word "signal" follows "image". The computer always handles information in digital form, and when it "generates an image" it really produces a digital signal which represents an image when it is played to a device which converts such signals to optical image patterns.
The display is in communication with the computer and is operable for receiving the composite image signal and presenting it as optical output to the user, the image being aligned with the camera pointing direction to provide a direct direction correspondence with the real scene, giving the user the appearance of looking directly at the scene.
Further, considering the above examples, primary steps to be performed to realize the functions described include: addressing a real scene and forming an image thereof; processing the real scene to extract feature information therefrom; generating an image with a computer in accordance with the features extracted; combining the images according to a game scheme to form a composite image; and displaying the composite image aligned to the real scene.
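Those steps suggest a per-frame loop of roughly the following shape; the camera, scheme, and display here are placeholder objects with assumed interfaces, not equipment or software the disclosure names.

    def game_frame(camera, scheme, display):
        """One pass through the primary steps, in order."""
        real = camera.acquire()               # address the scene, form an image
        features = scheme.process(real)       # extract feature information
        generated = scheme.generate(features) # computer generated image
        composite = scheme.combine(real, generated)
        display.show(composite)               # presented aligned to the scene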
As a great plurality of games is possible, each having its own objectives, one will easily appreciate that the invention is concerned with the interaction of games with a user's environment and is not limited to any particular game scheme presented here. The number of possible ways to process an optically acquired image is limitless and it is not practical to attempt to describe them all here; for illustration, a few representative processes have been chosen to show how optically acquired images can be processed and manipulated to cooperate with computer generated images. The foregoing examples are very specific. They are used only to illustrate how certain features of the invention interact with each other and with the user. The invention does not concern merely flying stork games, jet fighter games, frogging games, golf games, or monster games in particular, but games having computer generated images which interact with optically acquired images of real scenes from a user's environment in general. Various embodiments were specifically designed to illustrate important features of the invention including: firstly, that a computer game can be designed to be responsive to optically acquired images of real scenes from the user's environment; and secondly, that the pointing direction of the device dictates which real scene is being addressed and therefore the game can be responsive to the dynamic, user controlled real scenes about the user's environment.
As good and valuable contributions from game designers who specify new attributes of certain computer generated objects and new ways of processing images of the real scene to realize various versions of this invention are fully anticipated, the claims are intended to protect the spirit of the invention which is to provide computer games which interact with a user's environment.
Although the present invention has been described in considerable detail with clear and concise language and with reference to certain preferred versions thereof, including the best mode anticipated by the inventor, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited by the description of the preferred versions contained herein.

Claims

We claim:
1) A computer game apparatus operable for combining "live" images of real scenes with images generated by the computer.
2) A computer game apparatus of claim 1 operable for combining images of real scenes with images generated by the computer, the images generated by the computer being responsive to features of the images of real scenes.
3) A computer game apparatus of claim 1 operable for combining images of real scenes with images generated by a computer to form a composite image, the composite image being responsive to features of the images of real scenes, the apparatus comprising: a camera; a computer; and a display, said camera having a lens axis which defines a camera pointing direction and being operable for: receiving an optical input, converting that input to an electrical signal processable by a computer, and transmitting that electrical signal to; said computer being operable for: receiving the electrical signal, processing the electrical signal according to a game scheme to extract information relating to features of the real scene, generating an image, combining that image with the image of the real scene to form a composite image, transmitting the composite image to; said display being operable for receiving the transmission of the composite image and presenting it as optical output, said display being aligned to the camera pointing direction to provide a direct direction correspondence with the real scene.
4) A method of combining images of real scenes with images generated by a computer to form a game comprising the steps: a) addressing a real scene and forming an image thereof; b) generating an image with a computer; c) combining the images according to a game scheme to form a composite image; d) displaying the composite image.
5) A computer game method of combining images of real scenes with images generated by a computer, the images generated by the computer being responsive to features of the images of real scenes, comprising the steps: a) addressing a scene; b) acquiring an optical input; c) converting the optical input to an electronic signal; d) processing the electronic signal; e) forming a computer generated image; f) manipulating a computer generated image; g) combining the optically acquired image with the computer generated image to form a composite image; h) displaying the composite image, said addressing a scene step including pointing a camera at a scene; said acquiring an optical input step including receiving an optical input at a camera lens and forming an image of the scene being addressed onto a detector plane; said converting the optical input to an electronic signal step including detecting the light intensity and color in a spatial relationship and producing an electronic signal which corresponds thereto; said processing the electronic signal step including detecting features of the image signal according to a predetermined rule set; said forming a computer generated image step including forming an image according to some predetermined game scheme rule set; said manipulating a computer generated image step including manipulating the computer generated image in accordance with the game scheme rule set; said combining the optically acquired image with the computer generated image to form a composite image step including forming a composite image comprised of information from the optically acquired image and the computer generated image; said displaying the composite image step including presenting the composite image aligned to the real scene.
6) The method of claim 5 where the processing the electronic signal step includes performing a pixel-by-pixel intensity threshold operation to realize an array of binary image intensity levels.
7) The method of claim 5 where the processing the electronic signal step includes performing motion detection to locate regions of the image tending to have motion activity and applying a game scheme in accordance therewith.
8) The method of claim 5 further comprising a step where information relating to the real scene is recalled in accordance with attitude and position measurements of the camera.
9) The method of claim 5 where a plurality of players are in communication with each other.
10) The method of claim 5 where a plurality of players are in communication with a central host computer dispensing game logic routines.
PCT/US1997/006234 1997-04-15 1997-04-15 Computer games having optically acquired images which are combined with computer generated graphics and images WO1998046323A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US1997/006234 WO1998046323A1 (en) 1997-04-15 1997-04-15 Computer games having optically acquired images which are combined with computer generated graphics and images
AU28020/97A AU2802097A (en) 1997-04-15 1997-04-15 Computer games having optically acquired images which are combined with computergenerated graphics and images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US1997/006234 WO1998046323A1 (en) 1997-04-15 1997-04-15 Computer games having optically acquired images which are combined with computer generated graphics and images

Publications (1)

Publication Number Publication Date
WO1998046323A1 true WO1998046323A1 (en) 1998-10-22

Family

ID=22260717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/006234 WO1998046323A1 (en) 1997-04-15 1997-04-15 Computer games having optically acquired images which are combined with computer generated graphics and images

Country Status (2)

Country Link
AU (1) AU2802097A (en)
WO (1) WO1998046323A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10011473A1 (en) * 2000-03-09 2001-09-27 Norbert Hahn Device for cleaning of golf clubs and balls has housing for accommodation of cleaning unit, especially in form of brushes, for cleaning of golf club inside housing
EP1205221A2 (en) * 2000-11-09 2002-05-15 Sony Computer Entertainment Inc. Display control method
EP1260939A2 (en) * 2001-03-21 2002-11-27 Sony Computer Entertainment Inc. Data processing method
US6752720B1 (en) * 2000-06-15 2004-06-22 Intel Corporation Mobile remote control video gaming system
WO2006105686A1 (en) * 2005-04-06 2006-10-12 Eidgenössische Technische Hochschule Zürich Method of executing an application in a mobile device
US20060281511A1 (en) * 2005-05-27 2006-12-14 Nokia Corporation Device, method, and computer program product for customizing game functionality using images
WO2008011515A2 (en) 2006-07-19 2008-01-24 World Golf Tour, Inc. Photographic mapping in a simulation
US20110170747A1 (en) * 2000-11-06 2011-07-14 Cohen Ronald H Interactivity Via Mobile Image Recognition
WO2013032618A1 (en) * 2011-08-30 2013-03-07 Qualcomm Incorporated Indirect position and orientation tracking of mobile platforms via multi-user capture of multiple images for use in augmented or virtual reality gaming systems
US20130274013A1 (en) * 2000-11-06 2013-10-17 Nant Holdings Ip, Llc Image Capture and Identification System and Process
US8792750B2 (en) 2000-11-06 2014-07-29 Nant Holdings Ip, Llc Object information derived from object images
US8824738B2 (en) 2000-11-06 2014-09-02 Nant Holdings Ip, Llc Data capture and identification system and process
EP2764899A3 (en) * 2005-08-29 2014-12-10 Nant Holdings IP, LLC Interactivity via mobile image recognition
US9164723B2 (en) 2011-06-30 2015-10-20 Disney Enterprises, Inc. Virtual lens-rendering for augmented reality lens
US9310892B2 (en) 2000-11-06 2016-04-12 Nant Holdings Ip, Llc Object information derived from object images
US9761053B2 (en) 2013-08-21 2017-09-12 Nantmobile, Llc Chroma key content management systems and methods
US10617568B2 (en) 2000-11-06 2020-04-14 Nant Holdings Ip, Llc Image capture and identification system and process
US10719123B2 (en) 2014-07-15 2020-07-21 Nant Holdings Ip, Llc Multiparty object recognition

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5553864A (en) * 1992-05-22 1996-09-10 Sitrick; David H. User image integration into audiovisual presentation system and methodology

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5553864A (en) * 1992-05-22 1996-09-10 Sitrick; David H. User image integration into audiovisual presentation system and methodology

Cited By (116)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10011473A1 (en) * 2000-03-09 2001-09-27 Norbert Hahn Device for cleaning of golf clubs and balls has housing for accommodation of cleaning unit, especially in form of brushes, for cleaning of golf club inside housing
US6752720B1 (en) * 2000-06-15 2004-06-22 Intel Corporation Mobile remote control video gaming system
US9141714B2 (en) 2000-11-06 2015-09-22 Nant Holdings Ip, Llc Image capture and identification system and process
US9148562B2 (en) 2000-11-06 2015-09-29 Nant Holdings Ip, Llc Image capture and identification system and process
US10772765B2 (en) 2000-11-06 2020-09-15 Nant Holdings Ip, Llc Image capture and identification system and process
US10639199B2 (en) 2000-11-06 2020-05-05 Nant Holdings Ip, Llc Image capture and identification system and process
US10635714B2 (en) 2000-11-06 2020-04-28 Nant Holdings Ip, Llc Object information derived from object images
US10617568B2 (en) 2000-11-06 2020-04-14 Nant Holdings Ip, Llc Image capture and identification system and process
US10509820B2 (en) 2000-11-06 2019-12-17 Nant Holdings Ip, Llc Object information derived from object images
US9154694B2 (en) 2000-11-06 2015-10-06 Nant Holdings Ip, Llc Image capture and identification system and process
US20110170747A1 (en) * 2000-11-06 2011-07-14 Cohen Ronald H Interactivity Via Mobile Image Recognition
US10500097B2 (en) 2000-11-06 2019-12-10 Nant Holdings Ip, Llc Image capture and identification system and process
US10095712B2 (en) 2000-11-06 2018-10-09 Nant Holdings Ip, Llc Data capture and identification system and process
US20130274013A1 (en) * 2000-11-06 2013-10-17 Nant Holdings Ip, Llc Image Capture and Identification System and Process
US8712193B2 (en) 2000-11-06 2014-04-29 Nant Holdings Ip, Llc Image capture and identification system and process
US8718410B2 (en) 2000-11-06 2014-05-06 Nant Holdings Ip, Llc Image capture and identification system and process
US8792750B2 (en) 2000-11-06 2014-07-29 Nant Holdings Ip, Llc Object information derived from object images
US8798322B2 (en) 2000-11-06 2014-08-05 Nant Holdings Ip, Llc Object information derived from object images
US8798368B2 (en) 2000-11-06 2014-08-05 Nant Holdings Ip, Llc Image capture and identification system and process
US8817045B2 (en) * 2000-11-06 2014-08-26 Nant Holdings Ip, Llc Interactivity via mobile image recognition
US8824738B2 (en) 2000-11-06 2014-09-02 Nant Holdings Ip, Llc Data capture and identification system and process
US8837868B2 (en) 2000-11-06 2014-09-16 Nant Holdings Ip, Llc Image capture and identification system and process
US8842941B2 (en) 2000-11-06 2014-09-23 Nant Holdings Ip, Llc Image capture and identification system and process
US8849069B2 (en) 2000-11-06 2014-09-30 Nant Holdings Ip, Llc Object information derived from object images
US8855423B2 (en) * 2000-11-06 2014-10-07 Nant Holdings Ip, Llc Image capture and identification system and process
US8861859B2 (en) 2000-11-06 2014-10-14 Nant Holdings Ip, Llc Image capture and identification system and process
US8867839B2 (en) 2000-11-06 2014-10-21 Nant Holdings Ip, Llc Image capture and identification system and process
US8873891B2 (en) 2000-11-06 2014-10-28 Nant Holdings Ip, Llc Image capture and identification system and process
US8885983B2 (en) 2000-11-06 2014-11-11 Nant Holdings Ip, Llc Image capture and identification system and process
US8885982B2 (en) 2000-11-06 2014-11-11 Nant Holdings Ip, Llc Object information derived from object images
US10089329B2 (en) 2000-11-06 2018-10-02 Nant Holdings Ip, Llc Object information derived from object images
US8923563B2 (en) 2000-11-06 2014-12-30 Nant Holdings Ip, Llc Image capture and identification system and process
US8938096B2 (en) 2000-11-06 2015-01-20 Nant Holdings Ip, Llc Image capture and identification system and process
US8948459B2 (en) 2000-11-06 2015-02-03 Nant Holdings Ip, Llc Image capture and identification system and process
US8948460B2 (en) 2000-11-06 2015-02-03 Nant Holdings Ip, Llc Image capture and identification system and process
US8948544B2 (en) 2000-11-06 2015-02-03 Nant Holdings Ip, Llc Object information derived from object images
US9014513B2 (en) 2000-11-06 2015-04-21 Nant Holdings Ip, Llc Image capture and identification system and process
US9014515B2 (en) 2000-11-06 2015-04-21 Nant Holdings Ip, Llc Image capture and identification system and process
US9014516B2 (en) 2000-11-06 2015-04-21 Nant Holdings Ip, Llc Object information derived from object images
US9014514B2 (en) 2000-11-06 2015-04-21 Nant Holdings Ip, Llc Image capture and identification system and process
US9014512B2 (en) 2000-11-06 2015-04-21 Nant Holdings Ip, Llc Object information derived from object images
US9020305B2 (en) 2000-11-06 2015-04-28 Nant Holdings Ip, Llc Image capture and identification system and process
US9025813B2 (en) 2000-11-06 2015-05-05 Nant Holdings Ip, Llc Image capture and identification system and process
US9025814B2 (en) 2000-11-06 2015-05-05 Nant Holdings Ip, Llc Image capture and identification system and process
US9031290B2 (en) 2000-11-06 2015-05-12 Nant Holdings Ip, Llc Object information derived from object images
US9031278B2 (en) 2000-11-06 2015-05-12 Nant Holdings Ip, Llc Image capture and identification system and process
US9036947B2 (en) 2000-11-06 2015-05-19 Nant Holdings Ip, Llc Image capture and identification system and process
US9036948B2 (en) 2000-11-06 2015-05-19 Nant Holdings Ip, Llc Image capture and identification system and process
US9036949B2 (en) 2000-11-06 2015-05-19 Nant Holdings Ip, Llc Object information derived from object images
US9036862B2 (en) 2000-11-06 2015-05-19 Nant Holdings Ip, Llc Object information derived from object images
US9046930B2 (en) 2000-11-06 2015-06-02 Nant Holdings Ip, Llc Object information derived from object images
US9076077B2 (en) 2000-11-06 2015-07-07 Nant Holdings Ip, Llc Interactivity via mobile image recognition
US9087270B2 (en) 2000-11-06 2015-07-21 Nant Holdings Ip, Llc Interactivity via mobile image recognition
US9087240B2 (en) 2000-11-06 2015-07-21 Nant Holdings Ip, Llc Object information derived from object images
US9104916B2 (en) 2000-11-06 2015-08-11 Nant Holdings Ip, Llc Object information derived from object images
US9110925B2 (en) 2000-11-06 2015-08-18 Nant Holdings Ip, Llc Image capture and identification system and process
US9116920B2 (en) 2000-11-06 2015-08-25 Nant Holdings Ip, Llc Image capture and identification system and process
US10080686B2 (en) 2000-11-06 2018-09-25 Nant Holdings Ip, Llc Image capture and identification system and process
US9135355B2 (en) 2000-11-06 2015-09-15 Nant Holdings Ip, Llc Image capture and identification system and process
US9844468B2 (en) 2000-11-06 2017-12-19 Nant Holdings Ip Llc Image capture and identification system and process
US10509821B2 (en) 2000-11-06 2019-12-17 Nant Holdings Ip, Llc Data capture and identification system and process
US9154695B2 (en) 2000-11-06 2015-10-06 Nant Holdings Ip, Llc Image capture and identification system and process
US9152864B2 (en) 2000-11-06 2015-10-06 Nant Holdings Ip, Llc Object information derived from object images
US9844466B2 (en) 2000-11-06 2017-12-19 Nant Holdings Ip Llc Image capture and identification system and process
US9170654B2 (en) 2000-11-06 2015-10-27 Nant Holdings Ip, Llc Object information derived from object images
US9844469B2 (en) 2000-11-06 2017-12-19 Nant Holdings Ip Llc Image capture and identification system and process
US9182828B2 (en) 2000-11-06 2015-11-10 Nant Holdings Ip, Llc Object information derived from object images
US9235600B2 (en) 2000-11-06 2016-01-12 Nant Holdings Ip, Llc Image capture and identification system and process
US9244943B2 (en) 2000-11-06 2016-01-26 Nant Holdings Ip, Llc Image capture and identification system and process
US9262440B2 (en) 2000-11-06 2016-02-16 Nant Holdings Ip, Llc Image capture and identification system and process
US9288271B2 (en) 2000-11-06 2016-03-15 Nant Holdings Ip, Llc Data capture and identification system and process
US9310892B2 (en) 2000-11-06 2016-04-12 Nant Holdings Ip, Llc Object information derived from object images
US9311554B2 (en) 2000-11-06 2016-04-12 Nant Holdings Ip, Llc Image capture and identification system and process
US9311552B2 (en) 2000-11-06 2016-04-12 Nant Holdings IP, LLC. Image capture and identification system and process
US9311553B2 (en) 2000-11-06 2016-04-12 Nant Holdings IP, LLC. Image capture and identification system and process
US9317769B2 (en) 2000-11-06 2016-04-19 Nant Holdings Ip, Llc Image capture and identification system and process
US9324004B2 (en) 2000-11-06 2016-04-26 Nant Holdings Ip, Llc Image capture and identification system and process
US9330328B2 (en) 2000-11-06 2016-05-03 Nant Holdings Ip, Llc Image capture and identification system and process
US9330326B2 (en) 2000-11-06 2016-05-03 Nant Holdings Ip, Llc Image capture and identification system and process
US9330327B2 (en) 2000-11-06 2016-05-03 Nant Holdings Ip, Llc Image capture and identification system and process
US9336453B2 (en) 2000-11-06 2016-05-10 Nant Holdings Ip, Llc Image capture and identification system and process
US9342748B2 (en) 2000-11-06 2016-05-17 Nant Holdings Ip. Llc Image capture and identification system and process
US9360945B2 (en) 2000-11-06 2016-06-07 Nant Holdings Ip Llc Object information derived from object images
US9536168B2 (en) 2000-11-06 2017-01-03 Nant Holdings Ip, Llc Image capture and identification system and process
US9844467B2 (en) 2000-11-06 2017-12-19 Nant Holdings Ip Llc Image capture and identification system and process
US9578107B2 (en) 2000-11-06 2017-02-21 Nant Holdings Ip, Llc Data capture and identification system and process
US9824099B2 (en) 2000-11-06 2017-11-21 Nant Holdings Ip, Llc Data capture and identification system and process
US9613284B2 (en) 2000-11-06 2017-04-04 Nant Holdings Ip, Llc Image capture and identification system and process
US9808376B2 (en) 2000-11-06 2017-11-07 Nant Holdings Ip, Llc Image capture and identification system and process
US9785651B2 (en) 2000-11-06 2017-10-10 Nant Holdings Ip, Llc Object information derived from object images
US9785859B2 (en) 2000-11-06 2017-10-10 Nant Holdings Ip Llc Image capture and identification system and process
US9805063B2 (en) 2000-11-06 2017-10-31 Nant Holdings Ip Llc Object information derived from object images
EP1205221A3 (en) * 2000-11-09 2004-04-14 Sony Computer Entertainment Inc. Display control method
EP1205221A2 (en) * 2000-11-09 2002-05-15 Sony Computer Entertainment Inc. Display control method
EP1260939A2 (en) * 2001-03-21 2002-11-27 Sony Computer Entertainment Inc. Data processing method
EP1260939A3 (en) * 2001-03-21 2006-08-09 Sony Computer Entertainment Inc. Data processing method
US7145569B2 (en) 2001-03-21 2006-12-05 Sony Computer Entertainment Inc. Data processing method
WO2006105686A1 (en) * 2005-04-06 2006-10-12 Eidgenössische Technische Hochschule Zürich Method of executing an application in a mobile device
US8226011B2 (en) 2005-04-06 2012-07-24 Eidgenoessische Technische Hochshcule Zuerich Method of executing an application in a mobile device
US9566522B2 (en) * 2005-05-27 2017-02-14 Nokia Technologies Oy Device, method, and computer program product for customizing game functionality using images
US20060281511A1 (en) * 2005-05-27 2006-12-14 Nokia Corporation Device, method, and computer program product for customizing game functionality using images
EP2764899A3 (en) * 2005-08-29 2014-12-10 Nant Holdings Ip, Llc Interactivity via mobile image recognition
US9600935B2 (en) 2005-08-29 2017-03-21 Nant Holdings Ip, Llc Interactivity with a mixed reality
US10463961B2 (en) 2005-08-29 2019-11-05 Nant Holdings Ip, Llc Interactivity with a mixed reality
US10617951B2 (en) 2005-08-29 2020-04-14 Nant Holdings Ip, Llc Interactivity with a mixed reality
WO2008011515A2 (en) 2006-07-19 2008-01-24 World Golf Tour, Inc. Photographic mapping in a simulation
EP2064697A4 (en) * 2006-07-19 2015-10-28 World Golf Tour, Inc. Photographic mapping in a simulation
US9164723B2 (en) 2011-06-30 2015-10-20 Disney Enterprises, Inc. Virtual lens-rendering for augmented reality lens
WO2013032618A1 (en) * 2011-08-30 2013-03-07 Qualcomm Incorporated Indirect position and orientation tracking of mobile platforms via multi-user capture of multiple images for use in augmented or virtual reality gaming systems
US9761053B2 (en) 2013-08-21 2017-09-12 Nantmobile, Llc Chroma key content management systems and methods
US10008047B2 (en) 2013-08-21 2018-06-26 Nantmobile, Llc Chroma key content management systems and methods
US10019847B2 (en) 2013-08-21 2018-07-10 Nantmobile, Llc Chroma key content management systems and methods
US10255730B2 (en) 2013-08-21 2019-04-09 Nantmobile, Llc Chroma key content management systems and methods
US10733808B2 (en) 2013-08-21 2020-08-04 Nantmobile, Llc Chroma key content management systems and methods
US11495001B2 (en) 2013-08-21 2022-11-08 Nantmobile, Llc Chroma key content management systems and methods
US10719123B2 (en) 2014-07-15 2020-07-21 Nant Holdings Ip, Llc Multiparty object recognition

Also Published As

Publication number Publication date
AU2802097A (en) 1998-11-11

Similar Documents

Publication Title
US20230302359A1 (en) Reconfiguring reality using a reality overlay device
CN109478345B (en) Simulation system, processing method, and information storage medium
US6155926A (en) Video game system and method with enhanced three-dimensional character and background control
US6267673B1 (en) Video game system with state of next world dependent upon manner of entry from previous world via a portal
US6139433A (en) Video game system and method with enhanced three-dimensional character and background control due to environmental conditions
WO1998046323A1 (en) Computer games having optically acquired images which are combined with computer generated graphics and images
US20180191990A1 (en) Projection system
TWI469813B (en) Tracking groups of users in motion capture system
EP0844587B1 (en) Image processor, image processing method, game machine and recording medium
Thomas A survey of visual, mixed, and augmented reality gaming
US7847808B2 (en) Photographic mapping in a simulation
US6139434A (en) Three-dimensional image processing apparatus with enhanced automatic and user point of view control
US7071914B1 (en) User input device and method for interaction with graphic images
JP4425274B2 (en) Method and apparatus for adjusting the view of a scene being displayed according to the motion of the head being tracked
US6570569B1 (en) Image processing device and image processing method
EP1047022B1 (en) Image generating device
US20150309571A1 (en) Eye tracking enabling 3d viewing on conventional 2d display
Wolf Space in the Video Game
US20090318228A1 (en) Apparatus and method of interaction with a data processor
EP1431922A2 (en) Image displaying device, image processing device and image displaying system
CN106664401A (en) Systems and methods for providing feedback to a user while interacting with content
KR20000064948A (en) Image processing apparatus and image processing method
US20200086219A1 (en) Augmented reality-based sports game simulation system and method thereof
JP4282112B2 (en) Virtual object control method, virtual object control apparatus, and recording medium
CN114470775A (en) Object processing method, device, equipment and storage medium in virtual scene

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA CH JP KR NZ US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
121 EP: The EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998543841

Format of ref document f/p: F

122 EP: PCT application non-entry in European phase
NENP Non-entry into the national phase

Ref country code: CA