WO2002030535A1 - Method of displaying and evaluating motion data used in motion game apparatus - Google Patents

Method of displaying and evaluating motion data used in motion game apparatus

Info

Publication number
WO2002030535A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
original actor
displaying
position points
game player
Application number
PCT/KR2001/001710
Other languages
French (fr)
Inventor
Gerard Jounghyun Kim
Ungyeon Yang
Euijae Ahn
Original Assignee
Dotace.Com Co., Ltd.
Application filed by Dotace.Com Co., Ltd.
Priority to AU2001294329A
Publication of WO2002030535A1

Classifications

    • A63F13/10
    • A63F13/25 Output arrangements for video game devices
    • A63F13/45 Controlling the progress of the video game
    • A63F13/212 Input arrangements for video game devices using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • A63F13/213 Input arrangements for video game devices comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F13/426 Processing input control signals of video game devices by mapping the input signals into game commands, involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F13/428 Processing input control signals of video game devices by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F2300/1012 Features of games characterized by input arrangements for converting player-generated signals into game device control signals involving biosensors worn by the player, e.g. for measuring heart beat, limb activity
    • A63F2300/1087 Features of games characterized by input arrangements comprising photodetecting means, e.g. a camera
    • A63F2300/1093 Features of games characterized by input arrangements comprising photodetecting means using visible light
    • A63F2300/8005 Features of games specially adapted for executing a specific type of game: Athletics


Abstract

Methods of displaying and evaluating motion data in a motion game apparatus are disclosed, together with a display method that helps a game player easily follow the motion of a dancer. The evaluation comprises a first step of calculating the reference sensor positions that must be detected when the game player follows the motion of the dancer; a second step of calculating the corresponding sensor position in the image of each camera; a third step of establishing a target region of each camera having a certain margin around that sensor position; and a fourth step of judging whether the sensor of the game player is detected in the target region.

Description

METHOD OF DISPLAYING AND EVALUATING MOTION DATA USED
IN MOTION GAME APPARATUS
TECHNICAL FIELD
The present invention relates to a method of playing and evaluating motion data in a motion game apparatus, and more particularly to methods of playing motion data so that a game player can easily follow the motion of an original actor, and of evaluating the motion of the game player following the motion data.
BACKGROUND ART
Recently, the so-called DDR game apparatus has been in fashion. It has a music play device and a floor pad that senses the foot actions of a game player. In this DDR game apparatus, the foot action the game player must follow is indicated to the game player, by either a monitor or a foot-action direction on the floor pad, together with music played by the music play device, and the game player performs the foot action with the indicated timing. As an apparatus further advanced than the DDR game apparatus, another game apparatus has been introduced in which a sensor for sensing hand-stretching actions as well as the foot actions of the game player is installed at certain positions; it can check both foot and hand actions using the sensors.
However, since the above conventional game apparatuses can only sense foot actions, or foot actions and hand-stretching actions at designated positions, they can neither display the natural motion of the original actor nor evaluate the overall motion of the game player following that motion.
DISCLOSURE OF INVENTION
It is therefore one object of the present invention to solve the above problems by providing a method of playing motion data with which a game player can easily follow the motion of an original actor in a motion game apparatus displaying the motion of the original actor.
Furthermore, it is another object of the present invention to provide a method of evaluating motion data capable of rapidly evaluating, in real time, the motion of a game player following the motion of an original actor in a motion game apparatus displaying the motion of the original actor. The one object of the present invention can be achieved by a method of displaying motion data in a motion game apparatus having information on a basic frame displaying a main motion of an original actor, and playing the motion data of the original actor, consisting of a plurality of frames, on a display device, comprising: a first step of setting play position points Xf, Yf and Zf on the display device for playing the continuous action of the original actor, and initial position points Xi, Yi and Zi on the display device for displaying beforehand the basic frame that will be played after a predetermined time Δt; a second step of displaying the continuous action of the original actor at the play position points Xf, Yf and Zf based on the present time t, and simultaneously displaying the basic frame at the initial position points Xi, Yi and Zi after drawing out the basic frame that will be displayed at t + Δt; and a third step of displaying the basic frame while gradually moving it from the initial position points Xi, Yi and Zi to the play position points Xf, Yf and Zf as time goes by.
Furthermore, the one object of the present invention can be achieved by a method of displaying motion data in a motion game apparatus having information on a basic frame displaying a main motion of an original actor, the motion data consisting of a plurality of frames, having play position points Xf, Yf and Zf on a display device for playing the continuous action of the original actor and initial position points Xi, Yi and Zi on the display device for displaying beforehand the basic frame that will be played after a predetermined time Δt, and displaying the motion data of the original actor on the display device, comprising: a first step of continuously displaying the continuous action of the original actor at the play position points Xf, Yf and Zf based on a present time t, and simultaneously displaying the basic frame at the initial position points Xi, Yi and Zi after drawing out the basic frame that will be displayed at t + Δt; and a second step of displaying the basic frame while gradually moving it from the initial position points Xi, Yi and Zi to the play position points Xf, Yf and Zf as time goes by. Moreover, the other object of the present invention can be achieved by a method of evaluating the motion of a game player following the motion data of an original actor displayed on a display device in a motion game apparatus having the information on a basic frame displaying a main motion of the original actor, the motion data consisting of a plurality of frames, and setting play position points Xf, Yf and Zf on the display device for playing the continuous action of the original actor and initial position points Xi, Yi and Zi on the display device for displaying beforehand the basic frame that will be played after a predetermined time Δt, comprising: a first step of storing the retargeted three-dimensional motion data of the original actor obtained by converting the three-dimensional motion data of the original actor to reflect the body size of the game player; a second step of continuously displaying the continuous action of the original actor at the play position points Xf, Yf and Zf based on the present time t using the retargeted three-dimensional motion data of the original actor, and simultaneously displaying the basic frame at the initial position points Xi, Yi and Zi after drawing out the basic frame that will be displayed at t + Δt; a third step of displaying the basic frame while gradually moving it from the initial position points Xi, Yi and Zi to the play position points Xf, Yf and Zf as time goes by; a fourth step of extracting and storing the three-dimensional motion data of the game player learning the same motion as the original actor by following the displayed motion of the original actor; a fifth step of calculating the coordinate values of the reference sensors for the original actor, corresponding to the sensor positions attached to the game player, from the basic frame displayed at the play position points Xf, Yf and Zf in the third step; and a sixth step of comparing the three-dimensional motion data of the game player with the coordinate values of the reference sensors.
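For illustration, the sliding display of the second and third steps above can be sketched as follows: each basic frame due within the look-ahead window between t and t + Δt is drawn at a position interpolated between the initial points (Xi, Yi, Zi) and the play points (Xf, Yf, Zf), reaching the play points exactly at its play time. This is a minimal Python sketch under assumed conventions; all names and the example positions are illustrative and do not come from the patent.

    # Minimal sketch of the sliding guide display: each basic frame that
    # will play within [t, t + dt] is drawn at a position interpolated
    # from p_init = (Xi, Yi, Zi) toward p_play = (Xf, Yf, Zf).
    def guide_positions(basic_frame_times, t, dt, p_init, p_play):
        out = []
        for tf in basic_frame_times:
            if t <= tf <= t + dt:
                s = 1.0 - (tf - t) / dt   # 0 on first appearance, 1 at play time
                pos = tuple(pi + s * (pf - pi)
                            for pi, pf in zip(p_init, p_play))
                out.append((tf, pos))
        return out

    # Example with basic frames 17, 19 and 21 visible at t = 16 (cf. FIG. 5):
    print(guide_positions([17, 19, 21], t=16, dt=6,
                          p_init=(8.0, 2.0, 0.0), p_play=(0.0, 0.0, 0.0)))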
The nature, utility, and further features of the present invention will be clearly apparent from the following detailed description with respect to preferred embodiments of the invention when read in conjunction with the accompanying drawings briefly described below.
BRIEF DESCRIPTION OF THE DRAWINGS
Further objects and other advantages of the present invention will be apparent from the following description in conjunction with the attached drawings, in which:
FIG. 1 is a configuration diagram of a motion game apparatus of the present invention.
FIG. 2 is a block diagram of a motion game apparatus applied to the present invention.
FIG. 3 is a flow chart showing the retargeting processing sequence.
FIG. 4 is a frame structure diagram for describing one example of an image frame configuration of a three-dimensional original actor according to the present invention.
FIG. 5 and FIG. 6 are description drawings for describing a method of displaying the preparation action of an original actor.
FIG. 7 is a flow chart showing the overall operation of displaying and evaluating a motion using the motion frames of FIG. 4.
FIG. 8 is a flow chart describing a method of evaluating a motion of a game player.
FIG. 9a to FIG. 9d are flow charts showing the performance sequence of the present invention in detail.
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, preferred embodiments of the present invention will be explained in detail with reference to the accompanying drawings.
FIG. 1 is a configuration diagram of a motion game apparatus of the present invention. The motion game apparatus includes a plurality of cameras 100, a display device 200 displaying the motion of an original actor, an input section receiving input from a game player, and a sound device. The game player attaches a plurality of optical sensors (not shown) to his body and moves within a region that can be sensed by the cameras. The cameras 100 monitor the motion of the game player using the optical sensors attached to the body of the game player.
Preferably, the optical sensors are attached to all articulation parts of the game player if possible. More practically, the number of optical sensors is selected in view of the processing rate, because the number of image processing operations increases as the number of optical sensors increases. Similarly, while more precise data can be obtained as the number of cameras increases, it is preferable to select the number of cameras appropriately in view of the same processing-cost problem as for the sensors.
FIG. 2 is a block diagram of a motion game apparatus to which the present invention is applied. The motion game apparatus of the present invention includes an input section receiving an input from a game player, a control section, sensors/cameras, an image output device, a music DB, a character/stage DB, a sound output device, and an image memory. The control section has a motion selection section outputting a motion selection signal in accordance with an output of the input section, a motion data storage section storing a plurality of motion data of an original actor, a retargeting processing section retargeting the motion of the original actor, a retargeted original actor motion data storage section storing the retargeted motion data, a game player motion data storage section storing the motion data of the game player, a motion evaluation section comparing the retargeted motion data of the original actor with the motion data of the game player and evaluating the result, a motion capture section capturing the motion of the game player, and a resultant image output section displaying the output of the motion evaluation section on the image output device.
The operation flow of the motion game apparatus in FIG. 2 will now be explained. Firstly, the game player selects one of the plurality of motion data using the input section. The motion selection section selects that motion data from the motion data storage section and then inputs the selected data to the retargeting processing section.
The retargeting processing section receives a body size from the game player, or senses the body size by image processing using the cameras, extracts the body-size difference between the game player and the original actor, carries out the retargeting process for the motion data of the original actor, and stores the retargeted motion data of the original actor in the retargeted original actor motion data storage section. The retargeting is described in detail with reference to FIG. 3 below. The retargeted motion of the original actor is provided to the game player through the image processing section, the image memory, and the image output device. At this time, the basic frames for the main motions to come are provided to the game player by a frame storage section and a sliding control section, so that the game player can easily follow the motion of the original actor. The motion of the game player is stored in the game player motion data storage section in real time. The motion evaluation section then evaluates the motion of the game player after comparing it with the retargeted motion data of the original actor. Finally, the result is provided to the game player by the image output device.
To solve problems that may occur due to the body-size difference between the original actor and the game player, retargeting must be performed prior to analyzing the motion. That is to say, retargeting is a process that removes the body-size difference.
The processing sequence of the retargeting will be explained with reference to FIG. 3.
Firstly, two data on the body sizes of the original actor and the game player are received, either by input from a user or by image capture from the cameras using image processing. The two data are analyzed, and the motion data of the original actor is converted so as to fit the body size of the game player. In general, the three-dimensional motion data of the original actor is magnified or reduced about a specific part (usually the waist) of the three-dimensional motion data of the game player, and the center of the magnified or reduced three-dimensional motion data of the original actor is then moved. After moving the center, finally, the retargeted motion data of the original actor may be obtained by performing a conversion that satisfies a restriction condition. The reason the conversion must satisfy a restriction condition is that, if the three-dimensional motion data of a basketball player 2 m tall is simply reduced about his waist to fit an elementary pupil 1 m tall, the feet float in the air. Therefore, the restriction condition that "the feet must touch the ground" is required, and the three-dimensional motion data of the basketball player must be converted with this restriction condition taken into account.
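As a concrete illustration of this sequence, the following sketch scales the actor's joint positions about the waist by the body-size ratio and then shifts the result so its lowest point touches the ground plane. It is a simplified sketch of the idea only: the uniform scaling, the single global ground constraint, and the array layout and names are assumptions, not the patented procedure.

    import numpy as np

    # Hedged sketch of retargeting: magnify/reduce the original actor's
    # 3-D motion data about the waist by the body-size ratio, then move
    # it so the lowest point over the whole motion touches the ground,
    # approximating the restriction "the feet must touch the ground".
    def retarget(frames, waist_index, actor_height, player_height):
        """frames: ndarray of shape (n_frames, n_joints, 3), Y axis up."""
        ratio = player_height / actor_height
        waist = frames[:, waist_index:waist_index + 1, :]  # per-frame center
        out = waist + ratio * (frames - waist)             # scale about waist
        out[..., 1] -= out[..., 1].min()                   # ground contact
        return out

Without the final shift, reducing a 2 m actor's data to fit a 1 m player about the waist would leave the scaled figure's feet floating above the ground, which is exactly the situation the restriction condition prevents.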
FIG. 4 is a frame structure diagram describing one example of an image frame configuration of a three-dimensional original actor according to the present invention. In FIG. 4, each of the boxes designates a motion frame stored at unit time intervals, and each of the numbers on the boxes is a frame number. The image frames of an original actor consist of a start frame, an end frame and motion frames. The motion frames are divided into basic motion frames and common motion frames. A special motion (namely, a main motion of the dance actions) among these motion frames is defined as a basic motion frame. The information on the basic motion frames may be marked by an expert (content maker), using a management program marking "a specific pose", at the time the motion data is created or stored. The information on the basic motion frames may be included in a portion of the motion frames displaying the motion of the original actor, or may be kept as an additional file.
A motion frame is generally stored in a file format for displaying a motion image, for example a BVH file. The BVH file format first describes a tree structure consisting of the several sensors designating each part of the body, and then lists, for the sensor corresponding to each part of the body, its angle and coordinate values versus time. The coordinates versus time can therefore be read from such a BVH file.
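For reference, the coordinate-versus-time table of a BVH file can be read with a few lines of Python. This sketch skips the HIERARCHY section and recovers only the MOTION section (the frame time plus one row of channel values per frame); it assumes a well-formed file.

    # Hedged sketch: read the coordinate-versus-time data from a BVH file.
    # A BVH file holds a HIERARCHY section (the joint tree and channels)
    # followed by a MOTION section: "Frames: N", "Frame Time: s", then one
    # line of channel values per frame.
    def read_bvh_motion(path):
        with open(path) as f:
            lines = [ln.strip() for ln in f]
        start = lines.index("MOTION")
        n_frames = int(lines[start + 1].split()[-1])      # "Frames: N"
        frame_time = float(lines[start + 2].split()[-1])  # "Frame Time: 0.0333"
        frames = [[float(v) for v in ln.split()]
                  for ln in lines[start + 3:start + 3 + n_frames]]
        return frame_time, frames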
FIG. 5 and FIG. 6 are description drawings describing a method of displaying the preparation action of an original actor. In FIG. 5, (a) shows the original actor motion frames displaying the motion of the original actor, (b) shows the basic motion frames carrying the information on the basic motions among the motion frames of the original actor, and (c) shows the game player motion frames storing the motion of a game player following the motion of the original actor. As shown in FIG. 5(b), the 5th, 10th, 17th, 19th, 21st, 25th, 27th, 28th, 31st, 37th, 38th, 40th, 42nd and 48th frames are designated as basic frames. Since the present time t is the point in time at which the game player must follow the motion of the original actor, the real-time motion of the original actor is provided. Furthermore, Δt is the look-ahead time interval within which basic motion frames are detected for display; in FIG. 5, Δt is set to a span of six frames. According to FIG. 5, the 16th motion frame of the original actor is provided to the game player at the present time t, and the basic motion frames between t and t + Δt (the 17th, 19th and 21st frames) are simultaneously displayed, to show them to the game player beforehand. FIG. 6 is a description drawing describing a method of displaying these frames on a display device. As shown in FIG. 6(a), the continuous motion of the original actor that the game player must currently follow (the 16th frame in FIG. 5) is displayed in one region 61 of the screen, and the main motions the game player will follow at the next stage (the 17th, 19th and 21st frames in FIG. 5) are sequentially displayed in real time, according to time, in the other regions 62, 63 and 64 of the screen. By using this display method, the game player identifies the next main motions beforehand, so that he can easily follow the motions of the original actor.
As shown in FIG. 6(a), when a coordinate system is set on the screen, if the eye height He of the game player is greater than the distance Dy between the foot position of the game player and the center position of the screen of the display device, the coordinate value Yi on the Y axis where the guide action starts and the coordinate value Yf on the Y axis of the screen where the continuous actions are displayed in real time must satisfy the following equation (1).
Yi > Yf    (1)
This further improves the three-dimensional quality of the image by displaying the guide action in view of the eye height of the game player.
Similarly, if the eye height He of the game player is less than the distance Dy between the foot position of the game player and the center position of the screen of the display device, the coordinate value Yi on the Y axis where the guide action starts and the coordinate value Yf on the Y axis of the screen where the continuous actions are displayed in real time must satisfy the following equation (2).
Yi < Yf    (2)
At this time, the direction of the vector formed by the starting points Xi, Yi and Zi and the end points Xf, Yf and Zf can easily be determined by analyzing motion data, such as BVH data of the original actor, that includes the position values of the sensors attached to the articulation parts of the original actor. Therefore, by displaying the guide action in view of the direction of this vector, the guide action can be displayed more realistically. Furthermore, for the game player's convenience, it is preferable to locate the position points Xf, Yf and Zf, where the continuous actions of the original actor are displayed, at the center of the screen. Moreover, it is preferable to locate the start points Xi, Yi and Zi appropriately in view of the number of preparation action frames to be displayed and the horizontal size of the screen. The eye height He of the game player can easily be calculated from the height of the game player using a statistical ratio of height to eye position. Here, the height of the game player is either input by the game player or computed from a camera image.
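A sketch of how equations (1) and (2) might be applied when placing the guide action follows; the offset magnitude and the statistical eye-height ratio (roughly 0.93 of standing height) are illustrative assumptions, not values taken from the patent.

    # Sketch of the placement rule of equations (1) and (2): the guide
    # action starts above the play coordinate Yf when the player's eyes
    # are above the screen center (He > Dy), and below it otherwise.
    def eye_height(player_height, ratio=0.93):
        return ratio * player_height       # assumed statistical ratio

    def guide_start_y(y_f, he, d_y, offset=0.2):
        if he > d_y:                       # equation (1): Yi > Yf
            return y_f + offset
        if he < d_y:                       # equation (2): Yi < Yf
            return y_f - offset
        return y_f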
FIG. 7 is a flow chart showing the overall operation of displaying and evaluating the motion using the motion frames of FIG. 4. Motion data, a music DB entry and a character DB entry are selected through the input section by the game player (step S71). Next, the body size of the original actor is compared with that of the game player, using the body size input by the game player or obtained by analyzing the camera images (step S72). The three-dimensional image coordinate value data of the original actor is converted using this comparison value (step S73). At this time, since the number and positions of the sensors attached to the game player differ from the number and positions of the sensors that were attached to the original actor for capturing his motions, the targets to be compared must first be reconciled so that the evaluation is meaningful. Accordingly, the position data of the reference sensors is calculated from the motion data of the original actor, based on the number and positions of the sensors attached to the game player (step S74). Thereafter, the three-dimensional motion data of the original actor is displayed in real time, and the guide action is simultaneously displayed (step S75). While the three-dimensional motion of the original actor may be displayed without retargeting, it is preferable to display it after retargeting. When the three-dimensional motion of the original actor is displayed without retargeting, the motion of the original actor itself can be displayed faithfully, but it is difficult to evaluate the motion of the game player against that of the original actor exactly. When the retargeted three-dimensional motion of the original actor is displayed, the motion of the game player can easily be compared with the motion of the original actor. Namely, when the motion of the game player is displayed overlapped with the retargeted motion of the original actor, there is the advantage that the game player can easily see how his motion differs from the motion of the original actor.
The motion of the game player following the motion of the original actor displayed on the screen is stored in real time, frame by frame (step S76). At this time, whether or not the three-dimensional motion frame of the original actor displayed at time t is a basic motion frame is determined (step S77). In the case of a basic motion frame, the motion frame of the game player identified in real time is compared with the position data of the reference sensors calculated at step S74, and evaluated (step S78). Steps S76 and S77 may be performed in either order. Step S74 may also be performed after step S77: that is, whether or not the three-dimensional motion frame of the original actor is a basic motion frame is determined first, the position data of the reference sensors is calculated only for the basic motion frames, and the calculated values are then compared with the sensor positions obtained from the motion of the game player and evaluated. When either step S76 or step S74 is performed after step S77, only the motion frames of the game player following the basic motion frames of the original actor are stored, so that only the position data of the reference sensors needs to be computed.
Thereafter, whether or not the frame is the last frame is determined (step S79). If it is not the last frame, the process returns to step S75; if it is the last frame, the final result is displayed using the resultant values of the comparison and evaluation (step S80).
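Put together, the loop of steps S75 to S80 can be sketched as below; every helper passed in (display, capture, evaluate) is an assumed placeholder standing in for the corresponding section of FIG. 2, not an interface defined by the patent.

    # Pseudocode-level sketch of the FIG. 7 loop (steps S75 to S80); the
    # display, capture and evaluate callables are assumed placeholders.
    def run_game(frames, basic_frame_ids, ref_positions,
                 display, capture, evaluate):
        results = {}
        for i, frame in enumerate(frames):
            display(frame)                    # step S75: motion + guide action
            player = capture()                # step S76: store player frame
            if i in basic_frame_ids:          # step S77: basic motion frame?
                results[i] = evaluate(player, ref_positions[i])  # step S78
        return results                        # shown as final result, step S80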
Since the comparison and evaluation of whether the motion of the game player agrees with a basic motion frame of the original actor must be performed in real time within a short period, it is carried out by the procedure proposed in FIG. 8, which reduces the time required for the image evaluation procedure.
1. The step of computing the position values of the reference sensors (step S81)
As described above, since the number and positions of the sensors attached to the original actor for capturing his motion differ from those of the sensors attached to the game player, the comparison targets must first be reconciled. Accordingly, the three-dimensional position values of the reference sensors are computed from the reference motion data of the original actor, based on the number and positions of the sensors attached to the game player. These are defined as the position values of the reference sensors. To simplify the description, it is assumed that the reference position value corresponding to sensor 1 attached to the game player is (X1, Y1, Z1).
2. The step of calculating the two-dimensional coordinate values corresponding to each of the cameras (step S82)
As described above, a plurality of cameras are used to identify the three-dimensional positions of the sensors attached to the game player. The three-dimensional image is obtained by synthesizing the two-dimensional images obtained from the plurality of cameras.
By inverting the procedure used to obtain the three-dimensional image from the two-dimensional images, the two-dimensional coordinate value of the sensor position as seen by each camera, corresponding to the three-dimensional image of the original actor, can be calculated. By this image processing method, a two-dimensional coordinate value (X11, Y11) for the first camera, a two-dimensional coordinate value (X21, Y21) for the second camera, ..., and a two-dimensional coordinate value (Xn1, Yn1) for the n-th camera can each be calculated from the reference position value (X1, Y1, Z1) of step S81.
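Step S82 amounts to projecting the three-dimensional reference position into each camera's image plane. The standard pinhole-camera projection below is one way to do this and is offered only as a sketch; the patent does not commit to a camera model, and the calibration parameters K, R, and t are assumed to be known.

```python
import numpy as np

def project_to_camera(point_3d, K, R, t):
    """Project a 3D reference position (step S81) into one camera's
    image (step S82), assuming a calibrated pinhole camera.

    K: 3x3 intrinsic matrix; R: 3x3 rotation; t: translation 3-vector.
    Returns the 2D pixel coordinate, e.g. (X11, Y11) for camera 1.
    """
    p_cam = R @ np.asarray(point_3d) + t   # world -> camera coordinates
    uvw = K @ p_cam                        # camera -> homogeneous pixels
    return uvw[:2] / uvw[2]                # perspective divide
```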
3. The step of calculating a target region for each of the cameras (step S83)
A certain region is set around the exact position where a sensor should be detected by each of the cameras. For example, when the game player exactly follows the motion of the original actor, the exact position of sensor 1 detected by camera 1 must be (X11, Y11). In practice, however, the game player cannot follow the motion without some margin of error. Accordingly, when a region with a margin of, for example, 5 is set around (X11, Y11), a square region with corners (X11 - 5, Y11 - 5), (X11 + 5, Y11 - 5), (X11 + 5, Y11 + 5), and (X11 - 5, Y11 + 5) is selected. The selected region is referred to as a target region for convenience.
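As a sketch, the target region of step S83 is an axis-aligned square around the projected coordinate, with the margin (5 in the example above) acting as the error tolerance; the helper names are illustrative.

```python
def target_region(center, margin=5):
    """Step S83: square region around a projected sensor position.
    Returns (x_min, y_min, x_max, y_max)."""
    x, y = center
    return (x - margin, y - margin, x + margin, y + margin)

def in_region(point, region):
    """Step S84: does a detected 2D sensor coordinate fall inside?"""
    x, y = point
    x_min, y_min, x_max, y_max = region
    return x_min <= x <= x_max and y_min <= y <= y_max
```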
4. The step of checking whether or not the sensor is detected in the target region (step S84)
Among the image data from each of the cameras, it is checked whether the image coordinate of a sensor attached to the game player falls within the target region set at step S83. With a conventional image processing method, in which the entire region of the image data output from each camera is scanned, the position of each sensor attached to the game player identified, and that position compared with the sensor position of the original actor, a long processing time is required. Accordingly, the target region where the sensor should appear is set beforehand, and by checking only that target region rather than the entire image, the image processing time can be considerably reduced. With this check, the evaluation becomes simple: if the sensor is detected in the target region of all the cameras, the motion of the game player matches that of the original actor; if it is not detected, the motion of the game player differs from that of the original actor. Furthermore, three-dimensional position information can be obtained by combining the two-dimensional detections. That is, the three-dimensional information to be used in the final evaluation can be calculated from the two-dimensional detections of a given sensor obtained from at least two cameras, and then compared with the three-dimensional data of the original actor. Since the algorithm for recovering three-dimensional information from a plurality of two-dimensional observations follows conventional computer vision theory, its description is omitted here.
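Combining steps S82 to S84, only the target regions are examined, and the motion is judged to agree when the sensor is found in the target region of every camera. The sketch below reuses the in_region helper above and assumes each camera reports either a 2D coordinate or None when the sensor is not seen in its target region.

```python
def motion_agrees(detected_2d, regions):
    """Step S84: the player's motion matches the actor's only if the
    sensor is detected inside the target region of all cameras.

    detected_2d: per-camera 2D coordinates, or None if not detected.
    regions: per-camera (x_min, y_min, x_max, y_max) tuples.
    """
    return all(p is not None and in_region(p, r)
               for p, r in zip(detected_2d, regions))
```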
FIG. 9 is a flow chart showing the performance sequence of the present invention in detail. The whole-body motion of a user can be tracked, to music of the user's choosing, by a high-performance motion capture system (steps S101 and S102). A character and a stage, which can be displayed in place of the image of the original actor, are then selected (steps S103 and S104). Next, the motion data of the original actor is read from the motion capture DB (step S105), and a character image is overlaid on the motion data of the original actor (step S106). Since using the actual image of the original actor would increase the size of the image file, the amount of data processing can be reduced by overlaying a previously stored character image at the motion coordinates of the original actor. Next, an operation matching the motion data with the music data is performed (step S107). The motion data of the original actor is shown on a display device, and the music is output by a sound device (step S108). The user performs the motion according to the output data (step S109). Thereafter, the whole-body motion is captured by a plurality of cameras, the markers are identified in the images, and the motion is tracked frame by frame (steps S110 and S111). Two-dimensional coordinate values are extracted from the image data obtained in this way, and the three-dimensional motion coordinate values are computed from the extracted data (steps S113 and S114).
Next, using the coordinate data of the original actor and the three-dimensional coordinate values of the motion-captured learner, the body size of the original actor is compared with that of the learner (steps S115 and S116). When the comparison shows that the two body sizes differ, retargeting is performed (steps S117 and S118). The retargeting may be performed during the motion capture of the learner, following the flow of FIG. 9, or before the motion capture is carried out. After retargeting, for each basic frame, whether the data of the original actor agree with the data of the learner is checked by comparing the two data sets (steps S119 and S120). When the two data sets agree, a score is given, the total agreement time is counted, and an action agreement message or advertisement objects produced from an advertisement object production DB are displayed (steps S122 to S124). The motion capture data is stored in a user motion storage section, and it is then checked whether the current frame is the last frame (steps S125 and S126). If it is not the last frame, the next motion is captured; if it is the last frame, the game ends. When the motion data of the original actor does not agree with the motion data of the learner, it is checked whether a function ending the game according to the disagreement time has been set (step S127); if the two data sets disagree for more than a certain time, the game is ended deliberately. As described above, according to the present invention, the motion of a game player following the motion of a dancer can be evaluated in real time by a motion game apparatus.
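The scoring branch of FIG. 9 (steps S119 to S127) can be condensed into a short sketch. The score increment, frame time, and disagreement threshold below are arbitrary illustrative values; the patent specifies none of them.

```python
def score_session(agreement_flags, frame_time=1/30,
                  max_disagree_seconds=10.0, points_per_hit=100):
    """Steps S119-S127: accumulate score and agreement time over the
    basic frames; optionally end the game early after sustained
    disagreement.

    agreement_flags: iterable of booleans, one per basic frame.
    """
    score, agreed_time, disagree_time = 0, 0.0, 0.0
    for agrees in agreement_flags:
        if agrees:                         # steps S120-S124
            score += points_per_hit
            agreed_time += frame_time
            disagree_time = 0.0
        else:                              # step S127
            disagree_time += frame_time
            if disagree_time > max_disagree_seconds:
                break                      # game ended early
    return score, agreed_time
```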
Furthermore, a method of playing motion data is proposed so that the game player can easily follow the motion of the original actor while the motion game apparatus displays it, thereby improving convenience for the user. Moreover, the motion of the game player following the motion of the original actor is evaluated rapidly and in real time, making the motion game apparatus practical to use.
The invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims

What is claimed is:
1. A method of displaying motion data in a motion game apparatus which has information on a basic frame displaying a main motion of an original actor and which plays the motion data of the original actor, consisting of a plurality of frames, on a display device, comprising: a first step of setting play position points Xf, Yf and Zf on the display device for playing the continuous action of the original actor, and initial position points Xi, Yi and Zi on the display device for displaying beforehand the basic frame which will be played after a predetermined time Δt; a second step of continuously displaying the continuous action of the original actor at the play position points Xf, Yf and Zf based on a present time t and, after drawing out the basic frame which will be displayed at t + Δt, simultaneously displaying that basic frame at the initial position points Xi, Yi and Zi; and a third step of displaying the basic frame while gradually moving it from the initial position points Xi, Yi and Zi to the play position points Xf, Yf and Zf as time goes by.
2. The method according to claim 1, wherein the image size of the basic frame displayed at the initial position points Xi, Yi and Zi is smaller than the image size used for playing the continuous action of the original actor at the play position points Xf, Yf and Zf, and wherein the movement at the third step is a magnifying movement.
3. The method according to claim 1, wherein the play position points Xf, Yf and Zf are set to the center of the screen of the display device, and wherein the initial position points Xi, Yi and Zi are adjusted up and down relative to the level of the display device in accordance with the eye height of a game player following the motion of the original actor.
4. A method of displaying motion data in a motion game apparatus which has information on a basic frame displaying a main motion of an original actor, the motion data consisting of a plurality of frames, which has play position points Xf, Yf and Zf on a display device for playing the continuous action of the original actor and initial position points Xi, Yi and Zi on the display device for displaying beforehand the basic frame which will be played after a predetermined time Δt, and which displays the motion data of the original actor on the display device, comprising: a first step of continuously displaying the continuous action of the original actor at the play position points Xf, Yf and Zf based on a present time t and, after drawing out the basic frame which will be displayed at t + Δt, simultaneously displaying that basic frame at the initial position points Xi, Yi and Zi; and a second step of displaying the basic frame while gradually moving it from the initial position points Xi, Yi and Zi to the play position points Xf, Yf and Zf as time goes by.
5. The method according to claim 4, wherein the image size of the basic frame displayed at the initial position points Xi, Yi and Zi is smaller than the image size used for playing the continuous action of the original actor at the play position points Xf, Yf and Zf, and wherein the movement at the second step is a magnifying movement.
6. A method of evaluating the motion of a game player following the motion data of an original actor displayed on a display device in a motion game apparatus which has information on a basic frame displaying a main motion of the original actor, the motion data consisting of a plurality of frames, and which sets play position points Xf, Yf and Zf on the display device for playing the continuous action of the original actor and initial position points Xi, Yi and Zi on the display device for displaying beforehand the basic frame which will be played after a predetermined time Δt, comprising: a first step of storing the three-dimensional motion data of the original actor retargeted by converting the three-dimensional motion data of the original actor to reflect the body size of the game player; a second step of continuously displaying the continuous action of the original actor at the play position points Xf, Yf and Zf based on a present time t using the retargeted three-dimensional motion data and, after drawing out the basic frame which will be displayed at t + Δt, simultaneously displaying that basic frame at the initial position points Xi, Yi and Zi; a third step of displaying the basic frame while gradually moving it from the initial position points Xi, Yi and Zi to the play position points Xf, Yf and Zf as time goes by; a fourth step of extracting and storing the three-dimensional motion data of the game player learning the motion by following the motion of the original actor as displayed; a fifth step of calculating, from the basic frame displayed at the play position points Xf, Yf and Zf at the third step, the coordinate value of a reference sensor of the original actor corresponding to a sensor position attached to the game player; and a sixth step of comparing the three-dimensional motion data of the game player with the coordinate value of the reference sensor.
7. The method according to claim 6, wherein the fourth and fifth steps may be performed in either order, and wherein the fifth step is performed between the first and sixth steps.
8. A method of evaluating the motion of a game player following the motion data of an original actor displayed on a display device in a motion game apparatus which has information on a basic frame displaying a main motion of the original actor, the motion data consisting of a plurality of frames, which sets play position points Xf, Yf and Zf on the display device for playing the continuous action of the original actor and initial position points Xi, Yi and Zi on the display device for displaying beforehand a basic frame which will be played after a predetermined time Δt, and in which a plurality of optical sensors are attached to the game player, comprising: a first step of storing the three-dimensional motion data of the original actor retargeted by converting the three-dimensional motion data of the original actor to reflect the body size of the game player; a second step of calculating, from the retargeted three-dimensional motion data of the original actor, the positions of the plurality of optical sensors attached to the game player; a third step of continuously displaying the continuous action of the original actor at the play position points Xf, Yf and Zf based on a present time t using the retargeted three-dimensional motion data and, after drawing out the basic frame which will be displayed at t + Δt, simultaneously displaying that basic frame at the initial position points Xi, Yi and Zi; a fourth step of displaying the basic frame while gradually moving it from the initial position points Xi, Yi and Zi to the play position points Xf, Yf and Zf as time goes by; a fifth step of extracting and storing the positions of the optical sensors of the game player learning the motion by following the motion of the original actor as displayed; and a sixth step of comparing the sensor positions calculated from the basic frame displayed at the play position points Xf, Yf and Zf at the fourth step with the sensor positions of the game player stored at the fifth step.
9. A method of evaluating the motion of a game player following the motion data of an original actor while displaying the motion data, in a motion game apparatus having a plurality of cameras for monitoring a plurality of sensors attached to the body of the game player, comprising: a first step of calculating, from the motion data of the original actor, the position value of a reference sensor that would be detected if the game player exactly followed the motion of the original actor; a second step of calculating, for each of the cameras, the sensor position value corresponding to that camera, namely the position at which that camera would observe the reference sensor at its calculated position value; a third step of setting, for each of the cameras, a target region of a certain extent based on the sensor position value corresponding to that camera; and a fourth step of determining whether or not a sensor attached to the game player is detected within the target region.
10. The method according to claim 9, wherein the position value of the reference sensor is a three-dimensional coordinate value, and wherein the sensor position values corresponding to each of the cameras are two-dimensional coordinate values.
PCT/KR2001/001710 2000-10-11 2001-10-11 Method of displaying and evaluating motion data used in motion game apparatus WO2002030535A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001294329A AU2001294329A1 (en) 2000-10-11 2001-10-11 Method of displaying and evaluating motion data used in motion game apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2000-0059646A KR100412932B1 (en) 2000-10-11 2000-10-11 Method of displaying and evaluating motion data using in motion game apparatus
KR2000/59646 2000-10-11

Publications (1)

Publication Number Publication Date
WO2002030535A1 (en) 2002-04-18

Family

ID=19692866

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2001/001710 WO2002030535A1 (en) 2000-10-11 2001-10-11 Method of displaying and evaluating motion data used in motion game apparatus

Country Status (3)

Country Link
KR (1) KR100412932B1 (en)
AU (1) AU2001294329A1 (en)
WO (1) WO2002030535A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101271688B1 (en) 2011-11-03 2013-06-05 삼성중공업 주식회사 Device and method for measuring motion of floating dock
KR101711488B1 (en) * 2015-01-28 2017-03-03 한국전자통신연구원 Method and System for Motion Based Interactive Service
CN112069075B (en) * 2020-09-09 2023-06-30 网易(杭州)网络有限公司 Fashionable dress test method and device for game roles and game client
KR102456055B1 (en) * 2020-09-28 2022-10-19 한국생산기술연구원 Apparatus and method for quantitatively analyzing and evaluating posture to train exercise by repetitive motion
KR102396882B1 (en) * 2020-10-08 2022-05-11 주식회사 참핏 Apparatus, system and method for avaluating flexibility of the body

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010027314A (en) * 1999-09-13 2001-04-06 윤종용 karaoke system for marking motion and marking method
KR20000024237A (en) * 2000-01-31 2000-05-06 김완호 Music accompaniment system having function of dance appraisal and guidance and method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03224580A (en) * 1990-01-31 1991-10-03 Fuji Electric Co Ltd Processing method of moving image
US5554033A (en) * 1994-07-01 1996-09-10 Massachusetts Institute Of Technology System for human trajectory learning in virtual environments
JPH11212582A (en) * 1998-01-27 1999-08-06 Daiichikosho Co Ltd Karaoke device provided with choreography scoring function
JP2000037490A (en) * 1998-07-24 2000-02-08 Konami Co Ltd Dancing game device
KR20000054349A (en) * 2000-06-02 2000-09-05 김용환 3 -Dimetional Dance Simulator with the Capability of Free Step

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004007034A1 (en) * 2002-07-12 2004-01-22 Awaba Group Pty Ltd A dance training device
US8690670B2 (en) 2007-06-14 2014-04-08 Harmonix Music Systems, Inc. Systems and methods for simulating a rock band experience
US10357714B2 (en) 2009-10-27 2019-07-23 Harmonix Music Systems, Inc. Gesture-based user interface for navigating a menu
US10421013B2 (en) 2009-10-27 2019-09-24 Harmonix Music Systems, Inc. Gesture-based user interface
US9981193B2 (en) 2009-10-27 2018-05-29 Harmonix Music Systems, Inc. Movement based recognition and evaluation
US8874243B2 (en) 2010-03-16 2014-10-28 Harmonix Music Systems, Inc. Simulating musical instruments
US9278286B2 (en) 2010-03-16 2016-03-08 Harmonix Music Systems, Inc. Simulating musical instruments
US9358456B1 (en) 2010-06-11 2016-06-07 Harmonix Music Systems, Inc. Dance competition game
US9024166B2 (en) 2010-09-09 2015-05-05 Harmonix Music Systems, Inc. Preventing subtractive track separation
GB2495551B (en) * 2011-10-14 2014-04-09 Sony Comp Entertainment Europe Motion scoring method and apparatus
US10086283B2 (en) 2011-10-14 2018-10-02 Sony Interactive Entertainment Europe Limited Motion scoring method and apparatus
GB2495551A (en) * 2011-10-14 2013-04-17 Sony Comp Entertainment Europe A motion comparison arrangement with variable error tolerance
EP2581121B1 (en) * 2011-10-14 2023-03-15 Sony Interactive Entertainment Europe Limited Motion scoring method, apparatus and program

Also Published As

Publication number Publication date
KR20020028578A (en) 2002-04-17
AU2001294329A1 (en) 2002-04-22
KR100412932B1 (en) 2003-12-31

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP