WO2001035208A1 - Method for acquisition of motion capture data - Google Patents

Method for acquisition of motion capture data

Info

Publication number: WO2001035208A1
Authority: WIPO (PCT)
Prior art keywords: data, cameras, motions, motion capture, coordinate values
Application number: PCT/KR2000/001158
Other languages: French (fr)
Inventor: Byoung Ick Hwang
Original Assignee: Byoung Ick Hwang
Application filed by Byoung Ick Hwang
Priority to JP2001536676A (published as JP2003514298A)
Priority to AU79672/00A (published as AU7967200A)
Priority to DE10083785T (published as DE10083785T1)
Publication of WO2001035208A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/544Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
    • G06F7/548Trigonometric functions; Co-ordinate transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Definitions

  • the three cameras acquire Z coordinate values on all sides of an object.
  • X and Y coordinate values of the object are obtained by performing an appropriate calculation algorithm on X coordinate value variations acquired by the front camera 100 and +X, -X, +Y and -Y data acquired by the left and right cameras 200 and 300. Motion capture data on all sides of the object can be acquired in this manner.
  • Fig. 3 is a schematic view illustrating the detection of X and Z coordinate values by the front camera 100 in Fig. 2. All X and Z position coordinate values of pointers on the front side of an object are detected through the front camera 100 and then provided as a reference for comparison with data detected through the left and right cameras 200 and 300. The X and Z coordinate values detected by the front camera 100 are also provided as a reference for correction of data detected by the other cameras.
  • Fig. 4 is a schematic view illustrating the detection of X (30°), Y (60°) and Z coordinate values by the left camera 200 in Fig. 2.
  • Z coordinate values and X and Y coordinate values of pointers on the left and rear sides of an object are detected through the left camera 200.
  • the detected X and Y coordinate values are rotated on the basis of the detected Z coordinate values and then provided as a reference for comparison with data detected by the front camera 100 and right camera 300.
  • Fig. 5 is a schematic view illustrating the detection of Y (60°), X (30°) and Z coordinate values by the right camera 300 in Fig. 2.
  • Z coordinate values and X and Y coordinate values of pointers on the right and rear sides of an object are detected through the right camera 300.
  • the detected X and Y coordinate values are rotated on the basis of the detected Z coordinate values and then provided as a reference for comparison with data detected by the front camera 100 and left camera 200.
  • Fig. 6 is a view showing a virtual motion area (an X and Y coordinate system) of an object whose motions are to be captured.
  • the virtual area is set to a predetermined range around a cross point of views of the three cameras 100, 200 and 300, and all motions of the object are limited to within the virtual area.
  • a dead zone of each of the cameras is detected through the crossing views of the other two cameras and accurate coordinate values thereof are obtained through verification and correction operations.
  • Coordinate values of pointers positioned in the dead zone of the front camera 100 in Fig. 7 are determined by comparing data detected by the left and right cameras 200 and 300 with each other and verifying and correcting data detected by the front camera 100 on the basis of the compared result.
  • Coordinate values of pointers positioned in the dead zone of the left camera 200 in Fig. 8 are determined by comparing data detected by the front and right cameras 100 and 300 with each other and verifying and correcting data detected by the left camera 200 on the basis of the compared result.
  • Coordinate values of pointers positioned in the dead zone of the right camera 300 in Fig. 9 are determined by comparing data detected by the front and left cameras 100 and 200 with each other and verifying and correcting data detected by the right camera 300 on the basis of the compared result.
  • the present invention provides a method for acquisition of motion capture data which is capable of extracting and processing only color pointer data representing motions of an actor. Therefore, the amount of data to be processed is significantly reduced, resulting in an increase in processing speed as compared with the processing of the entire image. Further, a low-cost personal computer, camcorder and USB camera are used with no need for complex or high-cost peripheral equipment, thereby implementing a more cost-effective system capable of very simply and conveniently performing an initial calibration for motion capture at any place.
  • the entire performance of the present system can be improved simply by upgrading software, as compared with the conventional performance improvement based on hardware.
  • the present invention is widely applicable to an entertainment field, a virtual reality field and other image production fields.
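The dead-zone handling described above can be sketched in code. In the hedged example below (not from the patent), a pointer that falls in one camera's dead zone is resolved from the two cameras that can still see it; the component-wise average is an illustrative stand-in for the patent's verification-and-correction rule, which is not specified in detail here.

```python
# Hedged sketch of dead-zone handling: when one camera cannot see a colour
# pointer, combine the estimates of the two cameras that can. The
# component-wise average used here is an illustrative stand-in for the
# patent's verification-and-correction rule.

def resolve_pointer(estimates):
    """estimates: dict of camera name -> (x, y, z), with None for a camera
    whose dead zone contains the pointer. Returns one merged (x, y, z)."""
    seen = [v for v in estimates.values() if v is not None]
    if not seen:
        raise ValueError("pointer invisible to every camera")
    return tuple(sum(v[i] for v in seen) / len(seen) for i in range(3))

# Front camera blind; left and right cameras disagree slightly.
obs = {"front": None, "left": (1.0, 2.0, 3.0), "right": (1.2, 2.0, 3.2)}
merged = resolve_pointer(obs)
print(merged)
```

In a real system the averaging step would be replaced by the comparison-and-correction procedure the patent describes, using the front camera's readings as the reference.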

Abstract

A method for acquisition of motion capture data which is capable of reducing the amount of data indicative of motions, readily expressing natural motions in image production fields such as game character production, digital animation production, etc., and enabling a data provider to act and move with no limitation in sphere because of the absence of cable connections to his or her body. A plurality of color pointers with different color values are attached respectively to main joints and terminal portions of the body of the data provider. Three or more digital cameras (100, 200 and 300) are used to shoot actions and motions of the data provider. A computer is used to convert images picked-up by the digital cameras into X, Y and Z coordinate values. The motions of the body are extracted as three-dimensional position coordinate values based on motions of the color pointers.

Description

METHOD FOR ACQUISITION OF MOTION CAPTURE DATA
Technical Field
The present invention relates in general to the acquisition of motion capture data, and more particularly to a method for acquisition of motion capture data, wherein motions of a person or animal are captured using color markers and three-dimensional motion data is produced on the basis of the captured motions.
Background Art
With the development of the motion picture and game industries, three-dimensional modeling, rendering and animation technologies have been introduced, with concentrated interest in techniques capable of producing natural motions.
In order to meet such an interest, there has been proposed a motion capture technique which is capable of directly capturing motions of a person, numerically expressing the captured motions, entering the resulting motion data in a computer and moving a character, etc. on the basis of the entered motion data.
In this motion capture technique, it is most important to rapidly acquire data regarding natural body motions.
Most conventional motion capture systems are adapted to attach markers or sensors to the human body, analyze output data from the sensors or marker images picked-up by cameras and measure the positions and orientations of respective joints of the human body in accordance with the analyzed results.
Such conventional motion capture systems may be classified into four types, an acoustic type, mechanical type, magnetic type and optical type, according to the operational types of markers or sensors used therein. Among them, the systems of the magnetic type and optical type are most widely used now.
The above conventional motion capture systems of the various types have the following characteristics and problems. The motion capture system of the acoustic type comprises a plurality of acoustic sensors and three acoustic receivers. The acoustic sensors are attached to respective joints of an actor to sequentially generate acoustic waves, and the acoustic receivers are disposed apart from the acoustic sensors to receive the acoustic waves generated by the sensors. This motion capture system is adapted to calculate distances from the acoustic sensors to the acoustic receivers using time periods required for the receivers to receive the acoustic waves from the sensors. The position of each of the sensors in a three-dimensional space is obtained on the basis of a triangular surveying principle using values calculated respectively by the three receivers. This system of the acoustic type is disadvantageous in that the sampling frequency is low and the number of acoustic generators available at the same time is small. Also, motions of the actor may become unnatural because the sensors are large in size. Moreover, the acoustic-type motion capture system is greatly influenced by acoustic reflex because of the properties of its acoustic arrangement.
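The "triangular surveying principle" mentioned above can be sketched as planar trilateration: given the known positions of the receivers and the measured distances to a sensor, the sensor's position is the common intersection of the distance circles. The receiver positions and sensor location below are illustrative assumptions, not values from the patent.

```python
# Illustrative planar trilateration, as used conceptually by the
# acoustic-type system: solve for the point at known distances from
# three receivers at known positions.
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Return (x, y) of the point at distances r1, r2, r3 from p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations yields two linear equations.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Sensor actually at (1, 2); distances computed from receiver positions.
receivers = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [math.dist((1, 2), r) for r in receivers]
x, y = trilaterate(*receivers, *dists)
print(round(x, 6), round(y, 6))   # -> 1.0 2.0
```

The acoustic system derives the distances from acoustic time-of-flight rather than measuring them directly, but the geometric recovery step is the same.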
The motion capture system of the mechanical type comprises a combination of a potentiometer and a slider for measuring motions of joints of an actor. Because this motion capture system has no receiver, it provides an absolute measurement arrangement which does not suffer any environmental interference or effects. Accordingly, as compared with the system of the acoustic type, the system of the mechanical type scarcely requires an initial calibration and need not have high-cost studio equipment. Further, as compared with other motion capture systems, the mechanical-type system is low in cost and very high in sampling frequency. However, the motion capture system of the mechanical type has a disadvantage in that the actor may unnaturally move because of mechanical units attached thereto. Moreover, the measurement accuracy of the mechanical-type system depends on how accurately the mechanical units are positioned on the respective joints of the actor.
The motion capture system of the magnetic type comprises a plurality of sensors attached to respective joints of an actor for sensing a magnetic field. When the actor moves around a magnetic field generator, the sensors sense magnetic field variations, which are then transformed into spatial values used for motion measurement. The sensors, magnetic field generator and system body are interconnected via cables or, recently, wirelessly. The motion capture system of the magnetic type is advantageous in that it is relatively low in cost and excellent in cost-to-performance ratio and can perform real-time processing. However, in the case where the sensors are of a wired type, the actor must move within a limited sphere due to cable connections from the sensors to his or her body, thereby making it difficult for the actor to naturally express his or her complex and rapid motions. For a wireless system employing a wireless transmitter instead of the cables, the actor cannot naturally move, either, because of the large size of the transmitter attached to his or her body. Further, the actor must move within a limited sphere because he or she must remain within the range of the magnetic field.
The motion capture system of the optical type comprises a plurality of reflex markers attached respectively to main joints of an actor for reflecting infrared rays, and a plurality of cameras, each having three to sixteen infrared filters mounted thereto. The cameras are adapted to create two-dimensional coordinates of reflected marker images. A dedicated program is used to analyze the two-dimensional coordinates captured by the cameras and transform them into three-dimensional spatial coordinates as a result of the analysis. This conventional optical-type motion capture system is disadvantageous in that it lays emphasis on the realistic expression of motions of a character, resulting in the need for an extraordinarily large amount of data and a high-performance computer to process it.
Disclosure of the Invention
Therefore, the present invention has been made in view of the above problems, and it is an object of the present invention to provide a method for acquisition of motion capture data which is capable of processing a reduced amount of data by extracting and processing only coordinate values varying with motions.
It is another object of the present invention to provide a method for acquisition of motion capture data which is capable of very simply and conveniently performing an initial calibration for motion capture.
It is a further object of the present invention to provide a method for acquisition of motion capture data wherein the amount of data to be processed is small and a low-cost personal computer, camcorder and USB camera are used with no need for complex or high-cost peripheral equipment, thereby implementing a more cost-effective system.
It is another object of the present invention to provide a method for acquisition of motion capture data wherein no cable is connected between an actor and a motion data processing system, thereby making it possible to conveniently extract motions of the actor, including a violent action.
It is yet another object of the present invention to provide a method for acquisition of motion capture data which is capable of accurately measuring motions of an actor irrespective of ambient environments such as an ambient illumination, radio interference, etc.
In accordance with the present invention, the above and other objects can be accomplished by the provision of a method for acquisition of motion capture data in a motion capture system which is capable of recording motions of a person or object in a computer-processable form. This method is based on a variety of technical requirements, namely a color extraction algorithm, a color position tracking algorithm, a technique for tracking motions of an actor using a plurality of color markers with different color values, an image processing technique for perceiving images of 24 to 30 frames per second by a USB camera, and a technique for transforming extracted two-dimensional data into three-dimensional data.
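The color extraction and position tracking requirements can be illustrated with a minimal sketch (not part of the patent): find the pixels close to a marker's assigned color and report the marker's position as their centroid. The marker colors, frame layout and distance threshold below are illustrative assumptions.

```python
# Minimal sketch of colour-pointer extraction: find the pixels closest to a
# marker's assigned colour and report the marker centre as their centroid.
# The marker colours and distance threshold are illustrative assumptions,
# not values from the patent.

def extract_pointer(frame, marker_rgb, threshold=60):
    """frame: list of rows of (r, g, b) tuples. Returns the (x, y) centroid
    of pixels within `threshold` of marker_rgb, or None if none match."""
    hits = []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            dr, dg, db = r - marker_rgb[0], g - marker_rgb[1], b - marker_rgb[2]
            if (dr * dr + dg * dg + db * db) ** 0.5 < threshold:
                hits.append((x, y))
    if not hits:
        return None
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))

# A 4x3 test frame with a pure-red marker occupying two adjacent pixels.
BLACK, RED = (0, 0, 0), (255, 0, 0)
frame = [[BLACK, BLACK, BLACK, BLACK],
         [BLACK, RED,   RED,   BLACK],
         [BLACK, BLACK, BLACK, BLACK]]
print(extract_pointer(frame, RED))   # -> (1.5, 1.0)
```

Tracking then amounts to repeating this per frame for each of the differently colored markers, which is what makes 24 to 30 frames per second feasible on a USB camera.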
This motion capture technique has a basic interface function of acquiring data regarding motions of a character prior to realistically expressing the motions of the character. In view of the fact that the motion capture technique is a kind of interface method, the present invention proposes a method capable of readily acquiring data about motions of an actor.
Brief Description of the Drawings
The above and other objects, features and other advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a view showing the construction of a system for execution of a method for acquisition of motion capture data in accordance with the present invention;
Fig. 2 is a view showing an arrangement of cameras for acquisition of three-dimensional data in accordance with the present invention;
Fig. 3 is a schematic view illustrating the detection of X and Z coordinate values by a front camera in Fig. 2;
Fig. 4 is a schematic view illustrating the detection of X, Y and Z coordinate values by a left camera in Fig. 2;
Fig. 5 is a schematic view illustrating the detection of X, Y and Z coordinate values by a right camera in Fig. 2;
Fig. 6 is a view showing a virtual motion area of an object whose motions are to be captured; and
Figs. 7 to 9 are views showing respective dead zones of the front, left and right cameras in Fig. 2 and illustrating the detection of data in the dead zones.
Best Mode for Carrying Out the Invention
As seen from Figs. 1 and 2, a plurality of color pointers with different color values are attached respectively to main joint portions and terminal portions of the body of a data provider. Three or more cameras are equiangularly arranged while being spaced apart from one another at regular intervals, to shoot actions and motions of the data provider. A personal computer is used to convert image data picked-up by the cameras into three-dimensional coordinate values (X, Y and Z), detect position variations of the color pointers on the respective body portions of the data provider from the converted three-dimensional coordinate values and store the detected position variations as motion capture data.
The computer is adapted to receive image signals from the cameras through its buffer and extract only position coordinate values about motions of the color pointers on the respective body portions of the data provider from the received image signals. Because the computer extracts and stores not all the image signals, but only position coordinate values varying with motions of the color pointers on the respective body portions, the amount of data to be processed is reduced, resulting in an increase in processing speed.
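The data-reduction idea can be sketched as follows: instead of storing whole image frames, the computer keeps a pointer's coordinates only when they have changed since the last stored sample. The pointer names, frame data and tolerance below are illustrative assumptions, not the patent's exact storage format.

```python
# Sketch of the data-reduction idea: keep a colour pointer's coordinates
# only when they differ from the last stored sample, rather than storing
# every frame in full.

def record_changes(samples, tolerance=0.0):
    """samples: list of per-frame dicts {pointer_name: (x, y, z)}.
    Returns a list of (frame_index, pointer_name, coords) deltas."""
    last = {}
    deltas = []
    for i, frame in enumerate(samples):
        for name, coords in frame.items():
            prev = last.get(name)
            if prev is None or max(abs(a - b) for a, b in zip(coords, prev)) > tolerance:
                deltas.append((i, name, coords))
                last[name] = coords
    return deltas

frames = [
    {"head": (0, 0, 10), "hand": (2, 0, 5)},
    {"head": (0, 0, 10), "hand": (3, 1, 5)},   # only the hand moved
    {"head": (0, 0, 10), "hand": (3, 1, 5)},   # nothing moved
]
print(record_changes(frames))
```

Frame 0 stores both pointers, frame 1 stores only the hand, and frame 2 stores nothing, which is why only coordinate values varying with motion contribute to the processed data volume.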
Now, a description will be given of a procedure of capturing motions and extracting three-dimensional motion data from captured images in accordance with the present invention.
The following Table 1 shows an example of the assignment of 13 primary colors among 256 colors to main body portions of an actor.
[TABLE 1]
[Table 1 is reproduced as an image in the original publication; its color-to-body-portion assignments are not shown here.]
Herein, the resolution is set to 160 pixels x 120 pixels, the color depth is set to 16 bits, the frame rate is set to 30 frames/sec, and N represents the number of color pointers. Under the above conditions, a camera extracts three primary color elements, or red, green and blue elements, from input images and transfers them to a personal computer (PC). The amount of data transferred from the camera to the PC, expressed as the product of the resolution, color depth and frame rate, is 160 pixels x 120 pixels x 16 bits x 30 frames/sec = 9,216,000 bits/sec. Calculating this value in terms of bytes, the result is 1,152,000 bytes/sec; in terms of Kbytes, 1,152 Kbytes/sec. In conclusion, data of 1,152 Kbytes per second is transferred from the camera to the computer.
Because the amount of data calculated in the above manner concerns each of the red, green and blue elements, the amount of data captured for all the elements is three times the calculated amount, or 1,152,000 x 3 = 3,456,000 bytes/sec, i.e. 3.456 Mbytes/sec. Finally, the total amount of data to be processed in the computer is obtained by multiplying this figure by the number of color pointers: 3.456 Mbytes/sec if N = 1 and 34.56 Mbytes/sec if N = 10.
Coordinate systems shown in Figs. 1 to 9 are absolute coordinate systems available for computer three-dimensional (3D) graphic and animation technologies. Taking a front view of these coordinate systems, the Z axis is placed in the upward and downward directions, the X axis in the left and right directions and the Y axis in the forward and backward directions.
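The transfer-rate arithmetic above can be checked with a short script; the figures reproduce the patent's own numbers, including its convention of tripling the per-element rate for the three color elements.

```python
# Back-of-the-envelope check of the data rates quoted above.
# Conditions from the text: 160x120 resolution, 16-bit colour depth,
# 30 frames/sec; N is the number of colour pointers.

def camera_data_rate_bits(width=160, height=120, depth_bits=16, fps=30):
    """Raw per-element transfer rate in bits per second."""
    return width * height * depth_bits * fps

bits_per_sec = camera_data_rate_bits()     # 9_216_000 bits/sec
bytes_per_sec = bits_per_sec // 8          # 1_152_000 bytes/sec
kbytes_per_sec = bytes_per_sec // 1000     # 1_152 Kbytes/sec

# All three colour elements, as the text multiplies by 3:
all_elements = bytes_per_sec * 3           # 3_456_000 bytes/sec = 3.456 Mbytes/sec

# Total for N colour pointers:
for n in (1, 10):
    print(f"N = {n:2d}: {all_elements * n / 1_000_000} Mbytes/sec")
```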
Hereinafter, a detailed description will be given of a method for acquisition of motion capture data in accordance with the present invention in conjunction with the accompanying drawings.
First, three or more cameras shoot actions and motions of a data provider and transfer the resulting images respectively to associated computers. The computers extract position variations of a plurality of color pointers, attached respectively to main joint portions and terminal portions of the body of the data provider, from the images transferred from the associated cameras, and obtain two-dimensional motion data coordinate values from the extracted position variations. Subsequently, the computers transfer the obtained two-dimensional coordinate values over an internal network to a server computer, which transforms them into three-dimensional motion capture data using a synthesis program based on a three-dimensional synthesis function engine.
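The pipeline just described can be sketched minimally as follows. All names here are invented for illustration, and the merge step is only a placeholder standing in for the patent's "three-dimensional synthesis function engine": X is taken from the front camera, Y is averaged from the side cameras' horizontal readings, and Z is averaged over all three cameras.

```python
from typing import Dict, Tuple

# One 2-D sample per color pointer, as produced by each camera's computer:
# pointer name -> (horizontal reading, vertical reading).
Sample2D = Dict[str, Tuple[float, float]]

def merge_to_3d(front: Sample2D, left: Sample2D,
                right: Sample2D) -> Dict[str, Tuple[float, float, float]]:
    """Placeholder merge of three per-camera 2-D samples into (X, Y, Z)."""
    out = {}
    for name in front:
        x = front[name][0]                                  # front camera: X
        y = (left[name][0] + right[name][0]) / 2            # side cameras: Y
        z = (front[name][1] + left[name][1] + right[name][1]) / 3  # all: Z
        out[name] = (x, y, z)
    return out

merged = merge_to_3d(
    front={"head": (0.0, 1.7)},
    left={"head": (0.2, 1.7)},
    right={"head": (-0.2, 1.7)},
)
x, y, z = merged["head"]
print(round(x, 3), round(y, 3), round(z, 3))  # 0.0 0.0 1.7
```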
Fig. 1 is a view showing the construction of a system for execution of a method for acquisition of motion capture data in accordance with the present invention. As shown in this drawing, three or more cameras are provided to shoot motions of a data provider and transfer the resulting images respectively to associated computers. The computers are adapted to process the images transferred from the associated cameras in the above calculation manner and transfer the processed results to a server computer over an internal network. The server computer is adapted to obtain three-dimensional motion capture data on the basis of data transferred from the above computers.
Fig. 2 is a view showing an arrangement of cameras for acquisition of three-dimensional data in accordance with the present invention. As shown in this drawing, three cameras, namely a front camera 100, a left camera 200 and a right camera 300, are disposed around an object (a data provider).
The three cameras are equiangularly arranged in such a manner that they are spaced apart from one another at regular angles of 120 degrees. A virtual area is set to a predetermined range around a cross point of views of the three cameras, and it is recognized as a coordinate system by computers.
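The equiangular layout above can be expressed as positions on a circle around the capture volume. This is a hypothetical layout helper (the radius and the convention of placing the front camera at 0 degrees are assumptions, not stated in the text):

```python
import math

def camera_positions(radius: float):
    """Place three cameras on a circle of the given radius, 120 degrees apart.

    Returns (x, y) ground-plane positions for the front, left and right
    cameras, with the front camera assumed to sit at angle 0.
    """
    angles_deg = [0.0, 120.0, 240.0]   # front, left, right
    return [
        (radius * math.cos(math.radians(a)), radius * math.sin(math.radians(a)))
        for a in angles_deg
    ]

for cx, cy in camera_positions(3.0):
    print(round(cx, 3), round(cy, 3))
# All three positions lie at the same distance from the cross point of views,
# separated from one another by 120 degrees.
```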
In the case where the three cameras are disposed as shown in Fig. 1, they acquire Z coordinate values on all sides of an object. X and Y coordinate values of the object are obtained by applying an appropriate calculation algorithm to the X coordinate variations acquired by the front camera 100 and the +X, -X, +Y and -Y data acquired by the left and right cameras 200 and 300. Motion capture data on all sides of the object can be acquired in this manner.
Fig. 3 is a schematic view illustrating the detection of X and Z coordinate values by the front camera 100 in Fig. 2. All X and Z position coordinate values of pointers on the front side of an object are detected through the front camera 100 and then provided as a reference for comparison with data detected through the left and right cameras 200 and 300. The X and Z coordinate values detected by the front camera 100 are also provided as a reference for correction of data detected by the other cameras.
Fig. 4 is a schematic view illustrating the detection of X (30°), Y (60°) and Z coordinate values by the left camera 200 in Fig. 2. Z coordinate values and X and Y coordinate values of pointers on the left and rear sides of an object are detected through the left camera 200. The detected X and Y coordinate values are rotated on the basis of the detected Z coordinate values and then provided as a reference for comparison with data detected by the front camera 100 and right camera 300.

Fig. 5 is a schematic view illustrating the detection of Y (60°), X (30°) and Z coordinate values by the right camera 300 in Fig. 2. Z coordinate values and X and Y coordinate values of pointers on the right and rear sides of an object are detected through the right camera 300. The detected X and Y coordinate values are rotated on the basis of the detected Z coordinate values and then provided as a reference for comparison with data detected by the front camera 100 and left camera 200.
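One plausible reading of the "rotation" step above is a planar rotation about the vertical (Z) axis, bringing a side camera's readings into the front camera's X-Y frame. The text does not specify the angle or sign convention, so the choice here (left camera mounted at +120 degrees) is an assumption for illustration only:

```python
import math

def rotate_about_z(x: float, y: float, angle_deg: float):
    """Rotate a ground-plane point by angle_deg about the vertical axis."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A point the left camera sees at (1.0, 0.0) in its own horizontal plane,
# mapped into the front camera's frame (assumed mount angle: +120 degrees):
x_world, y_world = rotate_about_z(1.0, 0.0, 120.0)
print(round(x_world, 3), round(y_world, 3))  # -0.5 0.866
```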
Fig. 6 is a view showing a virtual motion area (an X and Y coordinate system) of an object whose motions are to be captured. The virtual area is set to a predetermined range around a cross point of views of the three cameras 100, 200 and 300, and all motions of the object are limited to within the virtual area. A dead zone of each of the cameras is detected through the crossing views of the other two cameras and accurate coordinate values thereof are obtained through verification and correction operations.
The dead zones of the three cameras are shown respectively in Figs. 7 to 9.
Coordinate values of pointers positioned in the dead zone of the front camera 100 in Fig. 7 are determined by comparing data detected by the left and right cameras 200 and 300 with each other and verifying and correcting data detected by the front camera 100 on the basis of the compared result.

Coordinate values of pointers positioned in the dead zone of the left camera 200 in Fig. 8 are determined by comparing data detected by the front and right cameras 100 and 300 with each other and verifying and correcting data detected by the left camera 200 on the basis of the compared result.
Coordinate values of pointers positioned in the dead zone of the right camera 300 in Fig. 9 are determined by comparing data detected by the front and left cameras 100 and 200 with each other and verifying and correcting data detected by the right camera 300 on the basis of the compared result.
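The verify-and-correct logic common to all three dead-zone cases can be sketched as follows. The tolerance value and the averaging policy are invented for illustration; the patent only says the two witness cameras' data are compared and the occluded camera's data corrected on that basis.

```python
from typing import Optional

def correct_dead_zone(occluded: Optional[float], other_a: float, other_b: float,
                      tolerance: float = 0.05) -> float:
    """Verify or replace one camera's coordinate using the other two cameras.

    occluded: the (possibly missing or unreliable) value from the camera
              whose dead zone contains the pointer; other_a/other_b: the
              corresponding values seen by the two witness cameras.
    """
    if abs(other_a - other_b) > tolerance:
        raise ValueError("witness cameras disagree; cannot verify")
    consensus = (other_a + other_b) / 2
    if occluded is None or abs(occluded - consensus) > tolerance:
        return consensus   # correct: substitute the witness consensus
    return occluded        # verified: keep the original value

print(round(correct_dead_zone(None, 0.42, 0.44), 2))  # 0.43
```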
Industrial Applicability
As apparent from the above description, the present invention provides a method for acquisition of motion capture data which is capable of extracting and processing only the color pointer data representing motions of an actor. Therefore, the amount of data to be processed is significantly reduced, increasing processing speed as compared with processing the entire image. Further, a low-cost personal computer, camcorder and USB camera are used, with no need for complex or high-cost peripheral equipment, thereby implementing a more cost-effective system capable of performing the initial calibration for motion capture simply and conveniently at any place.
Further, there are no wired or wireless connections between the actor and the motion data processing system, so the actor can move without spatial restriction, making it possible to conveniently capture the actor's motions, including violent actions.
Moreover, the overall performance of the present system can be improved simply by upgrading software, as compared with conventional performance improvements that require hardware changes.
Furthermore, owing to the above features, the present invention is widely applicable to the entertainment, virtual reality and other image production fields.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.

Claims

1. A method for acquisition of motion capture data, comprising the steps of:
a) attaching a plurality of color pointers with different color values respectively to main body portions of an actor;
b) shooting motions of said actor by camera means; and
c) converting image data picked up by said camera means into three-dimensional coordinate values (X, Y and Z), detecting position variations of said color pointers on the respective body portions of said actor from the converted three-dimensional coordinate values, and extracting three-dimensional position coordinate values based on motions of said color pointers from the detected position variations.
2. A method for acquisition of motion capture data, as set forth in Claim 1, wherein said camera means includes at least three cameras equiangularly arranged while being spaced apart from one another at regular intervals, said three cameras being a front camera, a left camera and a right camera disposed around said actor; and wherein said step c) includes the steps of:
acquiring vertical motion (Z coordinate) data from all of said cameras, acquiring horizontal motion (X coordinate) data from said front camera, and acquiring forward and backward motion data by performing a predetermined calculation algorithm on the acquired horizontal motion data and data acquired from said left and right cameras; and
acquiring data regarding respective dead zones of said cameras by performing a parallel calculation algorithm on complementary data from said cameras, and generating three-dimensional data on the basis of the acquired dead zone data.
PCT/KR2000/001158 1999-11-11 2000-10-17 Method for acquisition of motion capture data WO2001035208A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2001536676A JP2003514298A (en) 1999-11-11 2000-10-17 How to capture motion capture data
AU79672/00A AU7967200A (en) 1999-11-11 2000-10-17 Method for acquisition of motion capture data
DE10083785T DE10083785T1 (en) 1999-11-11 2000-10-17 Process for capturing motion capture data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1019990049880A KR100361462B1 (en) 1999-11-11 1999-11-11 Method for Acquisition of Motion Capture Data
KR1999/049880 1999-11-11

Publications (1)

Publication Number Publication Date
WO2001035208A1 true WO2001035208A1 (en) 2001-05-17

Family

ID=19619529

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2000/001158 WO2001035208A1 (en) 1999-11-11 2000-10-17 Method for acquisition of motion capture data

Country Status (6)

Country Link
JP (1) JP2003514298A (en)
KR (1) KR100361462B1 (en)
CN (1) CN1340170A (en)
AU (1) AU7967200A (en)
DE (1) DE10083785T1 (en)
WO (1) WO2001035208A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100362383B1 (en) * 1999-12-27 2002-11-23 한국전자통신연구원 Post-processing method for motion captured data using 3D interface
KR20020017576A (en) * 2000-08-31 2002-03-07 이준서 System and method for motion capture using camera image
KR20020095867A (en) * 2001-06-16 2002-12-28 이희만 A Motion Capture System using Color Wavelength
KR20030065620A (en) * 2002-01-30 2003-08-09 대한민국(전남대학교총장) apparatus and method for recognizing action of virtual game system
CN100361070C (en) * 2004-10-29 2008-01-09 中国科学院计算技术研究所 skeleton motion extraction method by means of optical-based motion capture data
US7580546B2 (en) 2004-12-09 2009-08-25 Electronics And Telecommunications Research Institute Marker-free motion capture apparatus and method for correcting tracking error
JP4687265B2 (en) * 2005-06-14 2011-05-25 富士ゼロックス株式会社 Image analyzer
US7869646B2 (en) 2005-12-01 2011-01-11 Electronics And Telecommunications Research Institute Method for estimating three-dimensional position of human joint using sphere projecting technique
KR100763578B1 (en) * 2005-12-01 2007-10-04 한국전자통신연구원 Method for Estimating 3-Dimensional Position of Human's Joint using Sphere Projecting Technique
KR100753965B1 (en) * 2006-04-07 2007-08-31 (주)아이토닉 Posture capture system of puppet and method thereof
US20080170750A1 (en) * 2006-11-01 2008-07-17 Demian Gordon Segment tracking in motion picture
CN101581966B (en) * 2008-05-16 2011-03-16 英业达股份有限公司 Method and system for operating personal computer by utilizing action recognition
US20100277470A1 (en) * 2009-05-01 2010-11-04 Microsoft Corporation Systems And Methods For Applying Model Tracking To Motion Capture
CN102622500A (en) * 2011-01-30 2012-08-01 德信互动科技(北京)有限公司 Game achieving system
KR101465087B1 (en) * 2012-11-14 2014-11-26 포항공과대학교 산학협력단 Analyzing method of trajectory and behavior pattern of a living organism
CN105118343A (en) * 2015-09-15 2015-12-02 北京瑞盖科技有限公司 Tennis training examination and evaluation system and training method
KR101968420B1 (en) * 2017-11-24 2019-04-11 이화여자대학교 산학협력단 System and Method for Customized Game Production Based on User Body Motion and Recording Medium thereof
CN108742841B (en) * 2018-05-30 2020-11-06 上海交通大学 Tool real-time positioning device of multi-position tracker
CN109186455A (en) * 2018-09-06 2019-01-11 安徽师范大学 A kind of device of view-based access control model measurement dynamic object three-dimensional coordinate

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01131430A (en) * 1987-11-17 1989-05-24 Anzen Jidosha Kk Confrontation detecting device
JPH04191607A (en) * 1990-11-26 1992-07-09 Toshiba Corp Three-dimensional measuring method
US5546189A (en) * 1994-05-19 1996-08-13 View Engineering, Inc. Triangulation-based 3D imaging and processing method and system
US6041652A (en) * 1998-07-31 2000-03-28 Litton Systems Inc. Multisensor rotor flexure mounting

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6831603B2 (en) 2002-03-12 2004-12-14 Menache, Llc Motion tracking system and method
US8454428B2 (en) 2002-09-12 2013-06-04 Wms Gaming Inc. Gaming machine performing real-time 3D rendering of gaming events
US7009561B2 (en) 2003-03-11 2006-03-07 Menache, Llp Radio frequency motion tracking system and method
US7432810B2 (en) 2003-03-11 2008-10-07 Menache Llc Radio frequency tags for use in a motion tracking system
EP1825438A4 (en) * 2004-12-03 2017-06-21 Sony Corporation System and method for capturing facial and body motion
WO2011077445A1 (en) * 2009-12-24 2011-06-30 Caleb Suresh Motupalli System and method for super-augmenting a persona to manifest a pan-environment super-cyborg for global governance
GB2491990A (en) * 2009-12-24 2012-12-19 Caleb Suresh Motupalli System and method for super-augmenting a persona to manifest a pan-environment super-cyborg for global governance
CN101982836A (en) * 2010-10-14 2011-03-02 西北工业大学 Mark point identification initializing method based on principal components analysis (PCA) in motion capture system

Also Published As

Publication number Publication date
DE10083785T1 (en) 2001-11-29
KR20000017755A (en) 2000-04-06
KR100361462B1 (en) 2002-11-21
CN1340170A (en) 2002-03-13
AU7967200A (en) 2001-06-06
JP2003514298A (en) 2003-04-15

Similar Documents

Publication Publication Date Title
WO2001035208A1 (en) Method for acquisition of motion capture data
CN109102537A (en) A kind of three-dimensional modeling method and system of laser radar and the combination of ball curtain camera
CN111353355B (en) Motion tracking system and method
US6839081B1 (en) Virtual image sensing and generating method and apparatus
JP2010109783A (en) Electronic camera
CN103903263B (en) A kind of 360 degrees omnidirection distance-finding method based on Ladybug panorama camera image
CN110544273B (en) Motion capture method, device and system
CN108958469A (en) A method of hyperlink is increased in virtual world based on augmented reality
JP2001508210A (en) Method and system for determining the position of an object
WO2024060978A1 (en) Key point detection model training method and apparatus and virtual character driving method and apparatus
KR20210146770A (en) Method for indoor localization and electronic device
KR20020017576A (en) System and method for motion capture using camera image
CN112184898A (en) Digital human body modeling method based on motion recognition
CN112790758A (en) Human motion measuring method and system based on computer vision and electronic equipment
CN110197531A (en) Role's skeleton point mapping techniques based on deep learning
CN111402392A (en) Illumination model calculation method, material parameter processing method and material parameter processing device
CN111192350A (en) Motion capture system and method based on 5G communication VR helmet
KR101094137B1 (en) Motion capture system
CN115598744A (en) High-dimensional light field event camera based on micro-lens array and extraction method
Miyasaka et al. Reconstruction of realistic 3D surface model and 3D animation from range images obtained by real time 3D measurement system
JP3616355B2 (en) Image processing method and image processing apparatus by computer
CN109246417A (en) A kind of machine vision analysis system and method based on bore hole stereoscopic display
KR20070099282A (en) Motion reaction system and the method thereof
Bériault Multi-camera system design, calibration and three-dimensional reconstruction for markerless motion capture
WO2023159517A1 (en) System and method of capturing three-dimensional human motion capture with lidar

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 00803619.5

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 09869960

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2001 536676

Country of ref document: JP

Kind code of ref document: A

RET De translation (de og part 6b)

Ref document number: 10083785

Country of ref document: DE

Date of ref document: 20011129

WWE Wipo information: entry into national phase

Ref document number: 10083785

Country of ref document: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: COMMUNICATION UNDER RULE 69 EPC (EPO FORM 1205 OF 29.01.2003)

122 Ep: pct application non-entry in european phase