CN103052973A - Method and device for generating body animation - Google Patents
- Publication number
- CN103052973A CN103052973A CN2011800013260A CN201180001326A CN103052973A CN 103052973 A CN103052973 A CN 103052973A CN 2011800013260 A CN2011800013260 A CN 2011800013260A CN 201180001326 A CN201180001326 A CN 201180001326A CN 103052973 A CN103052973 A CN 103052973A
- Authority
- CN
- China
- Prior art keywords
- characteristic point
- image
- action
- predefined
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
Abstract
Description
Claims (15)
1. A method for generating body animation, characterized in that the method comprises: obtaining initial positions of the feature points of a body in an image; reading an action sequence of the feature points, and calculating the positions of the feature points in the current frame according to the action sequence; deforming the image within each body region of the image according to the initial positions and the current-frame positions of the feature points; and overlaying the deformed body-region images on the background image of the image to generate the animation.

2. The method according to claim 1, characterized in that, before obtaining the initial positions of the feature points of the body in the image, the method further comprises: scanning the original input image to obtain predefined feature points and body regions of the body in the image; displaying the predefined feature points and body regions in the image; and saving the positions to which the user drags the displayed predefined feature points and body regions after accurately aligning them with the corresponding positions on the body in the image.

3. The method according to claim 2, characterized in that, after saving the user-aligned positions of the predefined feature points and body regions, the method further comprises: performing image inpainting on the image according to the saved body-region positions, to obtain the image of each body region and the background image.

4. The method according to claim 1, characterized in that, before reading the action sequence of the feature points, the method further comprises: predefining action primitives, where an action primitive represents a change of a feature point's position and the number of frames the change lasts, and the action primitives include a static primitive, a translation primitive, a rotation primitive, and a merge primitive; predefining an action of the feature points of the body according to the action primitives, where each feature point in the action corresponds to one action primitive; and combining the predefined actions into the action sequence of the feature points.

5. The method according to claim 4, characterized in that predefining the action of the feature points of the body according to the action primitives comprises: confirming, one by one, the position change of each feature point of the body in the action; when the position of the feature point is unchanged, representing the position change with the static primitive; when the feature point is to be translated, representing the position change with the translation primitive, whose parameters include the two-dimensional displacement of the movement and the number of frames the movement lasts; when the feature point is to rotate around another feature point, representing the position change with the rotation primitive, whose parameters include the feature point rotated around, the rotation angle in the two-dimensional plane of the image, the rotation angle in the direction perpendicular to that plane, and the number of frames the rotation lasts; and when the feature point is to move toward another feature point, representing the position change with the merge primitive, whose parameters include the target feature point, the movement ratio, and the number of frames the movement lasts.

6. The method according to claim 5, characterized in that calculating the positions of the feature points in the current frame according to the action sequence comprises: obtaining, from the action sequence, the action primitive corresponding to each feature point in the current frame; and calculating the position of each feature point in the current frame according to the parameters of its corresponding action primitive.

7. The method according to claim 1, characterized in that obtaining the initial positions of the feature points of the body in the image comprises: obtaining the positions of the feature points of the body in the original input image and using them as the initial positions; or obtaining the positions of the feature points of the body in the image saved when the previous action was completed and using them as the initial positions.

8. The method according to claim 1, characterized in that deforming the image within each body region according to the initial positions and the current-frame positions of the feature points comprises: connecting the feature points according to their initial positions and the associations between them, to obtain initial feature lines; connecting the feature points according to their current-frame positions and the associations between them, to obtain current-frame feature lines; and performing feature-line-based image deformation on the image within each body region according to the initial feature lines and the current-frame feature lines.

9. The method according to any one of claims 1 to 8, characterized in that the body in the image is one of a human body, an animal, and a cartoon character.

10. A device for generating body animation, characterized in that the device comprises: an acquisition module, configured to obtain the initial positions of the feature points of a body in an image; a computing module, configured to read the action sequence of the feature points and calculate the positions of the feature points in the current frame according to the action sequence; a deformation module, configured to deform the image within each body region of the image according to the initial positions obtained by the acquisition module and the current-frame positions calculated by the computing module; and a generation module, configured to overlay the body-region images deformed by the deformation module on the background image of the image, to generate the animation.

11. The device according to claim 10, characterized in that the device further comprises: a first predefinition module, configured to scan the original input image before the acquisition module obtains the initial positions, to obtain predefined feature points and body regions of the body in the image; and a processing module, configured to display the predefined feature points and body regions of the first predefinition module in the image, and to save the positions to which the user drags them after accurately aligning them with the corresponding positions on the body in the image.

12. The device according to claim 11, characterized in that the device further comprises: a repair module, configured to perform image inpainting on the image according to the user-aligned body-region positions saved by the processing module, to obtain the image of each body region and the background image.

13. The device according to claim 10, characterized in that the device further comprises: a second predefinition module, configured to predefine action primitives before the computing module reads the action sequence, where an action primitive represents a change of a feature point's position and the number of frames the change lasts, and the action primitives include a static primitive, a translation primitive, a rotation primitive, and a merge primitive; a third predefinition module, configured to predefine an action of the feature points of the body according to the action primitives predefined by the second predefinition module, where each feature point in the action corresponds to one action primitive; and a combination module, configured to combine the actions predefined by the third predefinition module into the action sequence of the feature points.

14. The device according to claim 13, characterized in that the third predefinition module is specifically configured to confirm, one by one, the position change of each feature point of the body in the action: when the position of the feature point is unchanged, the position change is represented with the static primitive; when the feature point is to be translated, the position change is represented with the translation primitive, whose parameters include the two-dimensional displacement of the movement and the number of frames the movement lasts; when the feature point is to rotate around another feature point, the position change is represented with the rotation primitive, whose parameters include the feature point rotated around, the rotation angle in the two-dimensional plane of the image, the rotation angle in the direction perpendicular to that plane, and the number of frames the rotation lasts; and when the feature point is to move toward another feature point, the position change is represented with the merge primitive, whose parameters include the target feature point, the movement ratio, and the number of frames the movement lasts.

15. The device according to claim 14, characterized in that the computing module comprises: an acquisition unit, configured to obtain, from the action sequence, the action primitive corresponding to each feature point in the current frame; and a computing unit, configured to calculate the position of each feature point in the current frame according to the parameters of the action primitive obtained by the acquisition unit.

16. The device according to claim 10, characterized in that the acquisition module is specifically configured to obtain the positions of the feature points of the body in the original input image and use them as the initial positions; or to obtain the positions of the feature points of the body in the image saved when the previous action was completed and use them as the initial positions.

17. The device according to claim 10, characterized in that the deformation module is specifically configured to connect the feature points according to their initial positions and the associations between them, to obtain initial feature lines; to connect the feature points according to their current-frame positions and the associations between them, to obtain current-frame feature lines; and to perform feature-line-based image deformation on the image within each body region according to the initial feature lines and the current-frame feature lines.

18. The device according to any one of claims 10 to 17, characterized in that the body in the image is one of a human body, an animal, and a cartoon character.
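Claims 4 to 6 drive each feature point with one of four action primitives (static, translation, rotation, merge), each carrying a duration in frames. The following Python sketch illustrates how a feature point's current-frame position could be computed from one such primitive. The function name, dictionary layout, and linear interpolation over the duration are illustrative assumptions rather than details from the patent, and only the in-plane rotation angle of the rotation primitive is modelled.

```python
import math

def position_at_frame(initial, primitive, frame):
    """Return a feature point's (x, y) at `frame` under one action primitive."""
    x0, y0 = initial
    kind = primitive["kind"]
    duration = primitive.get("frames", 1)
    t = min(frame / duration, 1.0)      # fraction of the primitive completed

    if kind == "static":                # static primitive: position unchanged
        return (x0, y0)
    if kind == "translate":             # 2-D displacement applied over `frames` frames
        dx, dy = primitive["displacement"]
        return (x0 + dx * t, y0 + dy * t)
    if kind == "rotate":                # rotate around another feature point (pivot)
        px, py = primitive["pivot"]
        angle = math.radians(primitive["angle"]) * t
        c, s = math.cos(angle), math.sin(angle)
        rx, ry = x0 - px, y0 - py
        return (px + c * rx - s * ry, py + s * rx + c * ry)
    if kind == "merge":                 # move toward a target feature point by a ratio
        tx, ty = primitive["target"]
        ratio = primitive["ratio"] * t
        return (x0 + (tx - x0) * ratio, y0 + (ty - y0) * ratio)
    raise ValueError(f"unknown primitive kind: {kind}")
```

Computed this way for every feature point, the current-frame positions would then feed the feature-line construction and feature-line-based deformation of claims 8 and 17, after which the deformed body regions are composited over the background image.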
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/077083 WO2012167475A1 (en) | 2011-07-12 | 2011-07-12 | Method and device for generating body animation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103052973A true CN103052973A (en) | 2013-04-17 |
CN103052973B CN103052973B (en) | 2015-12-02 |
Family
ID=47295365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201180001326.0A Expired - Fee Related CN103052973B (en) | 2011-07-12 | 2011-07-12 | Method and device for generating body animation |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103052973B (en) |
WO (1) | WO2012167475A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110473248A (*) | 2019-08-16 | 2019-11-19 | Shanghai Suobei Information Technology Co., Ltd. | A measurement method for constructing a 3D human model from pictures |
CN110490958A (*) | 2019-08-22 | 2019-11-22 | Tencent Technology (Shenzhen) Co., Ltd. | Animation drawing method, device, terminal and storage medium |
CN111597979A (*) | 2018-12-17 | 2020-08-28 | Beijing Didi Infinity Technology and Development Co., Ltd. | Target object clustering method and device |
CN113556600A (*) | 2021-07-13 | 2021-10-26 | Guangzhou Huya Technology Co., Ltd. | Drive control method and device based on time-sequence information, electronic equipment and readable storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106251389B (*) | 2016-08-01 | 2019-12-24 | Beijing Xiaoxiaoniu Creative Technology Co., Ltd. | Method and device for producing animation |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI220234B (*) | 2003-10-21 | 2004-08-11 | Ind Tech Res Inst | A method to simulate animated images for an object |
US20070035541A1 (*) | 2005-07-29 | 2007-02-15 | Michael Isner | Three-dimensional animation of soft tissue of characters using controls associated with a surface mesh |
CN101082985A (*) | 2006-12-15 | 2007-12-05 | Zhejiang University | Decomposition method for three-dimensional object shapes based on easy user interaction |
US20090153569A1 (*) | 2007-12-17 | 2009-06-18 | Electronics And Telecommunications Research Institute | Method for tracking head motion for 3D facial model animation from video stream |
CN101473352A (*) | 2006-04-24 | 2009-07-01 | Sony Corporation | Performance driven facial animation |
CN102074033A (*) | 2009-11-24 | 2011-05-25 | Xin'aote (Beijing) Video Technology Co., Ltd. | Method and device for animation production |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008141125A1 (en) * | 2007-05-10 | 2008-11-20 | The Trustees Of Columbia University In The City Of New York | Methods and systems for creating speech-enabled avatars |
CN101354795A (*) | 2008-08-28 | 2009-01-28 | Beijing Vimicro Co., Ltd. | Method and system for driving three-dimensional facial animation based on video |
CN101777195B (*) | 2010-01-29 | 2012-04-25 | Zhejiang University | Three-dimensional face model adjusting method |
CN101826217A (*) | 2010-05-07 | 2010-09-08 | Shanghai Jiao Tong University | Rapid generation method for facial animation |
- 2011-07-12: CN application CN201180001326.0A (patent CN103052973B), status: not active, Expired - Fee Related
- 2011-07-12: WO application PCT/CN2011/077083 (publication WO2012167475A1), status: active, Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111597979A (*) | 2018-12-17 | 2020-08-28 | Beijing Didi Infinity Technology and Development Co., Ltd. | Target object clustering method and device |
CN111597979B (*) | 2018-12-17 | 2023-05-12 | Beijing Didi Infinity Technology and Development Co., Ltd. | Target object clustering method and device |
CN110473248A (*) | 2019-08-16 | 2019-11-19 | Shanghai Suobei Information Technology Co., Ltd. | A measurement method for constructing a 3D human model from pictures |
CN110490958A (*) | 2019-08-22 | 2019-11-22 | Tencent Technology (Shenzhen) Co., Ltd. | Animation drawing method, device, terminal and storage medium |
CN110490958B (*) | 2019-08-22 | 2023-09-01 | Tencent Technology (Shenzhen) Co., Ltd. | Animation drawing method, device, terminal and storage medium |
CN113556600A (*) | 2021-07-13 | 2021-10-26 | Guangzhou Huya Technology Co., Ltd. | Drive control method and device based on time-sequence information, electronic equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN103052973B (en) | 2015-12-02 |
WO2012167475A1 (en) | 2012-12-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gain et al. | A survey of spatial deformation from a user-centered perspective | |
CN105678683B (en) | A kind of two-dimensional storage method of threedimensional model | |
US7570264B2 (en) | Rig baking | |
Stanculescu et al. | Freestyle: Sculpting meshes with self-adaptive topology | |
US20090179900A1 (en) | Methods and Apparatus for Export of Animation Data to Non-Native Articulation Schemes | |
Pan et al. | Sketch-based skeleton-driven 2D animation and motion capture | |
CN103052973A (en) | Method and device for generating body animation | |
Lin et al. | Metamorphosis of 3d polyhedral models using progressive connectivity transformations | |
Bhattacharjee et al. | A survey on sketch based content creation: from the desktop to virtual and augmented reality | |
US20040263518A1 (en) | Defrobulated angles for character joint representation | |
Chen et al. | A survey on 3d gaussian splatting | |
CN108664126A (en) | Deformable hand captures exchange method under a kind of reality environment | |
Cetinaslan et al. | Sketching manipulators for localized blendshape editing | |
Garcia et al. | Interactive applications for sketch-based editable polycube map | |
Tejera et al. | Animation control of surface motion capture | |
Yang et al. | Life-sketch: a framework for sketch-based modelling and animation of 3D objects | |
Çetinaslan | Position manipulation techniques for facial animation | |
US8659600B2 (en) | Generating vector displacement maps using parameterized sculpted meshes | |
Fukusato et al. | View-dependent formulation of 2.5D cartoon models | |
Coutinho et al. | Puppeteering 2.5D models | |
Ren et al. | Efficient facial reconstruction and real-time expression for VR interaction using RGB-D videos | |
Adzhiev et al. | Functionally based augmented sculpting | |
US20230377268A1 (en) | Method and apparatus for multiple dimension image creation | |
Bendels et al. | Image and 3D-Object Editing with Precisely Specified Editing Regions. | |
Li et al. | Efficient creation of 3D organic models from sketches and ODE-based deformations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2017-07-04
Address after: Unit 2401, unit 371-1, public education building, No. 5 Road, Tianhe District, Guangzhou City, Guangdong Province, 510640
Patentee after: Guangdong Gaohang Intellectual Property Operation Co., Ltd.
Address before: Huawei headquarters office building, Bantian, Longgang District, Shenzhen City, Guangdong Province, 518129
Patentee before: Huawei Technologies Co., Ltd.
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Peng Bixian
Inventor before: Dong Lanfang
Inventor before: Chen Jiahui
Inventor before: Li Dexu
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 2017-09-08
Address after: No. 26, New Street, Longzheng Town, Renshou County, Meishan City, Sichuan Province, 620500
Patentee after: Peng Bixian
Address before: Unit 2401, unit 371-1, public education building, No. 5 Road, Tianhe District, Guangzhou City, Guangdong Province, 510640
Patentee before: Guangdong Gaohang Intellectual Property Operation Co., Ltd.
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2015-12-02
Termination date: 2018-07-12