CN103052973A - Method and device for generating body animation - Google Patents


Info

Publication number
CN103052973A
CN103052973A
Authority
CN
China
Prior art keywords
characteristic point
image
action
predefined
module
Prior art date
Legal status (an assumption by Google Patents, not a legal conclusion)
Granted
Application number
CN2011800013260A
Other languages
Chinese (zh)
Other versions
CN103052973B (en)
Inventor
董兰芳
陈家辉
李德旭
Original Assignee
Huawei Technologies Co Ltd
Priority date (an assumption by Google Patents, not a legal conclusion)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN103052973A
Application granted
Publication of CN103052973B
Expired - Fee Related (current status)
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites

Abstract

A method and device for generating body animation are provided, relating to the field of animation technology. The method comprises: acquiring the initial positions of the feature points of a body in an image; reading the action sequence of the feature points and calculating the positions of the feature points in the current frame according to the action sequence; deforming the image of the body region in the image according to the initial positions of the feature points and their positions in the current frame; and overlaying the deformed image of the body region on the background image of the image to generate the animation. Because the animation is generated with two-dimensional image deformation, no three-dimensional model needs to be built, which reduces the workload. In addition, since a single image can be driven to form a body animation of any action simply by modifying the action sequence, no analysis or clustering of a large number of images is needed; the method is therefore easy to apply and computationally inexpensive.

Description

Method and device for generating body animation

Technical Field

The present invention relates to the field of animation technology, and in particular to a method and device for generating body animation.

Background Technology
Image body animation technology refers to processing one or more images containing a human body with a computer to generate a human body animation. At present, image body animation techniques fall mainly into two categories: techniques based on a three-dimensional model and techniques based on an image sequence.
Techniques based on a three-dimensional model: the human body in the input image is first mapped to a three-dimensional human body model that closely matches it, and the animation is then generated by driving the model through the specified actions. The advantage of this approach is that once the three-dimensional model has been obtained, an animation of any action can be generated.
Techniques based on an image sequence: several motion images of the human body are first obtained from the input images, and the animation effect is then obtained by image morphing between these motion images. The advantage of this approach is that the amount of calculation is small and the animation is fast.
In the course of realizing the present invention, the inventors found that the prior art has at least the following problems:
For the techniques based on a three-dimensional model, the process of mapping the body in the image to a three-dimensional model is cumbersome and labor-intensive. For the techniques based on an image sequence, because the animation is obtained by morphing between the actions captured in the input images, only animations of actions already present in those images can be produced.

Summary of the Invention
To reduce the workload of generating body animation, and to be able to generate body animations of various actions from a single image, embodiments of the present invention provide a method and device for generating body animation. The technical scheme is as follows:
In one aspect, a method for generating body animation is provided. The method includes:
obtaining the initial positions of the feature points of a body in an image;
reading the action sequence of the feature points, and calculating the positions of the feature points in the current frame according to the action sequence;
deforming the image of the body region in the image according to the initial positions of the feature points and their positions in the current frame;
overlaying the deformed image of the body region on the background image of the image to generate the animation.
In another aspect, a device for generating body animation is provided. The device includes:
an acquisition module, configured to obtain the initial positions of the feature points of a body in an image;
a computing module, configured to read the action sequence of the feature points and to calculate the positions of the feature points in the current frame according to the action sequence;
a deformation module, configured to deform the image of the body region in the image according to the initial positions obtained by the acquisition module and the current-frame positions calculated by the computing module;
a generation module, configured to overlay the image of the body region deformed by the deformation module on the background image of the image to generate the animation.
The beneficial effects of the technical scheme provided by the embodiments of the present invention are as follows:
The action sequence of the feature points is read, the positions of the feature points in the current frame are calculated from the action sequence, and the image is then deformed from the initial positions of the feature points toward the current-frame positions, thereby generating a two-dimensional body animation. Because the method uses two-dimensional image deformation, no three-dimensional model needs to be built, which reduces the workload. Moreover, because the body is driven by the action sequence, a single image can be driven to form a body animation of any action simply by modifying the action sequence; unlike the prior art, there is no need to analyze and cluster the body movements in a large number of images to obtain different types of actions. The method is therefore simple to implement and computationally inexpensive.

Brief Description of the Drawings
To illustrate the technical scheme of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the method for generating body animation provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the geometric parameter modeling method provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of the upper-body image provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of the feature points of the body provided by Embodiment 2 of the present invention;
Fig. 5 is a schematic diagram of the division of the body regions provided by Embodiment 2 of the present invention;
Fig. 6a is a schematic diagram of the first image repair provided by Embodiment 2 of the present invention;
Fig. 6b is a schematic diagram of the second image repair provided by Embodiment 2 of the present invention;
Fig. 6c is a schematic diagram of the third image repair provided by Embodiment 2 of the present invention;
Fig. 7 is a flowchart of the method for predefining action sequences provided by Embodiment 2 of the present invention;
Fig. 8 is a flowchart of the method for generating body animation provided by Embodiment 2 of the present invention;
Fig. 9 is a schematic diagram of forming auxiliary feature lines provided by Embodiment 2 of the present invention;
Fig. 10 is a schematic structural diagram of the device for generating body animation provided by Embodiment 3 of the present invention;
Fig. 11 is a schematic structural diagram of another device for generating body animation provided by Embodiment 3 of the present invention;
Fig. 12 is a schematic structural diagram of another device for generating body animation provided by Embodiment 3 of the present invention;
Fig. 13 is a schematic structural diagram of another device for generating body animation provided by Embodiment 3 of the present invention.

Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
An embodiment of the present invention provides a method for generating body animation. For any given image containing a body, such as a human, an animal or a cartoon character, the method can generate an animation of that body. Referring to Fig. 1, the method flow includes:
101: Obtain the initial positions of the feature points of the body in the image;
102: Read the action sequence of the feature points, and calculate the positions of the feature points in the current frame according to the action sequence;
103: Deform the image of the body region in the image according to the initial positions of the feature points and their positions in the current frame;
104: Overlay the deformed image of the body region on the background image of the image to generate the animation.
In the method provided by this embodiment, the action sequence of the feature points is read, the positions of the feature points of the body in the current frame are calculated from the action sequence, and the image is then deformed from the initial positions of the feature points toward the current-frame positions, thereby generating a two-dimensional body animation. Because the method uses two-dimensional image deformation, no three-dimensional model needs to be built, which reduces the workload. Because the body is driven by the action sequence, a single image can be driven to form a body animation of any action simply by modifying the action sequence; there is no need, as in the prior art, to analyze and cluster the body movements in a large number of images to obtain different types of actions. The method is therefore simple to implement and computationally inexpensive.

Embodiment 2
An embodiment of the present invention provides a method for generating body animation. For any given image, the method can generate an animation of the body in the image, where the body may be a human, an animal or a cartoon character. The method can generate either upper-body or whole-body animations of such bodies; in this embodiment, generating an animation from an upper-body image of a human is taken as an example, but the method is not limited to this.
In the method provided by this embodiment, after the user or the system inputs the original image, geometric parameter modeling is first performed on the image, and a 2D (two-dimensional) animation is then generated from the resulting model. Referring to Fig. 2, the geometric parameter modeling flow includes:
201: Scan the original input image to obtain the predefined feature points and body regions of the body in the image.
Assume the original input image entered by the user is an upper-body image of a human, as shown in Fig. 3. To make the subsequent annotations easier to see, Fig. 3 shows only the outline of the upper-body image and omits the image detail inside the outline; in practice it would be a real upper-body image.
Specifically, the original input image is scanned, the body features in the image are analyzed, and the predefined feature points and body regions of the body are obtained.
202: Display the predefined feature points and body regions in the image.
Specifically, the predefined feature points and body regions are scaled to suitable positions and displayed in the image. Because the automatic positioning is sometimes not accurate enough, the displayed feature points and body regions may not correspond exactly to the corresponding positions on the body; the user is therefore allowed to refine the positioning of the displayed feature points and body regions, as described in step 203.
203: Save the positions of the predefined feature points and body regions after the user has dragged them onto the corresponding positions of the body in the image.
During positioning, the user can drag the feature points and body regions one by one to the correct positions on the body, and can also correct the polylines bounding the body regions, which is easy to operate. After the user finishes the revision, the positions of the feature points and body regions are saved.
In this embodiment there are 14 feature points, as shown in Fig. 4. The number of feature points can be increased or reduced: more feature points produce a finer animation, and fewer feature points produce a faster one.
In this embodiment, the body is divided into three regions, as shown in Fig. 5: the trunk region (region 0), the left-hand region (region 2) and the right-hand region (region 1). More body regions can be added; for example, separating the upper arm from the forearm makes it possible to handle images in which the two overlap. Each of the three regions is outlined with a polyline in order to separate the body from the background and, at the same time, to divide the body into parts for image deformation.
204: According to the saved, user-refined positions of the body regions, perform image repair on the image to obtain the image of each body region and the background image.
The image repair can use a common fast image inpainting algorithm based on the average gray value of the image, but is not limited to it.
Specifically, the image must be repaired once for each of the body regions that have been marked, each pass targeting a different region. Multiple passes are needed because an arm region may occlude the trunk region, or the body parts may occlude the background; repeated repair restores the image information of each layer, so that when the image is deformed, the arms, trunk and background are separated and do not affect one another.
For example, repairing Fig. 5, in which the body regions have been marked, requires three passes in total. First, region 2 in Fig. 5 is repaired, yielding Fig. 6a, which restores the information of the background and trunk areas occluded by the left-hand region. Then region 1 in Fig. 6a is repaired, yielding Fig. 6b, which restores the information of the background and trunk areas occluded by the left-hand and right-hand regions. Finally, region 0 in Fig. 6b is repaired, yielding Fig. 6c, which restores the information of the background area occluded by the left-hand, right-hand and trunk regions. Fig. 6c is thus the final background image, and every frame of the animation uses Fig. 6c as its background.
Further, once geometric parameter modeling of the original input image is complete, 2D animation generation can proceed. In this embodiment, the action sequences of the feature points read during 2D animation generation are predefined. Referring to Fig. 7, the method for predefining an action sequence includes:
701: Predefine the action primitives.
An action primitive represents a change in the position of a feature point and the number of frames the change lasts. The method provided by this embodiment predefines four action primitives: the still primitive, the translation primitive, the rotation primitive and the merge primitive.
Still primitive: indicates that the position of the corresponding feature point does not change; it has no parameters.
Translation primitive: indicates that the corresponding feature point translates by some displacement over some number of frames. It has 3 parameters: the two-dimensional displacement x and y of the feature point, and the number of frames the movement lasts. Many actions involve translation; shrugging, for example, translates feature points 6 and 7 in Fig. 4 upward by some displacement. Translation can also achieve exaggerated effects in a 2D animation: moving feature points 6 and 7 in Fig. 4 outward, for instance, makes the arms appear to bulk up suddenly. Translation amounts are expressed against a default image size of 1000*1000; for an input image of a different size, the coordinates are transformed according to the ratio of the actual size to the default size. That is, if the input image is iWidth*iHeight, then the translation amount x *= iWidth/1000 (and likewise for y). The data structure of the translation primitive can be designed as follows:
typedef struct STR_TRANS    // translation primitive
{
    bool bTrans;            // whether this is a translation
    int iX;                 // translation in x
    int iY;                 // translation in y
    int iTime;              // number of frames the translation lasts
} STR_TRANS;
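Since translation amounts are expressed against the default 1000*1000 canvas, the scaling step described above can be sketched as below. The function names are hypothetical; the computation is done in floating point to avoid the rounding loss of the integer division iWidth/1000.

```c
#include <assert.h>

/* Translation amounts in an action file are expressed against a default
 * canvas of 1000*1000. Scale them to the actual input image size, as in
 * x *= iWidth/1000 above, but computed in double to keep precision. */
int scale_trans_x(int x, int iWidth)  { return (int)(x * (double)iWidth / 1000.0); }
int scale_trans_y(int y, int iHeight) { return (int)(y * (double)iHeight / 1000.0); }
```

For a 500-pixel-wide input, a file displacement of 40 thus becomes an on-image displacement of 20.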
Rotation primitive: indicates that the corresponding feature point rotates by some angle around some other feature point over some number of frames. It has 4 parameters: the feature point being orbited, the rotation angle in the two-dimensional plane of the image, the rotation angle in the direction perpendicular to that plane, and the number of frames the rotation lasts. The orbited feature point is also called the rotation base point and is denoted by its feature point number. Most hand actions in an animation involve rotation, which is consistent with the skeletal motion of the body. The data structure of the rotation primitive can be designed as follows:

typedef struct STR_ROTATE    // rotation primitive
{
    bool bRotate;            // whether this is a rotation
    int iO;                  // rotation base point O
    float fTheta;            // rotation angle in the two-dimensional plane
    float fZ;                // rotation angle perpendicular to the two-dimensional plane
    int iTime;               // number of frames the rotation lasts
} STR_ROTATE;
Merge primitive: indicates that the corresponding feature point moves toward the position of some other feature point over some number of frames. It has 3 parameters: the target feature point, the movement ratio and the number of frames the movement lasts. The movement ratio is the fraction of the distance between the two points that the feature point covers toward the target: a ratio of 1 moves the feature point onto the target feature point, while a ratio of 0.5 moves it to the midpoint between the two. The merge operation compensates for a limitation of translation, which requires an exact displacement and therefore cannot reach a position that is only known relatively; for example, placing the left hand on the right shoulder cannot be done accurately with a translation because the coordinates of the right shoulder are not known in advance. The data structure of the merge primitive can be designed as follows:

typedef struct STR_MIX    // merge primitive
{
    bool bMix;            // whether this is a merge
    int iO;               // target feature point
    float fRate;          // movement ratio
    int iTime;            // number of frames the merge lasts
} STR_MIX;
702: Predefine actions from the action primitives and the feature points of the body.
Specifically, within one action each feature point corresponds to one action primitive. Predefining an action from the action primitives and the feature points therefore consists of confirming, one by one, how the position of each feature point changes during the action:
when the position of a feature point does not change, its change is represented by a still primitive;
when a feature point is to translate, its change is represented by a translation primitive;
when a feature point is to rotate around another feature point, its change is represented by a rotation primitive;
when a feature point is to move toward another feature point, its change is represented by a merge primitive.
Taking the body in Fig. 4 as an example, the data structure for defining an action can be designed as follows:

typedef struct STR_ACTION    // action
{
    bool bStill[14];            // feature point stays still
    STR_TRANS strTrans[14];     // feature point translates
    STR_ROTATE strRotate[14];   // feature point rotates
    STR_MIX strMix[14];         // feature point merges
    int iTime;                  // number of frames the action lasts
} STR_ACTION;

Here [14] means each array has 14 elements, because there are 14 feature points in Fig. 4. For each feature point, the primitive that represents its position change in the action is confirmed one by one, and the change and its parameters are recorded in the corresponding element of the corresponding array.
The number of frames the action lasts is determined by finding the maximum, over all feature points, of the number of frames each point's position change lasts, and taking that maximum as the action's duration.
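The duration rule above can be sketched as a small function. The structures are simplified mirrors of the ones defined in this document (only the fields needed here), and the function name is an assumption of this sketch.

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_POINTS 14

/* Simplified mirrors of the document's primitive structures: the
 * bTrans/bRotate/bMix flag selects the active primitive for a point,
 * and iTime is that primitive's duration in frames. */
typedef struct { bool bTrans;  int iTime; } STR_TRANS;
typedef struct { bool bRotate; int iTime; } STR_ROTATE;
typedef struct { bool bMix;    int iTime; } STR_MIX;

typedef struct {
    bool       bStill[NUM_POINTS];
    STR_TRANS  strTrans[NUM_POINTS];
    STR_ROTATE strRotate[NUM_POINTS];
    STR_MIX    strMix[NUM_POINTS];
    int        iTime;               /* duration of the whole action */
} STR_ACTION;

/* The action's duration is the longest duration among the per-point
 * primitives; still points contribute 0 frames. */
int action_duration(const STR_ACTION *a)
{
    int max = 0;
    for (int j = 0; j < NUM_POINTS; j++) {
        int t = 0;
        if (a->strTrans[j].bTrans)        t = a->strTrans[j].iTime;
        else if (a->strRotate[j].bRotate) t = a->strRotate[j].iTime;
        else if (a->strMix[j].bMix)       t = a->strMix[j].iTime;
        if (t > max) max = t;
    }
    return max;
}
```

For action 1 below, for instance, the two 10-frame rotations dominate and the action lasts 10 frames.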
Further, after the data structure of an action is defined, the file content representing the action can be written out. Let s denote the still primitive, t the translation primitive, r the rotation primitive and m the merge primitive. Three actions for the body in Fig. 4 are listed below:

s s s s s s s s s s r 8 90 60 10 r 9 -90 60 10 s s    // action 1
s s s s s s s s s s s s m 13 0.5 8 m 12 0.5 8    // action 2
s s s s s s s s s s t 0 -20 5 t 0 -20 5 t 0 -20 5 t 0 -20 5    // action 3

In action 1, the first ten s's indicate that feature points 0 to 9 do not move; "r 8 90 60 10" indicates that over 10 frames, feature point 10 rotates 90 degrees around feature point 8 in the two-dimensional plane of the image and 60 degrees in the direction perpendicular to that plane; "r 9 -90 60 10" indicates that over 10 frames, feature point 11 rotates minus 90 degrees around feature point 9 in the plane and 60 degrees perpendicular to it; the last two s's indicate that feature points 12 and 13 do not move. The animation effect actually formed by action 1 is that the left and right hands swing to the front over 10 frames.
In action 2, the first 12 s's indicate that feature points 0 to 11 do not move; "m 13 0.5 8" indicates that over 8 frames, feature point 12 moves to the midpoint of the line between feature points 12 and 13, and "m 12 0.5 8" indicates that over 8 frames, feature point 13 moves to the midpoint of the line between feature points 13 and 12. The animation effect actually formed by action 2 is that the fingertips of the two hands touch over 8 frames.
In action 3, the first ten s's indicate that feature points 0 to 9 do not move, and the last four "t 0 -20 5" entries indicate that feature points 10 to 13 are displaced upward by 20 (a displacement of (0, -20) in image coordinates) over 5 frames. The animation effect actually formed by action 3 is that the left and right hands move up.
In this step, multiple actions can be predefined for the input image.
703: Combine the predefined actions into an action sequence for the feature points.
Specifically, the predefined actions are combined to obtain an action sequence. For example, the content of one action sequence read for the body in Fig. 4 is as follows:
#act1#5
s s s s s s s s s s r 8 90 60 10 r 9 -90 60 10 s s
s s s s s s s s s s s s m 13 0.5 8 m 12 0.5 8
s s s s s s s s s s t 0 -20 5 t 0 -20 5 t 0 -20 5 t 0 -20 5
s s s s s s s s s s t 0 20 5 t 0 20 5 t 0 20 5 t 0 20 5
s s s s s s s s s s t 0 -20 5 t 0 -20 5 t 0 -20 5 t 0 -20 5
Here the action sequence is named act1 and decomposes into 5 actions in total.
In this step, multiple action sequences can be assembled in advance for the input image.
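As an illustration of how such an action line might be read, the sketch below tokenizes one line into per-point primitive records. The line format and the primitive letters s/t/r/m follow this document, but the parser itself, its names and its record layout are hypothetical, not the patent's implementation.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NUM_POINTS 14

/* One parsed primitive per feature point: kind is 's', 't', 'r' or 'm';
 * p holds up to 4 numeric parameters in the order they appear. */
typedef struct {
    char  kind;
    float p[4];
} PRIMITIVE;

/* Parse one action line such as
 *   "s s s s s s s s s s r 8 90 60 10 r 9 -90 60 10 s s"
 * into prims[0..NUM_POINTS-1]. Returns the number of feature points
 * parsed, or -1 on a malformed line. */
int parse_action_line(const char *line, PRIMITIVE prims[NUM_POINTS])
{
    char buf[512];
    strncpy(buf, line, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    int j = 0;
    for (char *tok = strtok(buf, " \t"); tok && j < NUM_POINTS;
         tok = strtok(NULL, " \t")) {
        char kind = tok[0];
        int nparams;
        switch (kind) {
        case 's': nparams = 0; break;   /* still: no parameters */
        case 't': nparams = 3; break;   /* translate: x y frames */
        case 'm': nparams = 3; break;   /* merge: target rate frames */
        case 'r': nparams = 4; break;   /* rotate: base theta z frames */
        default:  return -1;
        }
        prims[j].kind = kind;
        for (int k = 0; k < nparams; k++) {
            tok = strtok(NULL, " \t");
            if (!tok || sscanf(tok, "%f", &prims[j].p[k]) != 1) return -1;
        }
        j++;
    }
    return j;
}
```

Applied to the first line of act1, this yields still primitives for points 0 to 9, rotation records for points 10 and 11, and still primitives for points 12 and 13.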
Further, after geometric parameter modeling is complete and the action sequences have been predefined, the body animation is generated. Referring to Fig. 8, the method for generating the body animation includes:
801: Read the action sequence of the feature points;
Specifically, the action sequence for the animation to be generated is read into memory, where it can be split into several actions. Step 801 may be performed either before step 802 or before step 803; the embodiment of the present invention places no particular limit on this.
802: Obtain the initial positions of the feature points of the body in the image;
Specifically, the initial positions of the feature points of the body in the image can be obtained in either of the following two ways:
obtain the positions of the feature points of the body in the original input image, and use them as the initial positions; or
obtain the positions of the feature points of the body saved in the image when the previous action was completed, and use them as the initial positions.
As these two ways show, the initial position of a feature point has two possible definitions. One is the position of the feature point before the image has undergone any animation, that is, the position the user marked in the original input image during the initial geometric parameter modeling. The other is the position of the feature point saved in the image when the previous action in the action sequence was completed.
In the method provided by this embodiment, with the former definition the generated animation is of a calisthenics-like form: each action in the action sequence deforms the image from the body's initial feature point positions. With the latter definition the animation is sequential: each action deforms the image from the feature point positions at the completion of the previous action. The method provided by this embodiment does not restrict which definition is used; the initial positions may be obtained in either of the two ways above.
803: Calculate the positions of the feature points in the current frame according to the action sequence;
Specifically, the action primitive corresponding to each feature point in the current frame is obtained from the action sequence, and the position of each feature point in the current frame is calculated from the parameters of that primitive. Here the current frame is the frame about to be rendered.
For example, it must first be known which frame the current frame is: the frame counter is initialized to iTime = 0 when an action starts; each time a frame of the animation is completed, iTime = iTime + 1; when iTime reaches the duration of the current action, iTime is reset to zero and counting starts again.
Once the current frame number is known, the position of each feature point of the body in the current frame is calculated. Denote the feature point by j:
For the translation primitive, the displacement is:
(strTrans[j].iX * iTime / strTrans[j].iTime,
 strTrans[j].iY * iTime / strTrans[j].iTime);
Referring to the data structure defined earlier: strTrans[j].iX is the total displacement of feature point j in the x direction, strTrans[j].iY is the total displacement in the y direction, strTrans[j].iTime is the number of frames the translation lasts, and iTime is the current frame number.
For the rotation primitive, the rotation vector V0 is computed first. Then the vector V1 after rotation in the two-dimensional plane of the image is computed, with rotation angle strRotate[j].fTheta * iTime / strRotate[j].iTime; then the vector V2 after rotation perpendicular to that plane is computed, with rotation angle strRotate[j].fZ * iTime / strRotate[j].iTime. The rotation base point plus V2 is the new position of feature point j. The meaning of each parameter is given in the data structure defined earlier.
For the convergence primitive, first compute the total motion vector V0 (from the characteristic point to its target point); the motion vector of the current frame is then V1 = V0 * strMix[j].fRate * iTime / strMix[j].iTime, where the meaning of each parameter is given in the data structure defined earlier.
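The convergence computation follows directly from the formula (function and parameter names are ours):

```python
def converge_offset(point, target, fRate, T, iTime):
    """Convergence primitive: motion vector of the current frame.

    V0 is the total vector from the point to its target characteristic
    point; fRate is the fraction of that distance to cover, T the number
    of frames the move lasts."""
    t = min(iTime, T)                                  # clamp after the end
    v0 = (target[0] - point[0], target[1] - point[1])  # total motion vector V0
    return (v0[0] * fRate * t / T, v0[1] * fRate * t / T)  # V1 for this frame
```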
In the above calculations, if iTime exceeds the number of frames an action primitive lasts, iTime is clamped to that duration in the calculation.
The positions of the characteristic points in the current frame are calculated before each frame starts. Some characteristic points are associated: when one moves, the positions of its associated points must also be recalculated. For example, if characteristic point 10 in Fig. 4 rotates around characteristic point 8, characteristic point 12 must rotate at the same time. The associated feature point groups in the example of Fig. 4 are:
Groups associated in translation: 6 (8, 10, 12), 7 (9, 11, 13)
Groups associated in rotation: 8 (10, 12), 9 (11, 13)
Groups associated in convergence: 10 (12), 11 (13), 12 (10), 13 (11)
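The association tables can be kept as a simple mapping, so that whatever transform a primitive applies to a point is also applied to its associated points (a sketch using the translation groups from the Fig. 4 example):

```python
# Translation association groups from the Fig. 4 example: moving point 6
# also moves points 8, 10 and 12; moving point 7 also moves 9, 11 and 13.
TRANSLATE_ASSOC = {6: [8, 10, 12], 7: [9, 11, 13]}

def points_to_translate(point_id):
    """All characteristic points that must receive the same translation."""
    return [point_id] + TRANSLATE_ASSOC.get(point_id, [])
```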
804: According to the initial positions of the characteristic points and their positions in the current frame, apply image deformation to the image of each body region;
Optionally, the characteristic points of the body in the image may first be restored to their initial positions according to the acquired initial positions; each frame of the animation is then deformed from this initial-position state, which is obtained by image deformation before the frame is rendered.
Further, deforming the image of each body region according to the initial positions of the characteristic points and their positions in the current frame specifically includes: connecting the characteristic points according to their initial positions and the associations between them, obtaining the initial feature lines; connecting the characteristic points according to their positions in the current frame and the associations between them, obtaining the current-frame feature lines; and, according to the initial feature lines and the current-frame feature lines, applying feature-line-based image deformation to the image of each body region. The embodiment of the present invention uses feature-line-based morphing because the motion of bodies such as humans and animals is driven by bones.
Further, since the deformation is continuous, not every pixel needs to be calculated during deformation: the values of uncalculated pixels can be obtained by interpolating between calculated ones.
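Feature-line-based deformation of this kind is commonly implemented as a field warp in the style of Beier and Neely; the sketch below shows the core coordinate mapping for a single feature line, which is an assumption about the concrete morphing method used here (the full algorithm blends the mappings of all feature lines with distance-based weights):

```python
def warp_point(x, p_dst, q_dst, p_src, q_src):
    """Map a point x (in the destination frame) back to the source image
    using one feature line: P-Q in the destination, P'-Q' in the source."""
    dx, dy = q_dst[0] - p_dst[0], q_dst[1] - p_dst[1]
    len2 = dx * dx + dy * dy
    # (u, v): fractional position along the line and signed distance from it
    u = ((x[0] - p_dst[0]) * dx + (x[1] - p_dst[1]) * dy) / len2
    v = ((x[0] - p_dst[0]) * -dy + (x[1] - p_dst[1]) * dx) / len2 ** 0.5
    sx, sy = q_src[0] - p_src[0], q_src[1] - p_src[1]
    slen = (sx * sx + sy * sy) ** 0.5
    # the same (u, v) relative to the source line gives the source pixel
    return (p_src[0] + u * sx + v * -sy / slen,
            p_src[1] + u * sy + v * sx / slen)
```

When the two feature lines coincide, every point maps to itself, which is a quick sanity check of the mapping.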
805: Cover the deformed body-region images onto the background image to generate the animation.
Steps 801 to 805 above generate one frame of animation; to generate the next frame, steps 802 to 805 are repeated, and continuous looping produces a continuous body animation.
When performing feature-line-based deformation with relatively few feature lines, deformation diffusion easily appears in regions far from the feature lines and in regions deformed by a large amount; in tests, for example, diffusion appeared when an arm was rotated by more than 90 degrees. To avoid this, auxiliary feature lines can be added on the outside of the body regions, i.e. on the body contour, on top of the feature lines described above, so that the body is surrounded (perhaps not completely) by auxiliary feature lines. No deformation diffusion then appears inside the auxiliary lines, nor in outside regions close to them.
For example, referring to Fig. 9, the feature lines of the torso region are 0-1, 1-2, 1-3, 2-4, 3-5, 0-6 and 0-7. The feature lines of the left-arm region are 6-8, 8-10 and 10-12; replicating each of them once to both sides yields the new feature lines 6'-8', 6''-8'', 8'-10', 8''-10'', 10'-12' and 10''-12''. The feature lines of the right-arm region are 7-9, 9-11 and 11-13; replicating each of them once to both sides yields the new feature lines 7'-9', 7''-9'', 9'-11', 9''-11'', 11'-13' and 11''-13''. The replication of the left-arm feature lines proceeds as follows: first compute the length L1 of line 6-8 and take L = L1/5 as the distance between a replicated feature line and the original. Then translate each left-arm feature line by L along the two perpendicular directions, obtaining two new feature lines for each original. Finally, compute the intersections of adjacent new lines on the same side, which are the intersections corresponding to the original intersection points: for example, replicating 6-8, 8-10 and 10-12 to one side yields segments 6'-8-1, 8-2-10-1 and 10-2-12'; the intersection of 6'-8-1 and 8-2-10-1 gives 8', and the intersection of 8-2-10-1 and 10-2-12' gives 10'.
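The replication step translates a feature line by the offset L (one fifth of the length of line 6-8, per the text) along both perpendicular directions; a sketch (function name and tuple representation are ours):

```python
def replicate_line(p, q, offset):
    """Offset the feature line P-Q to both sides by `offset` pixels,
    returning the two auxiliary lines (P'-Q' and P''-Q'')."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = (dx * dx + dy * dy) ** 0.5
    nx, ny = -dy / length, dx / length  # unit normal to the line
    side1 = ((p[0] + nx * offset, p[1] + ny * offset),
             (q[0] + nx * offset, q[1] + ny * offset))
    side2 = ((p[0] - nx * offset, p[1] - ny * offset),
             (q[0] - nx * offset, q[1] - ny * offset))
    return side1, side2
```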
In the embodiment of the present invention, the body posture in the original input image supplied by the user or the system may be arbitrary and is not specifically limited: for a body in any posture, the method provided by the embodiment can locate the characteristic points and body regions, predefine a suitable action sequence for it, and generate a body animation. However, to make the animation smoother, richer in actions and better in effect, and to keep the predefinition of action sequences simple, the original input image may be required to show the body standing at attention with both hands hanging naturally. If the input image does not show this posture, the method provided by the embodiment may first deform it into such an image, save the deformed image as the default image for subsequent animation deformation, and generate the action sequences from that default image.
With the method provided by the embodiment of the present invention, an image containing a body such as a human or an animal is input; the characteristic points and body regions are located; the body-region images are inpainted, and the repaired background image serves as the animation background; the characteristic point positions of every frame are calculated from the action sequence being read, and the image is deformed frame by frame based on the initial positions of the characteristic points, thereby generating a two-dimensional body animation. Because the method uses two-dimensional image deformation, calculating the new positions of the characteristic points in each frame from the action sequence, forming feature lines, and then forming the image of the new frame to produce the animation effect, no three-dimensional model needs to be built, which reduces the workload. The method proposes four action primitives of two-dimensional body animation, which are combined into actions and action sequences, and the action sequence drives the body to move; therefore a single image can be driven to perform any action simply by modifying the action sequence, without analyzing and clustering the body movements in large numbers of images, as the prior art does, to obtain different types of action, so the method is simple to implement and computationally light. In addition, the translation primitive can produce exaggerated effects in the two-dimensional animation; for instance, suddenly translating the shoulder characteristic points outward, as mentioned above, makes the body appear to bulk up abruptly.
Embodiment three
The embodiment of the present invention provides a device for generating body animation. For any given image, the device can generate an animation of the body in the image, where the body may be a human, an animal or a cartoon character, and either an upper-body animation or a whole-body animation can be generated. Referring to Figure 10, the device includes:
an acquisition module 1001, configured to obtain the initial positions of the characteristic points of the body in the image; a computing module 1002, configured to read the action sequence of the characteristic points and calculate the positions of the characteristic points in the current frame from the action sequence;
a deformation module 1003, configured to deform the image of each body region according to the initial positions of the characteristic points obtained by the acquisition module 1001 and the current-frame positions calculated by the computing module 1002;
a generation module 1004, configured to cover the body-region images deformed by the deformation module 1003 onto the background image in the image to generate the animation.
Further, referring to Figure 11, the device also includes:
a first predefined module 1005, configured to scan the original input image before the acquisition module 1001 obtains the initial positions of the characteristic points of the body, obtaining the characteristic points and body regions predefined for the body in the image; for the specific implementation of the first predefined module 1005, see step 201 of Embodiment 2, which is not repeated here;
a processing module 1006, configured to display the characteristic points and body regions predefined by the first predefined module 1005 in the image, and to preserve their positions after the user drags them to the corresponding locations on the body in the image for accurate positioning; for the specific implementation of the processing module 1006, see steps 202 and 203 of Embodiment 2, which are not repeated here.
Further, referring to Figure 12, the device also includes:
a repair module 1007, configured to carry out image repair on the image according to the body-region positions preserved by the processing module 1006 after accurate positioning by the user, obtaining the image of each body region and the background image; for the specific implementation of the repair module 1007, see step 204 of Embodiment 2, which is not repeated here.
Yet further, referring to Figure 13, the device also includes:
a second predefined module 1008, configured to predefine action primitives before the computing module 1002 reads the action sequence of the characteristic points, where an action primitive represents the change of a characteristic point's position and the number of frames the change lasts, and the action primitives include a static primitive, a translation primitive, a rotation primitive and a convergence primitive; for the specific implementation of the second predefined module 1008, see step 701 of Embodiment 2, which is not repeated here;
a third predefined module 1009, configured to predefine actions according to the action primitives predefined by the second predefined module 1008 and the characteristic points of the body, each characteristic point in an action corresponding to one action primitive; for the specific implementation of the third predefined module 1009, see step 702 of Embodiment 2, which is not repeated here;
a composite module 1010, configured to combine the actions predefined by the third predefined module 1009 into the action sequence of the characteristic points; for the specific implementation of the composite module 1010, see step 703 of Embodiment 2, which is not repeated here.
Specifically, the third predefined module 1009 is configured to confirm, one by one, the position change of each characteristic point of the body in the action: when the position of a characteristic point does not change, its position change is represented by the static primitive; when the characteristic point is to be translated, its position change is represented by the translation primitive, whose parameters include the two-dimensional displacement of the point and the number of frames the move lasts; when the characteristic point is to rotate around another characteristic point, its position change is represented by the rotation primitive, whose parameters include the characteristic point rotated around, the rotation angle in the two-dimensional plane of the image, the rotation angle about the vertical direction of that plane, and the number of frames the rotation lasts; when the characteristic point is to move toward another characteristic point, its position change is represented by the convergence primitive, whose parameters include the target characteristic point, the moving ratio and the number of frames the move lasts.
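The parameters enumerated above suggest records along these lines (a sketch; field names follow the `strTrans`/`strRotate`/`strMix` fields used earlier in the text, and the class names are ours):

```python
from dataclasses import dataclass

@dataclass
class Translate:   # translation primitive
    iX: int        # total displacement in the X direction
    iY: int        # total displacement in the Y direction
    iTime: int     # number of frames the move lasts

@dataclass
class Rotate:      # rotation primitive
    center: int    # id of the characteristic point rotated around
    fTheta: float  # rotation angle in the two-dimensional image plane
    fZ: float      # rotation angle about the vertical direction of the plane
    iTime: int     # number of frames the rotation lasts

@dataclass
class Mix:         # convergence primitive
    target: int    # id of the target characteristic point
    fRate: float   # fraction of the distance to move
    iTime: int     # number of frames the move lasts
```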
Further, the computing module 1002 includes:
an acquiring unit, configured to obtain from the action sequence the action primitive corresponding to each characteristic point in the current frame, where the current frame is the frame about to be rendered;
a computing unit, configured to calculate the position of each characteristic point in the current frame according to the parameters in the action primitives obtained by the acquiring unit.
For the specific implementation of the computing module 1002, see step 803 of Embodiment 2, which is not repeated here.
Optionally, the acquisition module 1001 is configured to obtain the positions of the characteristic points of the body in the original input image and use the obtained positions as the initial positions of the characteristic points; or
the acquisition module 1001 is configured to obtain the positions of the characteristic points of the body in the image saved when the previous action was completed and use the obtained positions as the initial positions of the characteristic points.
For the specific implementation of the acquisition module 1001, see step 802 of Embodiment 2, which is not repeated here.
Further, the deformation module 1003 is configured to connect the characteristic points according to their initial positions and the associations between them, obtaining the initial feature lines; connect the characteristic points according to their positions in the current frame and the associations between them, obtaining the current-frame feature lines; and apply feature-line-based image deformation to the image of each body region according to the initial and current-frame feature lines. For the specific implementation of the deformation module 1003, see step 804 of Embodiment 2, which is not repeated here.
In the embodiment of the present invention, the body posture in the original input image supplied by the user or the system may be arbitrary and is not specifically limited: for a body in any posture, the method provided by the embodiment can locate the characteristic points and body regions, predefine a suitable action sequence for it, and generate a body animation. However, to make the animation smoother, richer in actions and better in effect, and to keep the predefinition of action sequences simple, the original input image may be required to show the body standing at attention with both hands hanging naturally. If the input image does not show this posture, the method provided by the embodiment may first deform it into such an image, save the deformed image as the default image for subsequent animation deformation, and generate the action sequences from that default image.
In summary, in the embodiment of the present invention, an image containing a body such as a human or an animal is input; the characteristic points and body regions are located; the body-region images are inpainted, and the repaired background image serves as the animation background; the characteristic point positions of every frame are calculated from the action sequence being read, and the image is deformed frame by frame based on the initial positions of the characteristic points, thereby generating a two-dimensional body animation. Because the method uses two-dimensional image deformation, calculating the new positions of the characteristic points in each frame from the action sequence, forming feature lines, and then forming the image of the new frame to produce the animation effect, no three-dimensional model needs to be built, which reduces the workload. The method proposes four action primitives of two-dimensional body animation, which are combined into actions and action sequences, and the action sequence drives the body to move; therefore a single image can be driven to perform any action simply by modifying the action sequence, without analyzing and clustering the body movements in large numbers of images, as the prior art does, so the method is simple to implement and computationally light. In addition, the translation primitive can produce exaggerated effects; for instance, suddenly translating the shoulder characteristic points outward makes the body appear to bulk up abruptly.
It should be noted that when the device for generating body animation provided by the above embodiment generates an animation, the division into the functional modules described above is only an example; in practical applications, the above functions can be assigned to different functional modules as needed, i.e. the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the device for generating body animation provided by the above embodiment and the method embodiments for generating body animation belong to the same concept; its specific implementation is described in the method embodiments and is not repeated here.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disc.
The foregoing is only the preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included in the protection scope of the invention.

Claims (15)

1st, a method for generating body animation, characterised in that the method includes:
Obtaining the initial position of the characteristic point of the body in the image;
Reading the action sequence of the characteristic point, and calculating the position of the characteristic point in the current frame according to the action sequence; according to the initial position of the characteristic point and its position in the current frame, carrying out image deformation on the image of the body region in the image;
Covering the image of the body region after deformation on the background image in the image, generating the animation.
2nd, the method according to claim 1, characterised in that before obtaining the initial position of the characteristic point of the body in the image, the method also includes:
Scanning the original input image, obtaining the characteristic points and body regions predefined for the body in the image;
Displaying the predefined characteristic points and body regions in the image;
Preserving the positions obtained after the user drags the displayed predefined characteristic points and body regions to the corresponding locations on the body in the image for accurate positioning.
3rd, the method according to claim 2, characterised in that after preserving the positions obtained after the user drags the displayed predefined characteristic points and body regions to the corresponding locations on the body in the image for accurate positioning, the method also includes: according to the preserved positions of the body regions accurately positioned by the user, carrying out image repair on the image, obtaining the image of each body region and the background image.
4th, the method according to claim 1, characterised in that before reading the action sequence of the characteristic point, the method also includes:
Predefining action primitives, where an action primitive is used to represent the change of a characteristic point's position and the number of frames the change lasts, and the action primitives include a static primitive, a translation primitive, a rotation primitive and a convergence primitive;
Predefining actions according to the action primitives and the characteristic points of the body, each characteristic point in an action corresponding to one action primitive;
Combining the predefined actions into the action sequence of the characteristic point.
5th, the method according to claim 4, characterised in that predefining actions according to the action primitives and the characteristic points of the body includes:
Confirming, one by one, the position change of each characteristic point of the body in the action;
When the position of the characteristic point is confirmed to be constant, representing its position change with the static primitive; when the characteristic point is confirmed to be translated, representing its position change with the translation primitive, whose parameters include the two-dimensional displacement of the characteristic point and the number of frames the move lasts;
When the characteristic point is confirmed to rotate around another characteristic point, representing its position change with the rotation primitive, whose parameters include the characteristic point rotated around, the rotation angle in the two-dimensional plane of the image, the rotation angle about the vertical direction of that plane, and the number of frames the rotation lasts;
When the characteristic point is confirmed to move toward another characteristic point, representing its position change with the convergence primitive, whose parameters include the target characteristic point, the moving ratio and the number of frames the move lasts.
6th, the method according to claim 5, characterised in that calculating the position of the characteristic point in the current frame according to the action sequence includes:
Obtaining from the action sequence the action primitive corresponding to each characteristic point in the current frame;
Calculating the position of each characteristic point in the current frame according to the parameters in the action primitive corresponding to it.
7th, the method according to claim 1, characterised in that obtaining the initial position of the characteristic point of the body in the image includes:
Obtaining the position of the characteristic point of the body in the original input image, and using the obtained position as the initial position of the characteristic point;
Or,
Obtaining the position of the characteristic point of the body in the image saved when a previous action is completed, and using the obtained position as the initial position of the characteristic point.
8th, the method according to claim 1, characterised in that carrying out image deformation on the image of the body region in the image according to the initial position of the characteristic point and its position in the current frame includes:
Connecting the characteristic points according to their initial positions and the associations between them, obtaining initial feature lines;
Connecting the characteristic points according to their positions in the current frame and the associations between them, obtaining current-frame feature lines;
According to the initial feature lines and the current-frame feature lines, carrying out feature-line-based image deformation on the image of each body region in the image.
9th, the method according to any one of claims 1 to 8, characterised in that the body in the image is one of a human body, an animal and a cartoon character.
10th, a device for generating body animation, characterised in that the device includes:
An acquisition module, configured to obtain the initial position of the characteristic point of the body in the image;
A computing module, configured to read the action sequence of the characteristic point and calculate the position of the characteristic point in the current frame according to the action sequence;
A deformation module, configured to carry out image deformation on the image of the body region in the image according to the initial position of the characteristic point obtained by the acquisition module and the position of the characteristic point in the current frame calculated by the computing module;
A generation module, configured to cover the image of the body region deformed by the deformation module on the background image in the image, generating the animation.
11th, the device according to claim 10, characterised in that the device also includes:
A first predefined module, configured to scan the original input image before the acquisition module obtains the initial position of the characteristic point of the body in the image, obtaining the characteristic points and body regions predefined for the body in the image;
A processing module, configured to display the characteristic points and body regions predefined by the first predefined module in the image, and to preserve the positions obtained after the user drags the displayed predefined characteristic points and body regions to the corresponding locations on the body in the image for accurate positioning.
12th, the device according to claim 11, characterised in that the device also includes:
A repair module, configured to carry out image repair on the image according to the positions of the body regions accurately positioned by the user and preserved by the processing module, obtaining the image of each body region and the background image.
13th, the device according to claim 10, characterised in that the device also includes:
A second predefined module, configured to predefine action primitives before the computing module reads the action sequence of the characteristic point, where an action primitive is used to represent the change of a characteristic point's position and the number of frames the change lasts, and the action primitives include a static primitive, a translation primitive, a rotation primitive and a convergence primitive;
A third predefined module, configured to predefine actions according to the action primitives predefined by the second predefined module and the characteristic points of the body, each characteristic point in an action corresponding to one action primitive;
A composite module, configured to combine the actions predefined by the third predefined module into the action sequence of the characteristic point.
14th, the device according to claim 13, characterised in that the third predefined module is specifically configured to confirm, one by one, the position change of each characteristic point of the body in the action: when the position of the characteristic point is confirmed to be constant, representing its position change with the static primitive; when the characteristic point is confirmed to be translated, representing its position change with the translation primitive, whose parameters include the two-dimensional displacement of the characteristic point and the number of frames the move lasts; when the characteristic point is confirmed to rotate around another characteristic point, representing its position change with the rotation primitive, whose parameters include the characteristic point rotated around, the rotation angle in the two-dimensional plane of the image, the rotation angle about the vertical direction of that plane, and the number of frames the rotation lasts; when the characteristic point is confirmed to move toward another characteristic point, representing its position change with the convergence primitive, whose parameters include the target characteristic point, the moving ratio and the number of frames the move lasts.
15th, the device according to claim 14, characterised in that the computing module includes:
An acquiring unit, configured to obtain from the action sequence the action primitive corresponding to each characteristic point in the current frame; a computing unit, configured to calculate the position of each characteristic point in the current frame according to the parameters in the action primitives obtained by the acquiring unit.
16th, the device according to claim 10, characterised in that the acquisition module is specifically configured to obtain the position of the characteristic point of the body in the original input image and use the obtained position as the initial position of the characteristic point; or to obtain the position of the characteristic point of the body in the image saved when a previous action is completed and use the obtained position as the initial position of the characteristic point.
    17. The device according to claim 10, wherein the deformation module is specifically configured to connect the feature points according to their initial positions and the associations between them, obtaining initial feature lines; to connect the feature points according to their positions in the current frame and the associations between them, obtaining current-frame feature lines; and, according to the initial feature lines and the current-frame feature lines, to perform feature-line-based image deformation on the image in each body region of the image.
    18. The device according to any one of claims 10 to 17, wherein the body in the image is one of a human body, an animal, and a cartoon character.
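Claim 17's feature-line-based deformation is commonly realized with Beier-Neely field morphing, though the patent does not name a specific algorithm. The core of that family of methods is mapping each output pixel back to a source position via its (u, v) coordinates relative to a feature-line pair; a single-line sketch (multi-line versions blend such mappings with distance weights):

```python
import math

def warp_point(p, src_line, dst_line):
    """Map point p, given in destination space, back to source space using
    one feature-line pair (Beier-Neely style; an assumed algorithm, not one
    the patent specifies). Each line is a pair of endpoints (Q, R)."""
    px, py = p
    (qx, qy), (rx, ry) = dst_line      # current-frame feature line Q->R
    (sqx, sqy), (srx, sry) = src_line  # initial feature line Q'->R'
    dx, dy = rx - qx, ry - qy
    length2 = dx * dx + dy * dy
    # u: normalized position along the line; v: signed distance from it
    u = ((px - qx) * dx + (py - qy) * dy) / length2
    v = ((px - qx) * -dy + (py - qy) * dx) / math.sqrt(length2)
    sdx, sdy = srx - sqx, sry - sqy
    slen = math.sqrt(sdx * sdx + sdy * sdy)
    # reconstruct the same (u, v) relative to the source line
    return (sqx + u * sdx + v * -sdy / slen,
            sqy + u * sdy + v * sdx / slen)
```

If the source and destination lines coincide, the mapping is the identity; translating the line translates every point with it, which is the behavior the deformation module needs when a body region's feature line moves between the initial and current frames.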
CN201180001326.0A 2011-07-12 2011-07-12 Method and device for generating body animation Expired - Fee Related CN103052973B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/077083 WO2012167475A1 (en) 2011-07-12 2011-07-12 Method and device for generating body animation

Publications (2)

Publication Number Publication Date
CN103052973A true CN103052973A (en) 2013-04-17
CN103052973B CN103052973B (en) 2015-12-02

Family

ID=47295365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201180001326.0A Expired - Fee Related CN103052973B (en) Method and device for generating body animation

Country Status (2)

Country Link
CN (1) CN103052973B (en)
WO (1) WO2012167475A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251389B (en) * 2016-08-01 2019-12-24 北京小小牛创意科技有限公司 Method and device for producing animation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI220234B (en) * 2003-10-21 2004-08-11 Ind Tech Res Inst A method to simulate animated images for an object
US20070035541A1 (en) * 2005-07-29 2007-02-15 Michael Isner Three-dimensional animation of soft tissue of characters using controls associated with a surface mesh
CN101082985A (en) * 2006-12-15 2007-12-05 浙江大学 Decompounding method for three-dimensional object shapes based on user easy interaction
US20090153569A1 (en) * 2007-12-17 2009-06-18 Electronics And Telecommunications Research Institute Method for tracking head motion for 3D facial model animation from video stream
CN101473352A (en) * 2006-04-24 2009-07-01 索尼株式会社 Performance driven facial animation
CN102074033A (en) * 2009-11-24 2011-05-25 新奥特(北京)视频技术有限公司 Method and device for animation production

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008141125A1 (en) * 2007-05-10 2008-11-20 The Trustees Of Columbia University In The City Of New York Methods and systems for creating speech-enabled avatars
CN101354795A (en) * 2008-08-28 2009-01-28 北京中星微电子有限公司 Method and system for driving three-dimensional human face cartoon based on video
CN101777195B (en) * 2010-01-29 2012-04-25 浙江大学 Three-dimensional face model adjusting method
CN101826217A (en) * 2010-05-07 2010-09-08 上海交通大学 Rapid generation method for facial animation


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597979A (en) * 2018-12-17 2020-08-28 北京嘀嘀无限科技发展有限公司 Target object clustering method and device
CN111597979B (en) * 2018-12-17 2023-05-12 北京嘀嘀无限科技发展有限公司 Target object clustering method and device
CN110473248A (en) * 2019-08-16 2019-11-19 上海索倍信息科技有限公司 A kind of measurement method using picture construction human 3d model
CN110490958A (en) * 2019-08-22 2019-11-22 腾讯科技(深圳)有限公司 Animation method for drafting, device, terminal and storage medium
CN110490958B (en) * 2019-08-22 2023-09-01 腾讯科技(深圳)有限公司 Animation drawing method, device, terminal and storage medium
CN113556600A (en) * 2021-07-13 2021-10-26 广州虎牙科技有限公司 Drive control method and device based on time sequence information, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN103052973B (en) 2015-12-02
WO2012167475A1 (en) 2012-12-13

Similar Documents

Publication Publication Date Title
Gain et al. A survey of spatial deformation from a user-centered perspective
CN105678683B (en) A kind of two-dimensional storage method of threedimensional model
US7570264B2 (en) Rig baking
Stanculescu et al. Freestyle: Sculpting meshes with self-adaptive topology
US20090179900A1 (en) Methods and Apparatus for Export of Animation Data to Non-Native Articulation Schemes
Pan et al. Sketch-based skeleton-driven 2D animation and motion capture
CN103052973A (en) Method and device for generating body animation
Lin et al. Metamorphosis of 3D polyhedral models using progressive connectivity transformations
Bhattacharjee et al. A survey on sketch based content creation: from the desktop to virtual and augmented reality
US20040263518A1 (en) Defrobulated angles for character joint representation
Chen et al. A survey on 3D Gaussian splatting
CN108664126A (en) Deformable hand captures exchange method under a kind of reality environment
Cetinaslan et al. Sketching manipulators for localized blendshape editing
Garcia et al. Interactive applications for sketch-based editable polycube map
Tejera et al. Animation control of surface motion capture
Yang et al. Life-sketch: a framework for sketch-based modelling and animation of 3D objects
Çetinaslan Position manipulation techniques for facial animation
US8659600B2 (en) Generating vector displacement maps using parameterized sculpted meshes
Fukusato et al. View-dependent formulation of 2.5D cartoon models
Coutinho et al. Puppeteering 2.5D models
Ren et al. Efficient facial reconstruction and real-time expression for VR interaction using RGB-D videos
Adzhiev et al. Functionally based augmented sculpting
US20230377268A1 (en) Method and apparatus for multiple dimension image creation
Bendels et al. Image and 3D-Object Editing with Precisely Specified Editing Regions.
Li et al. Efficient creation of 3D organic models from sketches and ODE-based deformations

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170704

Address after: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee after: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: Huawei Technologies Co., Ltd.

CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Peng Bixian

Inventor before: Dong Lanfang

Inventor before: Chen Jiahui

Inventor before: Li Dexu

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170908

Address after: Zheng Zhen new street 620500 benevolence county of Meishan City, Sichuan Province dragon No. 26

Patentee after: Peng Bixian

Address before: 510640 Guangdong City, Tianhe District Province, No. five, road, public education building, unit 371-1, unit 2401

Patentee before: Guangdong Gaohang Intellectual Property Operation Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151202

Termination date: 20180712