CA2126570A1 - Memory-based method and apparatus for computer graphics - Google Patents

Memory-based method and apparatus for computer graphics

Info

Publication number
CA2126570A1
Authority
CA
Canada
Prior art keywords
subject
views
sample
control points
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002126570A
Other languages
French (fr)
Inventor
Tomaso A. Poggio
Roberto Brunelli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Istituto Trentino di Cultura
Massachusetts Institute of Technology
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2126570A1 publication Critical patent/CA2126570A1/en
Abandoned legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147: Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • H04N7/157: Conference systems defining a virtual conference space and using avatars or agents
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10: TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S: TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S345/00: Computer graphics processing and selective visual display systems
    • Y10S345/949: Animation processing method
    • Y10S345/95: Sprite processing
    • Y10S345/951: Key frame processing
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10: TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S: TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S345/00: Computer graphics processing and selective visual display systems
    • Y10S345/949: Animation processing method
    • Y10S345/953: Geometric processing

Abstract

A memory-based computer graphic animation system generates desired images and image sequences from 2-D views. The 2-D views provide sparse data from which intermediate views are generated based on a generalization and interpolation technique of the invention.
This technique is called a Hyper Basis function network and provides a smooth mapping between the given set of 2-D views and a resulting image sequence for animating a subject in a desired movement. A multilayer network provides learning of such mappings and is based on Hyper Basis Functions (HBFs). A special case of the HBFs is the radial basis function technique used in a preferred embodiment. The generalization/interpolation technique of the invention involves establishing working axes along which different views of the subject are taken. Different points along the working axes define different positions (geometrical and/or graphical) of the subject. For each of these points, control points for defining a view of the subject are either given or calculated by the interpolation/generalization of the present invention.

Description

WO 93/14467 PCT/US93/00131

MEMORY-BASED METHOD AND APPARATUS FOR COMPUTER GRAPHICS

Background of the Invention

Computer technology has brought about a variety of graphics and image processing systems, from graphics animation systems to pattern recognition systems (such as neural networks). Important to such systems is the accuracy of generated (output) images and in particular image sequences.
In general, graphic animation today is typically based on the three steps of (i) three dimensional modeling of surfaces of an object of interest, (ii) physically-based simulation of movements, and (iii) rendering or computer illumination of three dimensional images from calculated surfaces. The step of three dimensional modeling is typically based on a three dimensional description including x, y, z axis specifications and surface specifications. The resulting 3-D model is considered to be a physically-based model. To that end, every prospective view is computable. Movement, such as rotation of the whole model or portions thereof, and illumination are then accomplished through computer aided design (CAD) systems and the like. While this 3-D modeling and physical simulation approach to graphic animation is clearly fundamentally correct and potentially powerful, current results are still far from obtaining general purpose, realistic image sequences.
Before the use of three dimensional, physically-based models of objects for graphics animation, two dimensional images were used. In a two dimensional image of an object only a single perspective view is provided, i.e., is computable. Basically, a series of 2-D images of an object in respective poses provides the illusion of whole object or object part movement, and hence graphic animation.

This 2-D serial image approach to graphic animation is cumbersome and often requires repetition in drawing/providing portions of views from one view to succeeding views, and is thus riddled with many inefficiencies. The 3-D model-based approach with computer support was developed with the advent of computer technology to improve on and preclude the inefficiencies of the 2-D image graphic animation approach.
A technique for producing computer processed animation is discussed in International Patent Publication No. WO 89/09458. There, graphic movement sequences for a cartoon figure are produced by mimicking movement of an actor. Joints on the actor are associated with joints on the cartoon figure, which delineate segments of the figure. The segments are moveable in relation to one another at common joints.
For each segment of the figure, a number of key drawings in memory match various orientations of the segment on the actor. Key drawings are retrieved from memory to form each segment of the figure as the actor moves. The segments are joined together at the common joints to create the animated figure.
Computer animation is also discussed by T. Agui et al. in "Three-Dimensional Computer Animation by Trigonometric Approximation to Aperiodic Motion," Systems and Computers in Japan 19(5):82-88 (1988). The article discusses the use of trigonometric approximation and the application of Fourier expansion to computer animation. The technique can represent motion as a function and approximates animation for the human walking motion.
In addition to computer systems for generating graphics animation, there are computer systems for categorizing or recognizing patterns, or more generally mapping patterns, in 3-D or 2-D views/images, occluded images and/or noisy images. These computer systems are sometimes referred to as neural networks. Typically a neural network is predefined or trained to produce a target output for a certain input. Pairs of example mappings (certain input-target output), collectively called the "training set," are presented to the neural network during the predefining stage called learning.
During learning, internal parameters of the neural network are made to converge to respective values that produce the desired mappings for any subsequent given input. Thereafter, the neural network operates on subject input patterns and provides an output pattern according to the learned mapping.
Summary of the Invention
The present invention provides a computer method and apparatus for generating 3-D graphics and animation based on two dimensional views and novel approximation techniques instead of 3-D physically-based modeling as in the prior art. In general, the present invention method uses a few views of an object, such as a person, to generate intermediate views, under the control of a set of parameters. In particular, the parameters are points along working axes and the views correspond to different points along the working axes. The working axes may be any geometrical or graphical aspect for describing a subject. To that end, the parameters define, for example, angle of rotation about a longitudinal working axis, angle of tilt about an orthogonal working axis, three dimensional position (or orientation) of the object, or the expressions of a face of the person, and the like.
The steps of the method can be summarized as (1) providing two dimensional views of a subject; (2) for each view, setting parameters according to geometrical and/or graphical features (such as orientation/pose and expression) of the subject and assigning control point values to the set of parameters; and (3) generating intermediate views for desired values of the parameters by generalization/interpolation.
The set of views used during the parameter setting steps are preferably real high-resolution images of the object.
In a preferred embodiment, the step of assigning control point values to the set of parameters involves a neural network learning from examples in which each of the given views serves as a training set and is associated with respective parameter values, i.e., points along the working axes (such as the pose and expression of the person). The learning technique employed is that of the so called Hyper Basis Functions (HBF). A special case of the HBFs are the so called Radial Basis Functions, in which an unknown multivariate function is approximated by the superposition of a given number of radial functions whose centers are located on the points of the training set (i.e., control points).
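The radial basis function special case admits a compact numerical sketch. The following is a minimal illustration only, not the patent's implementation (the function names and scalar control-point values are hypothetical): a quantity that varies along a working axis is approximated by a superposition of Gaussians centered on the training parameters, so the interpolant reproduces the given sample values at those centers.

```python
import numpy as np

def fit_rbf(params, values, sigma=1.0):
    """Fit Gaussian RBF weights so the interpolant passes through
    the sample (parameter, control-point-value) pairs."""
    params = np.asarray(params, dtype=float).reshape(-1, 1)
    # Gram matrix of Gaussians centered on the training parameters.
    d2 = (params - params.T) ** 2
    G = np.exp(-d2 / (2 * sigma**2))
    c = np.linalg.solve(G, np.asarray(values, dtype=float))
    return params.ravel(), c

def eval_rbf(x, centers, c, sigma=1.0):
    """Evaluate the interpolant at a new parameter value x."""
    g = np.exp(-((x - centers) ** 2) / (2 * sigma**2))
    return float(g @ c)
```

Because the centers sit on the training points, evaluating at a training parameter returns the training value (to machine precision); intermediate parameters yield smoothly interpolated values.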
In accordance with one aspect of the present invention, apparatus for computer graphics animation includes a) a source for providing sample views of a subject, b) a preprocessing member coupled to the source, c) an image processor coupled to the preprocessing member, and d) display means coupled to the image processor. The source provides sample views of a subject, each sample view providing the subject in a different sample position along one or more working axes. Each working axis is formed of a plurality of points, each point having a different parameter value for defining a different position of the subject along the working axis. A sequence of the sample positions together with intermediate positions provides animation of the subject in a certain movement.
The preprocessing member is coupled to the source to receive the sample views and in turn determines values (i.e., locations) of control points in a set of control points of the subject in each sample view. For each sample view, the preprocessing member establishes an association between values of the control points and parameter values (i.e., points along the working axes) indicative of the sample position of the subject in that sample view.
The image processor is coupled to the preprocessing member and is supported by the established associations between control point values and parameter values of the sample positions of the subject. In particular, the image processor maps values of the control points for sample positions of the subject to values of the control points for desired intermediate positions (points) along the working axes to form intermediate views of the subject.
The image processor forms an image sequence from both the sample views and the formed intermediate views. The image sequence defines a prototype for animation of any object in a class containing the subject.
The display means is coupled to the image processor for locally or remotely displaying the image sequence to provide graphic animation of the subject in the certain movement.
Brief Description of the Drawings

The foregoing and other objects, features and advantages of the invention will be apparent from the following, more particular description of preferred embodiments of the drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Figure 1a is a schematic illustration of different views of a subject used in the training set of the neural network supporting the computer graphics apparatus and method of the present invention.
Figure 1b is a schematic illustration of intermediate views generated for the initial sample views of Figure 1a.
Figure 1c is a schematic illustration of intermediate views with corresponding polygonal areas for line filling or filling by gray scaling techniques.
Figure 1d is a schematic illustration of views of a subject taken along longitudinal and orthogonal working axes.
Figure 2 illustrates a computer-generated graphics animation image sequence based on five sample (training) views in the apparatus of the present invention.
Figure 3 illustrates graphics animation of a new subject from a common class of subjects as that of Figure 2, using the movements (image sequences) learned for the Figure 2 animation.
Figure 4a is a block diagram of computer graphics apparatus of the present invention.
Figure 4b is a flow diagram of image preprocessing and processing subsystems of the apparatus of Figure 4a.
Figure 5 is a schematic illustration of a neural network employed by the computer graphics apparatus of the present invention in Figures 4a and 4b.

Detailed Description of the Preferred Embodiment

The present invention provides a memory-based apparatus and method of graphic animation. Conceptually, as illustrated in Figures 1a and 1b, the present invention employs a limited number of initial examples or sample views 13 of an object of interest. Each sample view 13 is

an image of the subject taken at a different point along a working axis (or a different set of points along a plurality of working axes). For each sample view 13, a parameter value (or set of parameter values) represents the point (points) along the working axis (axes) from which the sample view 13 is defined, as discussed in detail later. In the illustration of Figure 1a, each sample view 13 illustrates the object of interest at a different time t along a single axis of time and hence has an indicative parameter value (i.e., the corresponding value of t).
In addition, for each of the sample views 13 of the object, a set of two dimensional control points 15 in the plane of the view, such as characteristic features, body junctions, etc., is identified and defined. This includes establishing location values for each control point 15 in each of the sample views 13.
In turn, an input-output mapping between parameter values of the given sample views 13 of the object and the location values (for the control points 15) is established by the present invention. From this mapping, the present invention is able to generate desired intermediate views 17 (Figure 1b) between two of the initial sample views 13 and subsequently between newly generated intermediate views 17 and/or initial sample views 13. That is, the present invention is able to generate location values of the control points 15 for desired parameter values of intermediate positions of the subject along the working axes to form intermediate views 17. Such generation of intermediate views 17 or "in betweening" is accomplished by interpolation of values of the control points 15 from control point values associated with the parameter values initially defined for the sample views 13 to control point values for desired parameter values of intermediate views 17.
To give relative depth to the different control points 15, z buffering techniques, line filling or texture mapping are employed. In particular, the control points 15 in their determined (calculated) locations/location values define polygons 19 (Figure 1c) which correspond from view to view and are able to be line filled or grey scale filled by common methods. As a result, a series of views (or images) of an object and (with proper rendering or display) animation thereof is obtained without an explicit 3-D physically based model of the object.
As mentioned, the parameter values are points along working axes, and the views are determined as being taken from a different set of points along working axes. The working axes may be any geometrical or graphical aspect for describing an object or subject. One working axis may be, for example, a longitudinal axis about which the object may be rotated. The different points along the longitudinal axis are designated in terms of angle of rotation θ. For example, a given image of a head may be viewed (a) face on at θ = 0, (b) turned slightly at θ = 45, and (c) in profile at θ = 90.
Another working axis may be, for example, an orthogonal (e.g., horizontal) axis about which the object may be tilted. The different points along this axis are designated in terms of angle of tilt φ. For example, the image of the head may have views at (i) φ = 0 where the head is not tilted forward or backward, (ii) φ = 45 where the head is tilted forward as in a nodding head, and (iii) φ = -45 where the head is tilted backward.
An example of a working axis defining a graphical aspect of an object is an axis of expression. Say the example head image in the foregoing discussion has a facial expression which ranges from a full smile to a straight face to a frown. The working axis in this instance is a facial expression axis formed of three points, one point with parameter value z = 1 for indicating a full smile, one with parameter value z = 2 for indicating a straight face, and one with parameter value z = 3 for indicating a frown.
Another working axis may be time as seen in the illustration of Figures 1a and 1b. The points of this axis mark different instances in time, and hence the parameter values of these points indicate positions of the object at the respective instance of time.
The views of an object are then taken along a set of working axes (i.e., a single working axis or a combination of working axes throughout the views) as follows. For a moving object taken along a single axis of time, each view captures the moving object in a different position at an instance in time where t = n, a point along the time axis. Further, n is the parameter value corresponding to the view.
For a three dimensional object rotating about a longitudinal axis and tilting about an orthogonal (horizontal) axis, each view captures the object in a different position defined by θ and φ (angle of rotation and angle of tilt from above). That is, each view has a different pair (θ, φ) of parameter values indicating respective points along the longitudinal working axis and horizontal working axis. Figure 1d is illustrative, where three views of object 12 rotating and tilting about longitudinal and horizontal working axes, respectively, are provided. The first view 14 shows the moving object 12 taken at points θ = 90 and φ = 45 along the working longitudinal and horizontal axes, respectively. That is, object 12 is rotated 90 about a working longitudinal axis and tilted 45 about the working horizontal axis. The second view 16 shows the moving object 12 taken at (0, 0), i.e., no tilt and not rotated. The third view 18 shows the moving object 12 taken at points (0, -45) where the object is tilted backwards.
For the image of a head turning, tilting and changing facial expressions, each view captures the head in a different position defined by θ, φ and z as described above. That is, a view of the head face on, not tilted and with a straight expression is defined by triplet points or a treble parameter value of (0, 0, 2). A view of the head in profile, tilted forward 45 and with a frown is defined by treble parameter values (90, 45, 3) and so on.
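As a toy illustration of this parameterization (the dictionary and its string entries are hypothetical, not from the patent; a real system would store control-point sets), sample views can be keyed by (θ, φ, z) triples:

```python
# Hypothetical registry of sample views keyed by (rotation, tilt, expression)
# parameter triples, as in the head example above.
sample_views = {
    (0, 0, 2): "face on, not tilted, straight expression",
    (90, 45, 3): "profile, tilted forward 45, frown",
}

def view_for(theta, phi, z):
    """Look up a stored sample view for an exact parameter triple.

    Triples with no stored view would instead be produced by the
    interpolation/generalization described in the text.
    """
    return sample_views.get((theta, phi, z))
```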
The present invention method and apparatus is said to be memory based because a set of 2-D views is used for texture mapping instead of an explicit 3-D model which is rendered each time in prior art. It is emphasized that the set of control points 15 does not correspond to a 3-D wire-framed, solid model, but rather each control point is a 2-D point in the image (view) plane. Further, the initial sample views 13 are more than two dimensional (a single perspective view) due to the definition of the control points 15 in conjunction with parameter values, but are less than a three dimensional model in which every perspective view is computable. To that end, the present invention provides a 2 1/2-D memory based approach to 3-D graphics and animation. Also, it is noted that the invention provides a technique to perform "in-betweening" in a multidimensional input space (as in Figure 1d), and not only in a one dimensional input space as in Figure 1b.
In practice, the present invention employs two mappings, a first mapping from a set of example points (i.e., control points 15 of sample views 13) of an object to a set of desired intermediate views 17, and a second mapping from the generated intermediate views 17 to an image sequence of the object. The correspondence between control points configuration (as associated with parameter values) and desired intermediate views 17 is established from initial sample views as mentioned above. Factored into the intermediate views 17 is the space of all perspective projections of the object such that from the sample views 13 any desired intermediate views 17 of the object can be generated (i.e., calculated by interpolation). The second mapping involves texture mapping to give relative depth to the different control points. Further, where the object belongs to a class of objects, the first and second mappings are shareable amongst the other members (objects) of the class. In order to share these maps, a mapping from the control points of the instances of a desired object to those of the class prototype and its inverse is utilized.
Figures 2 and 3 are illustrative of the foregoing practices of the present invention. The sequence of views/images of these figures is read right to left and top row down. Referring to Figure 2, five sample views 21a, b, c, d, e are given and depict a subject ("John") in a certain movement (walking). For each view, location of each forearm, thigh, shin and foot for example is determined according to a coordinate system in the plane of the view. The coordinates for each forearm, thigh, shin and foot form a set of control point values denoted {Cj} ⊂ R². For each different movement α (e.g., jumping, walking, running, etc.), a first map Mα associates control point values Cj with specific respective parameter values (i.e., points along the working axes).
From each set {Cj} of control point values, the absolute coordinates of the control points are transformed to barycentric coordinates by common methods and means.
The resulting control points in barycentric coordinates form a new set {Cj'} of control point values. It is this new set of control point values which is used in the first mapping from sample views 21 to desired intermediate views. Barycentric coordinates are used because this mapping is intrinsic to the subject, while movement of the subject is relative to the environment.
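The "common methods" for the barycentric conversion are not spelled out in the text; one standard formulation for a triangle element, shown here as an assumed sketch rather than the patent's own procedure, is:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2-D point p w.r.t. triangle (a, b, c).

    Returns (u, v, w) with u + v + w = 1 and p = u*a + v*b + w*c,
    so the coordinates are intrinsic to the triangle, not the frame.
    """
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return u, v, 1.0 - u - v
```

A vertex of the triangle maps to (1, 0, 0)-style coordinates, and the centroid maps to (1/3, 1/3, 1/3), independent of where the triangle sits in the image plane.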
In particular, the subject S is composed of a given number of polygonal elements Uj, each element Uj being defined by a subset of the set of control points {Cj'}. For example, a triangle element Uj is formed by control points 15a, b, and g in the subject of Figure 1a, and other elements Uj are rectangles, ellipses and the like in the subject S of Figure 2. Subject S is then mathematically denoted S = {Uj}.

Animation of the subject S, using a particular movement map Mα along a working axis of time, for example, amounts to introducing a temporal dependence denoted S(t) = {Uj(t)}.

Each element Uj(t) is computed using the map Mα for the given movement. That is, each single control point value Cj of element Uj is mapped by the function Mα resulting in the transformed Uj, and the same is done for each Uj (and its control point values Cj) of subject S. An intermediate view results. Further intermediate views are similarly generated using the foregoing transformation of each element Uj of subject S according to the function of the desired map Mα. It is from these intermediate views that an image sequence for a graphic animation of the subject is generated from sample views 21.
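A minimal sketch of this point-by-point in-betweening follows; for illustration only, a straight linear map between two sample configurations stands in for the learned movement map Mα:

```python
def inbetween(view_a, view_b, s):
    """Interpolate every control point between two sample views.

    view_a, view_b: lists of (x, y) control points in corresponding order;
    s in [0, 1] is the position along the working axis between the views.
    Linear interpolation is a stand-in here for the learned map M_alpha.
    """
    return [((1 - s) * xa + s * xb, (1 - s) * ya + s * yb)
            for (xa, ya), (xb, yb) in zip(view_a, view_b)]
```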
Having generated intermediate views from sample views 21, a texture mapping is next employed to create images from the intermediate views, and in particular an image sequence for the graphic animation of the subject "John" of sample views 21. Texture mapping is accomplished by standard techniques known in the art. In particular, texture mapping maps grey values for each of the polygons defined between control points of a sample view to the corresponding polygons in the generated intermediate views. In a preferred embodiment, apparatus of the present invention implements the function

f(x) = Σα cα G(||x - tα||²) + p(x)    Equation 1

where G(x) is a radial Gaussian function (such as the radial Green's function defined in "Networks for Approximation and Learning" by T. Poggio and F. Girosi, IEEE Proceedings, Vol. 78, No. 9, September 1990); thus, G(x) is a Gaussian distribution of the square of the distance between desired x and predetermined tα;

x is the position or points along working axes (parameter values) for which control point location values are desired;

cα are coefficients (weights) that are "learned" from the known/given control point values of sample views 21. There are in general many fewer of these coefficients than the number N of sample views (n ≤ N);

tα is a so called "center" of the Gaussian distribution and is on a distinct set of control points with known parameter values from given sample views 21; and

p(x) is a polynomial that depends on chosen smoothness assumptions. In many cases it is convenient to include up to the constant and linear terms.

Further, function G may be the multiquadric G(r) = √(c² + r²) or the "thin plate spline" G(r) = r² ln r, or other specific functions, radial or not. The norm is a weighted norm

||x - tα||²_W = (x - tα)ᵀ Wᵀ W (x - tα)    Equation 2

where W is an unknown square matrix and the superscript T indicates the transpose. In the simple case of diagonal W, the diagonal elements wi assign a specific weight to each input coordinate, determining in fact the units of measure and the importance of each feature (the matrix W is especially important in cases in which the input features are of a different type and their relative importance is unknown).
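A single Gaussian basis term evaluated on the weighted norm of Equation 2 can be transcribed directly; this is a sketch (the particular W values are illustrative, and any scaling constants inside the exponential are omitted):

```python
import numpy as np

def weighted_gaussian(x, t, W):
    """Gaussian basis G evaluated on the weighted norm of Equation 2:
    ||x - t||_W^2 = (x - t)^T W^T W (x - t)."""
    d = np.asarray(x, dtype=float) - np.asarray(t, dtype=float)
    r2 = d @ W.T @ W @ d   # the weighted squared distance
    return np.exp(-r2)
```

With W the identity this reduces to the ordinary Gaussian of the squared Euclidean distance; a diagonal W rescales each input coordinate, matching the role of the diagonal elements wi described above.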
From the foregoing Equations, location values for control points in intermediate views at times t in the example of Figure 2 (or more generally at desired points/parameter values along working axes) are approximated (interpolated), and in turn sample view elements Uj are mapped to the transformed instance Uj(t) of that element. As a result, intermediate views for a first image sequence part 23 (Figure 2) between sample views 21a and 21b are generated for a first range of t. Intermediate views for a second image sequence part 25 between sample views 21b and 21c are generated for an immediately succeeding range of t. Intermediate views for a third image sequence part 27 between sample views 21c and 21d are generated for a further succeeding range of t. And intermediate views for a fourth image sequence part 29 between sample views 21d and 21e are generated for an ending range of t.
Effectively a smooth image sequence of John walking is generated as illustrated in Figure 2. Smooth image sequences for other movements of John are understood to be similarly generated from the foregoing discussion. In a like manner, smooth image sequences for different poses (defined by rotation about a longitudinal axis and/or tilt about an orthogonal axis, and the like) and/or different facial expressions are understood to be similarly generated.
Implementation of the foregoing, and in particular implementation of Equation 1, is accomplished in a preferred embodiment by the network 51 illustrated in Figure 5. The neural network, or more specifically the Hyper Basis Function network, 51 is formed of a layer of input nodes 53, coupled to a layer of working nodes 55 which send activation signals to a summing output node 57. The input nodes 53 transmit signals indicative of an input pattern. These signals are based on the control points Cj, corresponding parameter values defining the view of the subject in the input pattern, and desired movement α. Each input node 53 is coupled to each working node 55; that is, the layer of input nodes 53 is said to be fully connected to working nodes 55 to distribute the input pattern amongst the working node layer.
Working nodes 55 are activated according to Equation 1 (discussed above) in two different modes: initially in a learning mode, and thereafter in an operation mode for mapping desired patterns. During learning mode, the sample views of a subject of interest (e.g., the five sample views 21 of "John" in Figure 2) serve as a training set of input patterns. More accurately, the control points of the sample views and their corresponding parameter values along a working axis (or axes) for the given movement α provide an input-output mapping M_α to working nodes 55.
Internal weights and network coefficients (c_α, w_i or W, t_α) are adjusted for each input-output mapping of the sample views and consequently are made to converge at respective values. In the preferred embodiment this includes finding the optimal values of the various sets of coefficients/weights c_α, w_i and t_α that minimize an error functional on the sample views. The error functional is defined as

H[f*] = Σ_{i=1}^{N} (Δ_i)²    Equation 3

with

Δ_i = y_i − f*(x_i) = y_i − Σ_{α=1}^{n} c_α G(||x_i − t_α||²_W)

A common/standard method for minimizing the error functional is the steepest descent approach, which requires calculation of derivatives. In this method the values of c_α, t_α and W that minimize H[f*] are regarded as the coordinates of the stable fixed point of the following dynamical system:

ċ_α = −ω ∂H[f*]/∂c_α,   α = 1, ..., n    Equation 4

ṫ_α = −ω ∂H[f*]/∂t_α,   α = 1, ..., n    Equation 5

Ẇ = −ω ∂H[f*]/∂W    Equation 6

where ω is a system parameter.
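The steepest descent scheme above can be sketched in code. The following is an illustrative sketch only, not the patent's "Shaker" program: it performs gradient descent on the coefficients c_α alone, holding the centers t_α fixed at the sample inputs and the norm W at identity, with a Gaussian radial function G; the function names and toy data are assumptions.

```python
import numpy as np

# Illustrative sketch (not the patent's code): steepest descent on the
# coefficients c_alpha of a Gaussian radial basis expansion, following
# Equation 4, with centers t_alpha fixed and identity norm W.

def gaussian(r2):
    # radial function G applied to squared distances
    return np.exp(-r2)

def fit_coefficients(x, y, centers, omega=0.1, steps=10000):
    """Minimize H[f*] = sum_i (y_i - sum_a c_a G(||x_i - t_a||^2))^2 over c."""
    r2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, n)
    G = gaussian(r2)
    c = np.zeros(len(centers))
    for _ in range(steps):
        delta = y - G @ c            # residuals Delta_i
        grad = -2.0 * G.T @ delta    # dH/dc_a
        c = c - omega * grad         # c_dot = -omega * dH/dc_a (Equation 4)
    return c

# toy 1-D example: three sample views of one control-point coordinate
x = np.array([[0.0], [0.5], [1.0]])  # parameter values along a working axis
y = np.array([0.0, 1.0, 0.0])        # a control-point coordinate per sample
c = fit_coefficients(x, y, centers=x)
```

After enough iterations the expansion reproduces the sample values at the sample parameter points, which is the fixed point of the dynamical system for this reduced case.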
A simpler method that does not require calculation of derivatives is to look for random changes in the coefficient values that reduce the error. Restated, random changes in the coefficients/weights c_α, t_α and W are made and accepted if H[f*] decreases. Occasionally, changes that increase H[f*] may also be accepted.
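The derivative-free alternative just described can be sketched as follows. This is an illustrative sketch with assumed names and toy data: it perturbs only the coefficients c_α with centers held fixed, and accepts only improving changes (the occasional acceptance of error-increasing changes mentioned above is omitted).

```python
import math
import random

# Sketch of the derivative-free method described above: propose small
# random changes to the coefficients and keep each change only if the
# error H[f*] decreases. Names and toy data are assumptions.

def H(c, samples, centers):
    # sum of squared residuals for a 1-D Gaussian basis expansion
    err = 0.0
    for x, y in samples:
        f = sum(ca * math.exp(-(x - ta) ** 2) for ca, ta in zip(c, centers))
        err += (y - f) ** 2
    return err

def random_search(samples, centers, trials=5000, step=0.1, seed=0):
    rng = random.Random(seed)
    c = [0.0] * len(centers)
    best = H(c, samples, centers)
    for _ in range(trials):
        cand = list(c)
        cand[rng.randrange(len(c))] += rng.uniform(-step, step)
        e = H(cand, samples, centers)
        if e < best:                 # accept only improving changes
            c, best = cand, e
    return c, best

samples = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.0)]   # (parameter, coordinate)
c, err = random_search(samples, centers=[0.0, 2.0, 4.0])
```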
Upon the internal weights and network coefficients taking on (i.e., being assigned) these values, the neural network 51 is said to have learned the mapping for movement α (denoted M_α above). Movement α may be an element of the set consisting of walking, running, jumping, etc., with parameter values along a single working axis of time. Movement α may be an element in a set of poses (orientation by angles of rotation and tilt) with parameter values in pairs along two working axes. Movement α may be an element in a set of poses and expressions with parameter values in triplets along three working axes (a longitudinal, a horizontal and an expression axis). And so on, commensurate with the previous discussion of working axes.
The same learning procedure is employed for each given movement α with sample views for the same. Thus, at the end of learning mode, the neural network 51 is trained to map a set of parameter values along pertinent working axes into 2-dimensional views of the subject of interest.
After completion of the learning mode, and hence establishment of the internal weights and network coefficients W, c_α, t_α, the working nodes 55 are activated in operation mode. Specifically, after learning, the centers t_α of the basis functions of Equation 1 above are similar to prototypes, since they are points in the multidimensional input space. In response to the input signals (parameter-value coordinates of a desired position or view of a subject along the working axes) from input nodes 53, each working node 55 computes a weighted distance of the inputs from its center t_α, that is, a measure of their similarity, and applies to it the radial function G (Equation 1). In the case of the Gaussian G, a working node has maximum activation when the input exactly matches its center t_α. Thus, the working nodes 55 become activated according to the learned mappings M_α.
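Operation mode as just described (weighted distance from each center, Gaussian activation, linear superposition at the output node) can be sketched as follows; the polynomial term p(x) of Equation 1 is omitted for brevity, and all names and values are illustrative assumptions.

```python
import numpy as np

# Sketch of operation mode: each working node computes a weighted
# distance of the input x from its center t_alpha, applies the Gaussian
# G, and the output node forms the weighted sum of the activations.
# The polynomial term p(x) of Equation 1 is omitted; values are assumed.

def hyperbf(x, centers, c, W):
    """f*(x) = sum_a c_a G(||x - t_a||_W^2) with Gaussian G."""
    d = x - centers                               # offsets from each center
    r2 = np.einsum('nd,de,ne->n', d, W.T @ W, d)  # weighted squared distances
    return c @ np.exp(-r2)                        # linear superposition

centers = np.array([[0.0], [1.0]])  # centers t_alpha (the "prototypes")
c = np.array([2.0, 3.0])            # coefficients from learning mode
W = np.eye(1)                       # identity norm for simplicity

# a node's Gaussian activation is maximal when the input matches its center:
f_at_center = hyperbf(np.array([1.0]), centers, c, W)
```

At the input x = 1.0, the second node contributes its full weight (activation 1) while the first contributes c_1·e^{-1}, illustrating the maximum-activation-at-center behavior.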
Working nodes 55 transmit the generated activation signals G along lines 59 to summing output node 57. Each transmission line 59 multiplies the respective carried activation signal by a weight value c_α determined during the learning mode of the network. Thus, output node 57 receives a signal c_α G from each working node 55, which together represent the linear superposition of the activations of all the basis functions in the network 51.
Output node 57 adds to the c_α G signals direct, weighted connections from the inputs (the linear terms of p(x) of Equation 1, shown by dotted lines in Figure 5) and from a constant input node 49 (a constant term). The total provides an output in accordance with Equation 1. This output is interpreted as the corresponding map for the given input (desired) parameter values. That is, the output defines the coordinates (location values) of the control points for intermediate views, and ultimately defines the image sequence for the initial sample views.
It is noted that in the limit case of the basis functions approximating delta functions, the system 51 of Figure 5 becomes equivalent to a look-up table. During learning, the weights c_α are found by minimizing a measure of the error between the network's prediction and each of the sample views. At the same time, the centers t_α of the radial functions and the weights in the norm are also updated during learning. Moving the centers t_α is equivalent to modifying the corresponding prototypes and corresponds to task-dependent clustering. Finding the optimal weights W for the norm is equivalent to transforming appropriately (for instance, scaling) the input coordinates; this corresponds to task-dependent dimensionality reduction.

Software/Hardware Support

The pattern mapping system 51 of Figure 5 is generally embodied in a computer system 61 illustrated in Figures 4a and 4b. Referring to Figure 4a, a digital processor 63 of the computer system 61 receives input 69 from internal memory, I/O devices (e.g., a keyboard, mouse and the like) and/or memory storage devices (e.g., importable memory files, disk storage, and the like). In the case where the input is sample views or input patterns, the digital processor 63 employs an image preprocessing member 65 for determining a set of control points C_i and corresponding values for each control point throughout the different input patterns 69. The preprocessing member 65 also determines, for each input pattern, the corresponding parameter values for a desired movement α of a subject in the input pattern 69. The image preprocessing member 65 is implemented in hardware, software or a combination thereof, as made clear later. One implementation is neural network 51 in its learning mode as illustrated in Figure 5. A more general software implementation is outlined in the flow diagram of Figure 4b.
Referring to the left side portion of Figure 4b, when input 69 is sample views or input patterns 69, image preprocessor 65 implements a learning mode. Given (from the user or other sources) at the start of the learning mode is a definition of the working axes. At 81 in Figure 4b, image preprocessor 65 establishes parameter values (points) along the working axes and assigns a different parameter value (single, pair or triplet, etc.) to each input pattern (sample view) 69. Next, at 83, image preprocessor 65 extracts a set of control points C_i for application to each input pattern 69, and for each input pattern determines control point values.
At 71, image preprocessor 65 establishes an association or mapping M_α between values of the control points and parameter values of the working axes. In effect, this is accomplished by image preprocessor 65 calculating Equations 4, 5 and 6 (discussed above) to determine the coefficients for Equation 1 (the supporting equation of network 51 in Figure 5). From 71, coefficients c_α, t_α and W result, and in turn define the Hyper Basis Function network 51 operation mode, which implements Equation 1 as discussed in Figure 5. Another implementation of the function (Equation 1) supporting operation mode is image processor 67, outlined in flow diagram fashion in the right side portion of Figure 4b and discussed next. It is understood that other implementations are suitable.
When input 69 is an indication of user-desired views at input parameter values along the working axes, the input 69 is transferred to operation mode module 77. Under operation mode module 77, for each desired position (input parameter values) of the subject in the desired movement α (along the working axes), image processor 67 applies mapping M_α to the input parameter values and determines the corresponding control point values. This is accomplished by interpolation (using Equation 1) as indicated at 73 in Figure 4b. Image processor 67 uses the resulting values of control points C_i to define polygonal elements U_i, and in turn forms subject S in the desired position along the working axes. At 74, image processor 67 applies line filling or texture mapping to form an intermediate view of the resulting subject S.
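The interpolation at 73 can be sketched as follows, using a Gaussian radial basis expansion over a single working axis as a stand-in for the learned network; the function name, kernel width sigma and toy control-point data are assumptions.

```python
import numpy as np

# Illustrative sketch of the interpolation step: given control-point
# coordinates at the sample parameter values, interpolate coordinates at
# an in-between parameter value with a Gaussian radial basis expansion
# over one working axis. Names and data are assumptions.

def interpolate_control_points(t_samples, cp_samples, t_query, sigma=1.0):
    """cp_samples: (num_samples, num_points, 2) control-point coordinates."""
    G = np.exp(-((t_samples[:, None] - t_samples[None, :]) / sigma) ** 2)
    coeffs = np.linalg.solve(G, cp_samples.reshape(len(t_samples), -1))
    g = np.exp(-((t_query - t_samples) / sigma) ** 2)
    return (g @ coeffs).reshape(cp_samples.shape[1:])

# two control points tracked across three sample views along a time axis
t_samples = np.array([0.0, 1.0, 2.0])
cp_samples = np.array([
    [[0.0, 0.0], [1.0, 0.0]],
    [[0.5, 0.2], [1.5, 0.2]],
    [[1.0, 0.0], [2.0, 0.0]],
])
cp_mid = interpolate_control_points(t_samples, cp_samples, t_query=0.5)
```

Because this is an interpolation, querying at a sample parameter value returns that sample's control points exactly; in-between queries yield the intermediate control points from which a polygonal view can be formed.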

Image processor 67 next places the formed view in sequence 75 with other views (i.e., desired positions of the subject) generated for the desired movement α of the subject S.
The computer system 61 of Figures 4a and 4b can also generate isolated views, if so desired, corresponding to input parameter values (position of the subject along working axes) that do not correspond to any of the example views used in training. The computer system 61 can also generate views of another subject in the same class of the subject for which a sequence of views has been generated for a desired movement, as discussed next.
Where the input 69 is a view of a subject belonging to a class of subjects for which an image sequence has already been formed, a map from the control points of the subject of the current input 69 to those of the prototype of the class is performed at 73, as follows. Referring back to Figure 2, in the case where subject John is a member of a class of objects, the generated image sequence of John may be utilized to similarly animate another object of that class. In order to animate another object of the same class, the present invention transforms the established desired movement of the class prototype. For example, assume that it is desired to generate images of a different person, say Jane, walking, and the Figure 2 generated views of John walking are a prototype of the class common to John and Jane. Of course, the Figure 2 procedure may be repeated for initial sample views 35a, b, c, d, e of Jane in Figure 3, but shortcuts are herein provided. That is, the present invention exploits the image sequence generated for prototype John to animate other objects of the same class (namely Jane) with minimal additional information.
One of the simplest ways of mapping Jane onto the available class prototype (John) is that of parallel deformation. The first step consists in transforming the control points of the reference frame of the prototype and of the new subject to their barycentric coordinates. This operation allows separation of the motion of the barycenter, which is considered to be intrinsic to the learned movement, from the motion around it, which depends on the particular instance mapped. The set of the control points is then embedded in a 2n-dimensional space.
A parallel deformation is defined by:

S_B(t) = R_B(t) + [S_B(0) − R_B(0)]    Equation 7

where

the subscript B means that the control points are considered in their barycentric coordinates;

R is the prototype (the reference characteristic view sequence, John in this example); and

S is the new subject.

The subjects are considered embedded in a 2n-dimensional space (n being the number of control points). We can then obtain the image of Jane at time t by first obtaining the set of control points at time t, transforming the image of John at time t by the displacement [S_B(0) − R_B(0)]. This generates an image of Jane in the position (i.e., pose) for each view of prototype John in Figure 2. Thus, there exists a one-to-one correspondence or mapping between the views of Jane in Figure 3 and the views of prototype John in Figure 2. The image sequence, and hence animation, of Jane walking illustrated in Figure 3 results from just one sample view (e.g., first view 37) of Jane. Image sequences of other movements of Jane are similarly transposed/mapped from the image sequences of the same movements defined for class prototype John.
To that end, transposition by parallel deformation maps control point values at parameter values of a prototype subject to those of a new subject for a common pose. Thus, although a given first view 37 (Figure 3) of a new subject is not necessarily a view of the new subject in the same pose as that of the class prototype in the sample views 21 used to establish the image sequence of the class prototype, the present invention generates an image sequence of the new subject in a desired movement α from the image sequence of the class prototype in the same movement. The one given first view 37 (Figure 3) of the new subject only affects this mapping by initially establishing control points of the new subject.
The reason this type of mapping is called parallel deformation is the following. If we look at the 2n-dimensional vectors, we see that views of Jane are obtained by adding to the corresponding frame (view) of the prototype at time t the difference between the given first view of Jane at t=0 and the prototype at t=0. This provides that the deformations (i.e., the difference between the subject at time t and its characteristic view) of the prototype and of Jane are parallel by construction.
Accordingly, a series of working or intermediate views of the subject in the current input 69 are generated from the sample and intermediate views of the image sequence for the class prototype.
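The parallel deformation procedure above can be sketched as follows; the toy control points for "John" and "Jane" are assumptions, and the barycentric step simply subtracts the centroid, separating barycenter motion from the motion around it.

```python
import numpy as np

# Hedged sketch of parallel deformation: the new subject's control
# points at time t equal the prototype's control points at time t plus
# the fixed offset between the two subjects at t = 0, all expressed in
# barycentric coordinates. Toy data are assumptions.

def to_barycentric(points):
    # separate barycenter motion from motion around the barycenter
    return points - points.mean(axis=0)

def parallel_deformation(prototype_t, prototype_0, subject_0):
    R_t = to_barycentric(prototype_t)
    R_0 = to_barycentric(prototype_0)
    S_0 = to_barycentric(subject_0)
    return R_t + (S_0 - R_0)   # S_B(t) = R_B(t) + [S_B(0) - R_B(0)]

# prototype "John": three control points; a non-rigid change at time t
john_0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
john_t = john_0.copy()
john_t[2] = [0.5, 1.5]            # the top control point moves up

jane_0 = john_0 * 1.2             # "Jane": same shape, slightly larger
jane_t = parallel_deformation(john_t, john_0, jane_0)
```

By construction, the deformation applied to Jane (her displacement from her own t = 0 view) is identical to the prototype's deformation, which is exactly the "parallel" property described above.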
At any rate, the output in Figure 4b of image processor 67 is an image sequence of subject S in a desired movement α. Referring back to Figure 4a, digital processor 63 displays this image sequence through a display unit 78 to provide graphical animation of the subject S. Alternatively, digital processor 63 stores the image sequence in a memory storage device/area for later display. Other devices to which the image sequence is output include, without limitation, printers, facsimile machines, communication lines to remote computer workstations, and the like, illustrated at 81 in Figure 4a.
One software embodiment of the image preprocessing member 65 and image processor 67 is the C program called "Shaker" found in the software library of the Istituto per la Ricerca Scientifica e Tecnologica, Trento, Italy, and the Massachusetts Institute of Technology Artificial Intelligence Laboratory, Cambridge, Massachusetts, U.S.A.

As can be seen from the foregoing description of computer system 61 employing the computer graphics animation apparatus and method of the present invention, the present invention has applications in animation of cartoon characters, video conferencing and other image processing applications. As for animation of cartoon characters, any desired view of a cartoon character may be generated from a "training set" consisting of a large set of available views of the cartoon character.
The video conferencing application, or more generally teleconferencing, is implemented with two or more computer systems or workstations 61 transmitting parameter values and control point values across communication lines 81 coupling the systems, and generating image sequences therefrom. Respective display units 78 at each workstation provide display of the generated image sequences.
And transmission of text files or audio signals provides communications between users of sending and receiving systems 61. The ability of the present invention to animate an image using parameter values along working axes and a limited number of control points drastically reduces the amount of information that needs to be transmitted, and thus enables such teleconferencing.
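The bandwidth point can be made concrete with a back-of-envelope comparison; all the numbers below are assumed for illustration and are not taken from the patent.

```python
# Back-of-envelope comparison (all numbers assumed, not from the patent):
# bytes per transmitted view when sending only parameter values and
# control-point coordinates, versus sending a full raster frame.

frame_bytes = 640 * 480                    # one 8-bit grayscale frame
num_control_points = 50                    # assumed control-point count
coord_bytes = num_control_points * 2 * 4   # two float32 coordinates each
param_bytes = 3 * 4                        # e.g. three float32 parameter values

per_view_bytes = coord_bytes + param_bytes
ratio = frame_bytes / per_view_bytes       # savings factor per view
```

Under these assumptions each transmitted view is a few hundred bytes instead of hundreds of kilobytes, a reduction of several hundredfold per view.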
Other applications are understood to be within the purview of those skilled in the art.
Equivalents

While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
For example, other multivariate approximation and interpolation techniques may be used in place of the Hyper Basis Functions employed in the disclosed preferred embodiment. Such techniques are either a special case of Hyper Basis Functions (such as generalized splines, tensor product splines, and tensor product linear interpolation), or similar to Hyper Basis Functions (such as Multi-Layer Perceptrons and kernel techniques).
Claims (12)

1. Apparatus for generating computer graphics animation of
    a subject having a display (78) for displaying an animated image sequence of the subject in a certain movement, the improvement comprising:
    a source (69) for providing sample views (13) of a subject, each sample view (13) providing the subject in a different sample position along at least one working axis, each working axis being formed of a plurality of parameter values, each parameter value defining a different position of the subject along the working axis;
a preprocessing member (65) coupled to receive from the source (69) the sample views (13), the preprocessing member (65) determining (i) a set of control points (15) of the subject in each sample view (13), and (ii) plane coordinates of the control points (15) in each sample view (13), and for each sample view (13), the preprocessing member (65) associating the coordinates of the control points (15) with the parameter values of the at least one working axis indicative of the sample position of the subject in that sample view (13); and an image processor (67) coupled to the preprocessing member (65) and responsive to the associations between the coordinates of the control points (15) and the parameter values of the sample positions of the subject, the image processor (67) mapping the coordinates of the control points (15) for sample positions of the subject to control point (15) coordinates for desired intermediate positions along the at least one working axis to form intermediate views (17) of the subject, the image processor forming an image sequence from both the sample views (13) and formed intermediate views (17), the image sequence defining a prototype for animation of any object in a class containing the subject.
2. Apparatus as claimed in Claim 1 wherein the image processor (67) forms an image sequence from the sample views (13) and intermediate views (17) arranged in order according to sequence of the sample and intermediate positions for animating the subject in the certain movement.
3. Apparatus as claimed in Claim 1 wherein:
    the source (69) subsequently provides at least one example view (37) of an object of the class; and the image processor (67) maps the coordinates of the control points (15) of the views (13,17) forming the image sequence to control points of the example view (37) to determine, for each parameter value of the at least one working axis, coordinates of the control point values for intermediate views of the object, to generate a respective image sequence for animating the objects in the certain movement.
4. Apparatus as claimed in Claim 1 wherein the display (78) includes a display unit networked to the image processor (67) for remote display of the image sequence.
5. Apparatus as claimed in Claim 1 wherein at least one working axis defines position of the subject as one of rotation about a longitudinal axis, tilt about an orthogonal axis, instance in time along a time axis, and facial expression along a respective axis.
6. Apparatus as claimed in Claim 1 wherein the at least one working axis is a plurality of working axes.
7. In a computer system, a method of generating computer graphic animation of a subject and displaying an animated image sequence of the subject in a certain movement through a display (78) of the computer system, the improvement comprising the steps of:
    providing sample views (13) of a subject, each sample view (13) providing the subject in a different sample position along at least one working axis, each working axis being formed of a plurality of parameter values, each parameter value defining a different position of the subject along the working axis, and a sequence of the sample positions together with intermediate positions animating the subject in a certain movement;
    determining a set of control points (15) of the subject in each sample view (13);
    for each sample view (13) (i) determining plane coordinate values of the control points (15), and (ii) establishing an association between the coordinates of the control points (15) and parameter values of the at least one working axis indicative of the sample position of the subject in that sample view (13);

    mapping the coordinates of the control points (15) for sample positions of the subject to the coordinates of the control points (15) for desired intermediate positions along the at least one working axis to form intermediate views (17) of the subject; and forming an image sequence from both the sample views (13) and formed intermediate views (17), the image sequence defining a prototype for animation of any object in a class containing the subject.
8. A method as claimed in Claim 7 wherein the step of mapping control point (15) coordinates for sample positions of the subject to control point (15) coordinates for desired intermediate positions includes interpolating values of the control points (15) between parameter values of the sample positions and desired parameter values of the intermediate positions.
9. A method as claimed in Claim 7 wherein the step of forming an image sequence includes arranging the sample views (13) and formed intermediate views (17) in order according to sequence of the sample and intermediate positions for animating the subject in the certain movement.
10. A method as claimed in Claim 7 further comprising the steps of:
    providing at least one example view (37) of an object of the class; and determining and mapping coordinates of the control points (15) of the views (13,17) forming the image sequence to control points of the example view (37) to determine, for each parameter value of the at least one working axis, coordinates of the control points for intermediate views of the object, to generate a respective image sequence for animating the object in the certain movement.
11. A method as claimed in Claim 7 wherein the step of providing sample views (13) includes establishing at least one working axis as one of a longitudinal axis about which the subject may be rotated in a view, an orthogonal axis about which the subject may be tilted in a view, a time axis, or an axis for indicating range of facial expressions of the subject.
12. A method as claimed in Claim 7 wherein the at least one working axis is a plurality of working axes.
CA002126570A 1992-01-13 1993-01-08 Memory-based method and apparatus for computer graphics Abandoned CA2126570A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US819,767 1992-01-13
US07/819,767 US5416899A (en) 1992-01-13 1992-01-13 Memory based method and apparatus for computer graphics

Publications (1)

Publication Number Publication Date
CA2126570A1 true CA2126570A1 (en) 1993-07-22

Family

ID=25229004

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002126570A Abandoned CA2126570A1 (en) 1992-01-13 1993-01-08 Memory-based method and apparatus for computer graphics

Country Status (12)

Country Link
US (2) US5416899A (en)
EP (2) EP0621969B1 (en)
JP (1) JPH07509081A (en)
KR (1) KR950700576A (en)
AT (1) ATE140091T1 (en)
AU (1) AU660532B2 (en)
CA (1) CA2126570A1 (en)
DE (1) DE69303468T2 (en)
DK (1) DK0621969T3 (en)
ES (1) ES2091593T3 (en)
GR (1) GR3020963T3 (en)
WO (1) WO1993014467A1 (en)

US6990452B1 (en) 2000-11-03 2006-01-24 At&T Corp. Method for sending multi-media messages using emoticons
US6976082B1 (en) 2000-11-03 2005-12-13 At&T Corp. System and method for receiving multi-media messages
US7203648B1 (en) 2000-11-03 2007-04-10 At&T Corp. Method for sending multi-media messages with customized audio
US6910186B2 (en) * 2000-12-08 2005-06-21 Kyunam Kim Graphic chatting with organizational avatars
US20040135788A1 (en) * 2000-12-22 2004-07-15 Davidson Colin Bruce Image processing system
US20020180788A1 (en) * 2001-06-01 2002-12-05 Wu Hsiang Min Method of video displaying for E-mails
US7433458B2 (en) * 2001-06-29 2008-10-07 At&T Intellectual Property I, L.P. System and method for viewing contents via a computer network during a telephone call
FR2828572A1 (en) * 2001-08-13 2003-02-14 Olivier Cordoleani Method for creating a virtual three-dimensional person representing a real person in which a database of geometries, textures, expression, etc. is created with a motor then used to manage movement and expressions of the 3-D person
US7671861B1 (en) * 2001-11-02 2010-03-02 At&T Intellectual Property Ii, L.P. Apparatus and method of customizing animated entities for use in a multi-media communication application
US7003139B2 (en) * 2002-02-19 2006-02-21 Eastman Kodak Company Method for using facial expression to determine affective information in an imaging system
US6873692B1 (en) 2002-03-29 2005-03-29 Bellsouth Intellectual Property Corporation Telephone synchronization with software applications and documents
US20030225848A1 (en) * 2002-05-31 2003-12-04 Brian Heikes Remote instant messaging personalization items
US7689649B2 (en) * 2002-05-31 2010-03-30 Aol Inc. Rendering destination instant messaging personalization items before communicating with destination
US7779076B2 (en) * 2002-05-31 2010-08-17 Aol Inc. Instant messaging personalization
US20030225847A1 (en) * 2002-05-31 2003-12-04 Brian Heikes Sending instant messaging personalization items
US7685237B1 (en) 2002-05-31 2010-03-23 Aol Inc. Multiple personalities in chat communications
US7636755B2 (en) 2002-11-21 2009-12-22 Aol Llc Multiple avatar personalities
WO2004049113A2 (en) * 2002-11-21 2004-06-10 America Online, Inc. Multiple personalities
US8037150B2 (en) 2002-11-21 2011-10-11 Aol Inc. System and methods for providing multiple personas in a communications environment
US7908554B1 (en) 2003-03-03 2011-03-15 Aol Inc. Modifying avatar behavior based on user action or mood
US20040179039A1 (en) 2003-03-03 2004-09-16 Blattner Patrick D. Using avatars to communicate
US7913176B1 (en) 2003-03-03 2011-03-22 Aol Inc. Applying access controls to communications with avatars
US7173623B2 (en) * 2003-05-09 2007-02-06 Microsoft Corporation System supporting animation of graphical display elements through animation object instances
US7034836B2 (en) * 2003-05-14 2006-04-25 Pixar Adaptive caching of animation controls
US7944449B2 (en) * 2003-05-14 2011-05-17 Pixar Methods and apparatus for export of animation data to non-native articulation schemes
US7862428B2 (en) * 2003-07-02 2011-01-04 Ganz Interactive action figures for gaming systems
US7023454B1 (en) 2003-07-07 2006-04-04 Knight Andrew F Method and apparatus for creating a virtual video of an object
KR100512742B1 (en) * 2003-07-25 2005-09-07 삼성전자주식회사 Portable computer
US7870504B1 (en) * 2003-10-01 2011-01-11 TestPlant Inc. Method for monitoring a graphical user interface on a second computer display from a first computer
JP4509119B2 (en) 2003-11-13 2010-07-21 本田技研工業株式会社 Adaptive stochastic image tracking with sequential subspace updates
US7534157B2 (en) 2003-12-31 2009-05-19 Ganz System and method for toy adoption and marketing
WO2005064502A1 (en) * 2003-12-31 2005-07-14 Ganz, An Ontario Partnership Consisting Of S.H. Ganz Holdings Inc. And 816877 Ontario Limited System and method for toy adoption and marketing
US7587452B2 (en) 2004-04-23 2009-09-08 At&T Intellectual Property I, L. P. Methods, systems, and products for network conferencing
US7694228B2 (en) 2004-05-26 2010-04-06 At&T Intellectual Property I, L.P. Methods, systems, and products for network conferencing
US7587037B2 (en) 2004-05-26 2009-09-08 At&T Intellectual Property I, L.P. Network conferencing using method for distributed computing and/or distributed objects for presentation to a mobile communications device
US7403969B2 (en) * 2004-05-26 2008-07-22 At&T Delaware Intellectual Property, Inc. Network conferencing using method for distributed computing and/or distributed objects to intermediate host for presentation to a communications device
US9652809B1 (en) 2004-12-21 2017-05-16 Aol Inc. Using user profile information to determine an avatar and/or avatar characteristics
US20060164440A1 (en) * 2005-01-25 2006-07-27 Steve Sullivan Method of directly manipulating geometric shapes
JP4516536B2 (en) * 2005-03-09 2010-08-04 富士フイルム株式会社 Movie generation apparatus, movie generation method, and program
US8606950B2 (en) * 2005-06-08 2013-12-10 Logitech Europe S.A. System and method for transparently processing multimedia data
US7623731B2 (en) * 2005-06-20 2009-11-24 Honda Motor Co., Ltd. Direct method for modeling non-rigid motion with thin plate spline transformation
NZ564006A (en) 2006-12-06 2009-03-31 2121200 Ontario Inc System and method for product marketing using feature codes
KR20120088525A (en) * 2008-08-05 2012-08-08 가부시키가이샤 에네사이바 Facility monitoring/controlling system and facility monitoring/controlling method
US8788943B2 (en) * 2009-05-15 2014-07-22 Ganz Unlocking emoticons using feature codes
US20120075354A1 (en) * 2010-09-29 2012-03-29 Sharp Laboratories Of America, Inc. Capture time reduction for correction of display non-uniformities
EP3286718A4 (en) 2015-04-23 2018-12-05 Hasbro, Inc. Context-aware digital play
US10917611B2 (en) 2015-06-09 2021-02-09 Avaya Inc. Video adaptation in conferencing using power or view indications
CN106056654B (en) * 2016-05-30 2019-12-17 武汉开目信息技术有限责任公司 Method for outputting and playing process video in three-dimensional assembly simulation
US11389735B2 (en) 2019-10-23 2022-07-19 Ganz Virtual pet system
US11358059B2 (en) 2020-05-27 2022-06-14 Ganz Live toy system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3510210A (en) * 1967-12-15 1970-05-05 Xerox Corp Computer process character animation
US4414621A (en) * 1977-06-13 1983-11-08 Canadian Patents & Development Ltd. Interactive visual communications system
US4600919A (en) * 1982-08-03 1986-07-15 New York Institute Of Technology Three dimensional animation
US4797836A (en) * 1986-11-19 1989-01-10 The Grass Valley Group, Inc. Image orientation and animation using quaternions
SE8801043D0 (en) * 1988-03-22 1988-03-22 Orjan Strandberg GeniMator
US5029997A (en) * 1989-11-16 1991-07-09 Faroudja Philippe Y C Stop-frame animation system
US5245553A (en) * 1989-12-14 1993-09-14 Options Unlimited Research Full-duplex video communication and document generation system
US5261041A (en) * 1990-12-28 1993-11-09 Apple Computer, Inc. Computer controlled animation system based on definitional animated objects and methods of manipulating same
US5416899A (en) * 1992-01-13 1995-05-16 Massachusetts Institute Of Technology Memory based method and apparatus for computer graphics

Also Published As

Publication number Publication date
ATE140091T1 (en) 1996-07-15
AU660532B2 (en) 1995-06-29
US5416899A (en) 1995-05-16
EP0709810A3 (en) 1996-08-14
EP0621969A1 (en) 1994-11-02
WO1993014467A1 (en) 1993-07-22
DK0621969T3 (en) 1996-11-25
ES2091593T3 (en) 1996-11-01
EP0621969B1 (en) 1996-07-03
AU3438093A (en) 1993-08-03
DE69303468T2 (en) 1997-01-16
DE69303468D1 (en) 1996-08-08
KR950700576A (en) 1995-01-16
GR3020963T3 (en) 1996-12-31
US5659692A (en) 1997-08-19
EP0709810A2 (en) 1996-05-01
JPH07509081A (en) 1995-10-05

Similar Documents

Publication Publication Date Title
EP0621969B1 (en) Memory-based method and apparatus for computer graphics
US7515155B2 (en) Statistical dynamic modeling method and apparatus
US7319466B1 (en) Method and apparatus for generating and interfacing with a haptic virtual reality environment
Basdogan et al. Haptic rendering in virtual environments
US6535215B1 (en) Method for animating 3-D computer generated characters
US6731287B1 (en) Method for animating a 3-D model of a face
US7307633B2 (en) Statistical dynamic collisions method and apparatus utilizing skin collision points to create a skin collision response
US7057619B2 (en) Methods and system for general skinning via hardware accelerators
US5883638A (en) Method and apparatus for creating lifelike digital representations of computer animated objects by providing corrective enveloping
WO1999015945A2 (en) Generating three-dimensional models of objects defined by two-dimensional image data
US7872654B2 (en) Animating hair using pose controllers
JPH10208078A (en) System and method for quickly transforming graphic object
US20080024504A1 (en) Method to imitate lifelike images for computer deformed objects
US7057618B2 (en) Patch picking methods and apparatus
JPH04289976A (en) Three-dimensional shape model forming method and system
EP1565892B1 (en) Virtual model generation
FEILD Memory based method and apparatus for computer graphics
EP1050020B1 (en) Method for representing geometric shapes and geometric structures in computer graphics
Brunelli Poggio et al.
WO2004104935A1 (en) Statistical dynamic modeling method and apparatus
Basdogan et al. Principles of haptic rendering for virtual environments
Basdogan et al. Haptic Rendering in Virtual Environments
ZANNATHA et al. Realistic Computer Simulations Based on Visual and Force Feedback
Badler et al. Computer Graphics Research Laboratory Quarterly Progress Report Number 43
TA 5.2.1 Homogeneous Transformation Matrices

Legal Events

Date Code Title Description
FZDE Discontinued