WO2007079361A2 - Modeling the three-dimensional shape of an object by shading of a two-dimensional image - Google Patents

Modeling the three-dimensional shape of an object by shading of a two-dimensional image Download PDF

Info

Publication number
WO2007079361A2
WO2007079361A2 · PCT/US2006/062405
Authority
WO
WIPO (PCT)
Prior art keywords
shading
model
image
updated
computer
Prior art date
Application number
PCT/US2006/062405
Other languages
French (fr)
Other versions
WO2007079361A3 (en)
Inventor
Rolf Herken
Tom-Michael Thamm
Jennifer Courter
Original Assignee
Mental Images Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mental Images Gmbh filed Critical Mental Images Gmbh
Priority to AU2006332582A priority Critical patent/AU2006332582A1/en
Priority to EP06849019A priority patent/EP1964065A2/en
Priority to CA002633680A priority patent/CA2633680A1/en
Priority to JP2008547751A priority patent/JP2009521062A/en
Publication of WO2007079361A2 publication Critical patent/WO2007079361A2/en
Publication of WO2007079361A3 publication Critical patent/WO2007079361A3/en

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/507 - Depth or shape recovery from shading

Definitions

  • the present invention relates to the field of computer graphics, computer-aided geometric design and the like, and in particular to improved systems and techniques for modeling the three-dimensional shape of an object by shading of a two-dimensional image.
  • an artist, draftsman or the like attempts to generate a three-dimensional model of an object, as maintained by a computer, from lines defining two-dimensional views of objects.
  • computer-graphical arrangements generate a three-dimensional model from, for example, various two-dimensional line drawings comprising contours and/or cross-sections of the object and by applying a number of operations to such lines which will result in two-dimensional surfaces in three-dimensional space, and subsequent modification of parameters and control points of such surfaces to correct or otherwise modify the shape of the resulting model of the object.
  • a three-dimensional model for the object may be viewed or displayed in any of a number of orientations.
  • robot vision or machine vision (which will generally be referred to herein as "machine vision")
  • shape from shading is used to generate a three-dimensional model of an existing object from one or more two-dimensional images of the object as recorded by a camera.
  • in machine vision, the type of the object recorded on the image(s) is initially unknown to the machine, and the model of the object that is generated is generally used, for example, to facilitate identification of the type of the object depicted on the image(s) by the machine or another device.
  • the object to be modeled is illuminated by a light source, and a camera, such as a photographic or video camera, is used to record the image(s) from which the object will be modeled.
  • a camera such as a photographic or video camera
  • the orientation of a light source, the camera position and the image plane relative to the object are known.
  • the reflectance properties of the surface of the object are also known.
  • an orthographic projection technique is used to project the surface of the object onto the image plane; that is, it is assumed that an implicit camera that is recording the image on the image plane has a focal length of infinity.
  • the image plane represents the x, y coordinate axes; that is, any point on the image plane can be identified by coordinates (x, y).
  • the image of the object as projected onto the image plane is represented by an image irradiance function I(x, y) over a two-dimensional domain Ω ⊂ R², while the shape of the object is given by a height function z(x, y) over the domain Ω.
  • the image irradiance function I(x, y) represents the brightness of the object at each point (x, y) in the image. In the shape from shading methodology, given I(x, y) for all points (x, y) in the domain, the shape of the object, given by z(x, y), is determined. It would be desirable to provide improved methods and systems for generating a three-dimensional model of an object by shading as applied to a two-dimensional image of an object.
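The forward relation underlying shape from shading can be sketched as follows. This is an illustrative Python example, not part of the patent: it assumes a Lambertian surface and a unit illumination vector L, and computes the image irradiance I(x, y) = n(x, y) · L, where n is the unit surface normal derived from the height function z(x, y). All function names are illustrative assumptions.

```python
import math

def normal_from_height(z, x, y, eps=1e-4):
    """Unit normal of the surface z(x, y) via finite-difference gradients."""
    dzdx = (z(x + eps, y) - z(x - eps, y)) / (2 * eps)
    dzdy = (z(x, y + eps) - z(x, y - eps)) / (2 * eps)
    n = (-dzdx, -dzdy, 1.0)
    mag = math.sqrt(sum(c * c for c in n))
    return tuple(c / mag for c in n)

def irradiance(z, x, y, light):
    """I(x, y) = max(0, n . L) for a unit light direction L (Lambertian)."""
    n = normal_from_height(z, x, y)
    return max(0.0, sum(a * b for a, b in zip(n, light)))

# Example: a hemisphere of radius 1 lit from directly above.
hemisphere = lambda x, y: math.sqrt(max(0.0, 1.0 - x * x - y * y))
L = (0.0, 0.0, 1.0)
```

Shape from shading inverts this map: given the recorded I(x, y), recover z(x, y).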
  • the present invention provides improved methods and systems for generating a three- dimensional model of an object by shading.
  • One aspect of the invention provides improvements to the shape-by-shading (SBS) systems and methods described in commonly owned U.S. Patent No. 6,724,383.
  • Another aspect of the invention relates to particular shaping techniques, methods and algorithms that can be implemented in a shape-by-shading (SBS) modeler in accordance with the invention, and more particularly, methods and algorithms that advantageously exploit trust-region models and methods.
  • SBS: shape-by-shading
  • FIGS. 1-4 are a series of diagrams illustrating components of an exemplary digital processing environment in which aspects of the present invention can be deployed.
  • FIG. 5 depicts a computer graphics system for generating a three-dimensional model of an object by shading as applied by an operator or the like to a two-dimensional image of the object in the given state of its creation at any point in time, constructed in accordance with the invention.
  • FIGS. 6-10 are a series of diagrams that are useful in understanding the operations performed by the computer graphics system depicted in FIG. 5 in determining the updating of the model of an object by shading as applied to the two-dimensional image of the object in its given state of creation at any point in time.
  • FIGS. 11A and 11B show a flowchart depicting operations performed by the computer graphics system and operator in connection with the invention.
  • FIG. 12 shows a diagram illustrating data flow in an SBS system according to the present invention.
  • FIG. 13 shows a screenshot 350 of the SBS Modifier in 3ds max.
  • FIGS. 14 and 15 show pseudocode implementations of SBS techniques according to aspects of the invention.
  • FIGS. 16A and 16B show a table that provides a listing of mathematical notation used in the present description of the invention.
  • FIGS. 17-22 show a series of flowcharts of a generalized method and sub-methods according to various aspects of the present invention.
  • Before describing particular examples and embodiments of the invention, the following is a discussion, to be read in connection with FIGS. 1-4, of underlying digital processing structures and environments in which the invention may be implemented and practiced.
  • the present invention can be utilized in the generation and synthesis of images, such as for display in a motion picture or other dynamic display.
  • the techniques described herein can be practiced as part of a computer graphics system, in which a pixel value is generated for pixels in an image.
  • the pixel value is representative of a point in a scene as recorded on an image plane of a simulated camera.
  • the underlying computer graphics system can be configured to generate the pixel value for an image using a selected methodology, such as that of the present invention.
  • FIG. 1 attached hereto depicts an illustrative computer system 10 that can carry out such computer graphics processes.
  • the computer system 10 in one embodiment includes a processor module 11 and operator interface elements comprising operator input components, such as a keyboard 12A and/or a mouse 12B (or digitizing tablet or other analogous element(s), generally identified as operator input elements 12), and an operator output element such as a video display device 13.
  • the illustrative computer system 10 can be of a conventional stored-program computer architecture.
  • the processor module 11 can include, for example, one or more processors, memory and mass storage devices, such as disk and/or tape storage elements (not separately shown), which perform processing and storage operations in connection with digital data provided thereto.
  • the operator input element(s) 12 can be provided to permit an operator to input information for processing.
  • the video display device 13 can be provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing.
  • The processor module 11 can generate information for display by the video display device 13 using a so-called "graphical user interface" ("GUI").
  • although the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG. 1.
  • the processor module 11 can include one or more network ports, generally identified by reference numeral 34, which are connected to communication links which connect the computer system 10 in a computer network. The network ports enable the computer system 10 to transmit information to, and receive information from, other computer systems and other devices in the network.
  • In a typical network organized according to, for example, the client-server paradigm, certain computer systems in the network are designated as servers, which store data and programs (generally, "information") for processing by the other, client computer systems, thereby to enable the client computer systems to conveniently share the information.
  • a client computer system which needs access to information maintained by a particular server will enable the server to download the information to it over the network.
  • the client computer system may also return the processed data to the server for storage. In addition to computer systems (including the above-described servers and clients), a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network.
  • the communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems.
  • Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message. In addition to the computer system 10 shown in the drawings, methods, devices or software products in accordance with the present invention can operate on any of a wide range of conventional computing devices and systems, such as those depicted by way of example in FIG. 2 (e.g., network system 100), whether standalone, networked, portable or fixed, including conventional PCs 102, laptops 104, handheld or mobile computers 106, or across the Internet or other networks 108, which may in turn include servers 110 and storage 112.
  • a software application configured in accordance with the invention can operate within, e.g., a PC 102 like that shown in FIGS. 2 and 3, in which program instructions can be read from ROM or CD-ROM 116 (FIG. 3), magnetic disk or other storage 120, and loaded into RAM 114 for execution by CPU 118.
  • Data can be input into the system via any known device or means, including a conventional keyboard, scanner, mouse, digitizing tablet, or other elements 103.
  • ASIC (Application-Specific Integrated Circuit)
  • the Shape-by-Shading Module 150 may include one or more of the following sub-modules: shading information input module 150a, model generator module 150b, and display output module 150c.
  • the Shape-by-Shading Module 150 may also include other components described herein, generally depicted in box 150d as "tools/API/plug-ins." As further shown in FIG. 4, the output of the Shape-by-Shading Module 150 may be provided in a number of different forms, including displayable images, digitally updated geometric models, subdivision surfaces, and the like.
  • PC's operating system and a computer program product configured in accordance with the present invention.
  • the term "computer program product" can encompass any set of computer-readable program instructions encoded on a computer readable medium.
  • a computer readable medium can encompass any form of computer readable element, including, but not limited to, a computer hard disk, computer floppy disk, computer-readable flash drive, computer-readable RAM or ROM element, or any other known means of encoding, storing or providing digital information, whether local to or remote from the workstation, PC or other digital processing device or system.
  • Various forms of computer readable elements and media are well known in the computing arts, and their selection is left to the implementer. In each case, the invention is operable to enable a computer system to calculate a pixel value, and the pixel value can be used by hardware elements in the computer system, which can be conventional elements such as graphics cards or display controllers, to generate a display-controlling electronic output. Conventional graphics cards and display controllers are well known in the computing arts, are not necessarily part of the present invention, and their selection can be left to the implementer.
  • FIG. 5 depicts a computer graphics system 200 for generating a three-dimensional model of an object by shading as applied by an operator or the like to a two-dimensional image of the object in the given state of its creation at any point in time, constructed in accordance with the invention.
  • the computer graphics system includes a processor module 201, one or more operator input devices 202 and one or more display devices 203.
  • the display device(s) 203 will typically comprise a frame buffer, video display terminal or the like, which will display information in textual and/or graphical form on a display screen to the operator.
  • the operator input devices 202 for a computer graphics system 200 will typically include a pen 204 which is typically used in conjunction with a digitizing tablet 205, and a trackball or mouse device 206.
  • the pen 204 and digitizing tablet can be used by the operator in several modes. In one mode, particularly useful in connection with the invention, the pen 204 and digitizing tablet are used to provide updated shading information to the computer graphics system. In other modes, the pen and digitizing tablet are used by the operator to input conventional computer graphics information, such as line drawings for, for example, surface trimming, and other information, to the computer graphics system 200, thereby to enable the system 200 to perform conventional computer graphics operations.
  • the trackball or mouse device 206 can be used to move a cursor or pointer over the screen to particular points in the image at which the operator can provide input with the pen and digitizing tablet.
  • the computer graphics system 200 may also include a keyboard (not shown) which the operator can use to provide textual input to the system 200.
  • the processor module 201 generally includes a processor, which may be in the form of one or more microprocessors, a main memory, and will generally include a mass storage subsystem including one or more disk storage devices.
  • the memory and disk storage devices will generally store data and programs (collectively, "information") to be processed by the processor, and will store processed data which has been generated by the processor.
  • The processor module includes connections to the operator input device(s) 202 and the display device(s) 203, and will receive information input by the operator through the operator input device(s) 202.
  • the processor module can provide video display information, which can form part of the information obtained from the memory and disk storage device as well as processed data generated thereby, to the display device(s) for display to the operator.
  • the processor module 201 may also include connections (not shown) to hardcopy output devices such as printers for facilitating the generation of hardcopy output, modems and/or network interfaces (also not shown) for connecting the system 200 to the public telephony system and/or a computer network for facilitating the transfer of information, and the like.
  • the computer graphics system 200 generates from input provided by the operator, through the pen and digitizing tablet and the mouse, information defining the initial and subsequent shape of a three-dimensional object, which information may be used to generate a two-dimensional image of the corresponding object for display to the operator, thereby to generate a model of the object.
  • the image displayed by the computer graphics system 200 represents the image of the object as illuminated from an illumination direction and as projected onto an image plane, with the object having a spatial position and rotational orientation relative to the illumination direction and the image plane and a scaling and/or zoom setting as selected by the operator.
  • the initial model used in the model generation process may be one of a plurality of default models provided by the computer graphics system itself, such as a model defining a hemispherical or ellipsoidal shape.
  • the initial model may be provided by the operator by providing an initial shading of at least one pixel of the image plane, using the pen 204 and digitizing tablet 205.
  • if the initial model is provided by the operator, one of the pixels on the image plane is selected to provide a "reference" portion of the initial surface fragment for the object, the reference initial surface fragment portion having a selected spatial position, rotational orientation and height value with respect to the image plane, and the computer graphics system determines the initial model for the rest of the surface fragment (if any) in relation to shading (if any) applied to other pixels on the image plane.
  • the reference initial surface fragment portion is selected to be the portion of the surface fragment corresponding to the first pixel on the image plane to which the operator applies shading.
  • the reference initial surface fragment portion is determined to be parallel to the image plane, so that a vector normal to the reference initial surface fragment portion is orthogonal to the image plane, and the reference initial surface fragment portion has a height value as selected by the operator.
  • the computer graphics system will display the image of the initial model, the image defining the shading of the object associated with the initial model as illuminated from the particular illumination direction and projected onto the image plane.
  • the operator, using the mouse and the pen and digitizing tablet, will provide updated shading of the image of the initial object, and/or extend the object by shading neighboring areas on the image plane, and the computer graphics system 200 will generate an updated model representing the shape of the object based on the updated shading provided by the operator.
  • the operator can increase or decrease the amount of shading applied to particular points on the image plane.
  • the operator, using the mouse or trackball and the pen and digitizing tablet, can perform conventional computer graphics operations in connection with the image, such as trimming of the surface representation of the object defined by the model.
  • the computer graphics system 200 can use the updated shading and other computer graphic information provided by the operator to generate the updated model defining the shape of the object, and further generate from the updated model a two-dimensional image for display to the operator, from respective spatial position(s), rotational orientation(s) and scaling and/or zoom settings as selected by the operator.
  • if the operator determines that the shape of the object as represented by the updated model is satisfactory, he or she can enable the computer graphics system 200 to store the updated model as defining the shape of the final object.
  • if the operator determines that the shape of the object as represented by the updated model is not satisfactory, he or she can cooperate with the computer graphics system 200 to further update the shading and other computer graphic information, in the process using three-dimensional rotation and translation and scaling or zooming as needed.
  • the computer graphics system 200 updates the model information, which is again used to provide a two-dimensional image of the object, from rotational orientations, translation or spatial position settings, and scale and/or zoom settings as selected by the operator.
  • The detailed operations performed by the computer graphics system 200 in determining the shape of an object will be described in connection with FIGS. 6-11.
  • the image of the object is projected onto a two-dimensional image plane 220 that is tessellated into pixels 221(i, j) having a predetermined number of rows and columns.
  • the image plane 220 defines an x, y Cartesian plane, with rows extending in the x direction and columns extending in the y direction.
  • the projection of the surface of the object is identified in FIG. 6 by reference numeral 222.
  • Each point on the image plane corresponds to a picture element, or "pixel," represented herein by π(i, j), with i ∈ [1, N] and j ∈ [1, M], where N is the maximum number of columns (index i ranging over the columns in the image plane) and M is the maximum number of rows (index j ranging over the rows in the image plane). In the illustrative image plane 220 depicted in FIG. 6, the number of columns N is eight, and the number of rows M is nine.
  • the rows may correspond to scan lines used by the device(s) to display the image.
  • Each pixel π(i, j) corresponds to a particular point (x_i, y_j) of the coordinate system, and N × M identifies the resolution of the image.
  • the computer graphics system 10 assumes that the object is illuminated by a light source having a direction
  • the computer graphics system 200 initializes the object with at least an infinitesimally small portion of the object to be modeled as the initial model. For each pixel π(i, j), the height value z(x, y) defining the height of the portion of the object projected onto the pixel is known, and is defined as a height field H(x, y).
  • the computer graphics system 200 displays the image representing the object defined by the initial model which is displayed to the operator on the display 203 as the image on image plane 220.
  • the operator can begin to modify the image by updating the shading of the image using the pen 204 and digitizing tablet 205 (FIG. 5).
  • the image of the initial model as displayed by the computer graphics system will itself be shaded to represent the shape of the object as defined by the initial model, as illuminated from the predetermined illumination direction and as projected onto the image plane.
  • Each pixel π(i, j) on the image plane will have an associated intensity value I(x, y) (which is also referred to herein as a "pixel value"), which represents the relative brightness of the image at the pixel π(i, j).
  • the operator preferably updates the shading for the image such that, for each pixel
  • After the operator updates the shading for a pixel, the computer graphics system will perform two general operations in generating the updated shape for the object. In particular, the computer graphics system 200 will (i) update the normal vector for the pixel and (ii) update the height value for the pixel.
  • the computer graphics system will perform these operations (i) and (ii) for each pixel π(i, j) whose shading is updated, as the shading is updated, thereby to provide a new normal vector field N(x, y) and height field H(x, y).
  • Operations performed by the computer graphics system 200 in connection with updating of the normal vector (item (i) above) for a pixel π(i, j) will be described in connection with FIGS. 7 and 8, and operations performed in connection with updating of the height value z(x, y) (item (ii) above) for the pixel π(i, j) will be described in connection with FIGS. 9 and 10.
  • the illumination direction is represented by the line extending along the vector corresponding to the arrow identified by legend "L". "L" specifically represents an illumination vector whose direction is based on the direction of illumination from the light source illuminating the object, and whose magnitude represents the magnitude of the illumination on the object provided by the light source.
  • the set of possible new normal vectors lies on the surface of the cone 231, which is defined by: n1 · L = I (2.04); that is, the set of vectors for which the dot product with the illumination vector corresponds to the pixel value "I" for the pixel after the updating of the shading as provided by the operator.
  • since the normal vector n1 is, as is the case with all normal vectors, normalized to have a predetermined magnitude value, preferably the value "one," the updated normal vector has a magnitude corresponding to: ||n1|| = 1 (2.05), where "||n1||" refers to the magnitude of the updated normal vector n1.
  • Equations (2.04) and (2.05) define a set of vectors, and the magnitudes of the respective vectors, one of which is the updated normal vector for the updated object at point z(x, y).
  • the computer graphics system 200 will select one of the vectors from the set as the appropriate updated normal vector n1, as follows.
  • the updated normal vector will lie on the surface of cone 231. It is apparent that, if the original normal vector n0 and the illumination vector L are not parallel, then they (that is, the prior normal vector n0 and the illumination vector L) will define a plane.
  • One of the lines 233 lies on the surface of the cone 231 on the side of the illumination vector L toward the prior normal vector n0, and the other line 233 lies on the surface of the cone 231 on the side of the illumination vector L away from the prior normal vector n0; the correct updated normal vector n1 is defined by the line on the cone 231 which is on the side of the illumination vector L toward the prior normal vector n0.
  • the direction of the updated normal vector can be determined from Equation (2.04) and the following. Since the prior normal vector n0 and the illumination vector L form a plane 232, their cross product, n0 × L, defines a vector that is normal to the plane 232. Thus, since the updated normal vector n1 also lies in the plane 232, the dot product of the updated normal vector n1 with the vector defined by the cross product between the prior normal vector n0 and the illumination vector L has the value zero, that is: n1 · (n0 × L) = 0 (2.06).
  • Equation (2.06) can be re-written as
  • the computer graphics system 200 (FIG. 5) will generate an updated normal vector n1 for each pixel π(i, j) in the image plane 220 based on the shading provided by the operator, thereby to generate an updated vector field N(x, y).
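As a sketch (with assumed, illustrative names and plain-tuple vectors, not the patent's implementation), the normal update described above can be carried out by building an orthonormal basis of the plane spanned by n0 and L: the constraints n1 · L = I (Equation (2.04)), ||n1|| = 1 (Equation (2.05)) and coplanarity with n0 and L (Equation (2.06)) then determine n1 up to the side of the cone, and taking the component toward n0 selects the correct line. L is assumed to be of unit length, I in [0, 1], and n0 not parallel to L.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def updated_normal(n0, L, I):
    # Orthonormal basis of the plane spanned by n0 and L:
    # e1 along L, e2 perpendicular to L but leaning toward n0.
    e1 = normalize(L)
    perp = tuple(a - dot(n0, e1) * b for a, b in zip(n0, e1))
    e2 = normalize(perp)  # assumes n0 and L are not parallel
    # n1 = I*e1 + sqrt(1 - I^2)*e2 satisfies n1 . L = I and |n1| = 1,
    # is coplanar with n0 and L, and lies on the n0 side of the cone.
    s = math.sqrt(max(0.0, 1.0 - I * I))
    return tuple(I * a + s * b for a, b in zip(e1, e2))
```

Choosing the positive e2 component is what implements the rule that the correct line on the cone is the one toward the prior normal n0.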
  • After the computer graphics system 200 has generated the updated normal vector for a pixel, it can generate a new height value z(x, y) for that pixel, thereby to update the height field H(x, y) based on the updated shading. Operations performed by the computer graphics system 200 in connection with updating the height value z(x, y) will be described in connection with FIGS. 9 and 10.
  • FIG. 9 depicts an illustrative updated shading for the image plane 220 depicted in FIG. 6.
  • In FIG. 9, the pixels have been provided with coordinates, with the rows being identified by numbers and the columns being identified by letters.
  • the shading of several pixels has been modified, and the computer graphics system 200 is to generate an updated height value h(x, y) for each such pixel, for use as the updated height value for the pixel in the updated height field H(x, y).
  • the computer graphics system 200 performs several operations, which will be described below, to generate an updated height value.
  • The operations performed by the computer graphics system 200 in generating an updated height value will be described in connection with one of the modified pixels in the image plane 220, along one of the directions, namely, the horizontal direction. Operations performed in connection with the other directions, and the other pixels whose shading is updated, will be apparent to those skilled in the art.
  • the computer graphics system 200 makes use of Bezier-Bernstein interpolation, which defines a curve P(t) of degree "n" as P(t) = Σ_{i=0..n} B_i C(n, i) t^i (1 − t)^(n−i), where C(n, i) denotes the binomial coefficient.
  • t is a numerical parameter on the interval between 0 and 1 , inclusive
  • the vectors B_i, defined by components (b_ix, b_iy, b_iz), define "n+1" control points for the curve P(t), with control points B_0 and B_n comprising the endpoints of the curve.
  • the tangents of the curve P(t) at the endpoints correspond to the vectors B_0 B_1 and B_(n−1) B_n.
  • the computer graphics system 200 uses a cubic Bezier-Bernstein interpolation (n = 3).
  • the points B_0, B_1, B_2 and B_3 are control points for the cubic curve P_3(t). Equation (2.09), as applied to the determination of the updated height value h_1 for the pixel, corresponds to: h_1 = h_a (1 − t)^3 + 3 B_1 t (1 − t)^2 + 3 B_2 t^2 (1 − t) + h_b t^3 (2.10)
  • It will be appreciated from Equation (2.10) that for t = 0, the updated height value h_1 corresponds to h_a, the height value of the first bounding pixel, and for t = 1, the updated height value h_1 corresponds to h_b, the height value of the second bounding pixel.
  • for values of t between 0 and 1, the updated height value h_1 is a function of the height values h_a and h_b of the bounding pixels and the height values for control points B_1 and B_2.
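The cubic interpolation of Equation (2.10) can be sketched directly. This is an illustrative Python example; the scalar arguments b1 and b2 stand in for the relevant components of the control points B_1 and B_2, which the description derives from the orthogonality constraints on the normals.

```python
def cubic_height(t, h_a, b1, b2, h_b):
    """Evaluate h(t) = h_a(1-t)^3 + 3 b1 t(1-t)^2 + 3 b2 t^2(1-t) + h_b t^3."""
    u = 1.0 - t
    return h_a * u**3 + 3.0 * b1 * t * u**2 + 3.0 * b2 * t**2 * u + h_b * t**3

# At the endpoints the curve reproduces the known heights:
# cubic_height(0.0, ...) == h_a and cubic_height(1.0, ...) == h_b,
# matching the t = 0 and t = 1 observations above.
```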
  • the vector from h_a to B_1 is orthogonal to the normal vector n_a at the first bounding pixel, and the vector from h_b to B_2 is orthogonal to the normal vector n_b at the second bounding pixel.
  • Equation (2.10), which is in vector form, gives rise to the following equations for each of the dimensions "x" and "z" (the "z" dimension being orthogonal to the image plane):
  • Equation (2.12) gives rise to the following two equations:
  • Equations (2.13) through (2.16), (2.24) and (2.25) are all one-dimensional in the respective x and z components.
  • In Equations (2.13) through (2.16), (2.24) and (2.25) there are six unknown values, namely, the value of parameter t, the x and z components of the vector B_1 (that is, values b_1x and b_1z), the x and z components of the vector B_2 (that is, values b_2x and b_2z), and the z component of the updated height vector h_1.
  • the computer graphics system 200 will, in addition to performing the operations described above in connection with the horizontal direction (corresponding to the "x" coordinate axis), also perform corresponding operations similar to those described above for each of the vertical and two diagonal directions to determine the updated height vector h_i for the pixel being updated. After the computer graphics system 200 determines the updated height vectors for all four directions, it will average them together.
  • the z component of the average of the updated height vectors corresponds to the height value for the updated model for the object.
  • the operations performed by the computer graphics system 200 will be described in connection with the flowchart in FIGS. 11A and 11B.
  • the operator will have a mental image of the object that is to be modeled by the computer graphics system.
  • the initial model for the object is determined (step 250), and the computer graphics system displays a two-dimensional image thereof to the operator based on a predetermined illumination direction, with the display direction corresponding to an image plane (reference image plane 20 depicted in FIG. 6) (step 251).
  • the initial model may define a predetermined default shape, such as a hemisphere or ellipsoid, provided by the computer graphics system, or alternatively a shape as provided by the operator.
  • the shape will define an initial normal vector field N(x, y) and height field H(x, y), defining a normal vector and height value for each pixel in the image.
  • the operator can select one of a plurality of operating modes, including a shading mode in connection with the invention, as well as one of a plurality of conventional computer graphics modes, such as erasure and trimming (step 252). If the operator selects the shading mode, the operator will update the shading of the two-dimensional image by means of
  • the system's pen and digitizing tablet (step 253)
  • the computer graphics system 200 can display the shading to the operator.
  • the shading that is applied by the operator will preferably be a representation of the shading of the finished object as it would appear illuminated from the predetermined illumination direction, and as projected onto the image plane as displayed by the computer graphics system 200.
  • When the operator has updated the shading for a pixel in step 253, the computer graphics system 200 will generate an update to the model of the object. In generating the updated model, the computer graphics system 200 will first determine, for each pixel in the image, an updated normal vector, as described above in connection with FIGS. 7 and 8, thereby to provide an updated normal vector field for the object (step 254). Thereafter, the computer graphics system 200 will determine, for each pixel in the image, an updated height value, as described above in connection with FIGS. 9 and 10, thereby to provide an updated height field for the object (step 255).
  • After generating the updated normal vector field and updated height field, thereby to provide an updated model of the object, the computer graphics system 200 will display an image of the updated model to the operator from one or more directions and zooms as selected by the operator (step 256), in the process rotating, translating and scaling and/or zooming the image as selected by the operator (step 257). If the operator determines that the updated model is satisfactory (step 258), which may occur if, for example, the updated model corresponds to his or her mental image of the object to be modeled, he or she can enable the computer graphics system 200 to save the updated model as the final model of the object (step 259). On the other hand, if the operator determines in step 258 that the updated model is not satisfactory, he or she can enable the computer graphics system 200 to return to step 251.
  • in step 252, if the operator selects another operating mode, such as the erasure mode or a conventional operational mode such as the trimming mode, the computer graphics system will sequence to step 260 to update the model based on the erasure information, or the trimming and other conventional computer graphic information, provided to the computer graphics system 200 by the operator.
  • the computer graphics system will sequence to step 257 to display an image of the object based on the updated model. If the operator determines that the updated model is satisfactory (step 258), he or she can enable the computer graphics system 200 to save the updated model as the final model of the object (step 259). On the other hand, if the operator determines in step 258 that the updated model is not satisfactory, he or she can enable the computer graphics system 200 to return to step 251.
  • the operator can enable the computer graphics system 200 to perform steps 251, 253 through 257 and 260 as the operator updates the shading of the image of the object (step 253), or provides other computer graphic information (step 260), and the computer graphics system 200 will generate, in steps 254 and 255, the updated normal vector field and updated height field, or, in step 260, conventional computer graphic components, thereby to define the updated model of the object.
  • if the operator determines in step 258 that the updated model corresponds to his or her mental image of the object, or is otherwise satisfactory, he or she can enable the computer graphics system 200 to store the updated normal vector field and the updated height field to define the final model for the object (step 259).
  • the invention provides a number of advantages.
  • the computer graphics system provides an interactive computer graphics system which allows an operator, such as an artist, to imagine the desired shape of an object and how the shading on the object might appear with the object being illuminated from a particular illumination direction and as viewed from a particular viewing direction (as defined by the location of the image plane).
  • the computer graphics system displays a model of the object, as updated based on the shading, to the operator.
  • the operator can accept the model as the final object, or alternatively can update the shading further, from which the computer graphics system will further update the model of the object.
  • the computer graphics system constructed in accordance with the invention avoids the necessity of solving partial differential equations, which is required in prior art systems which operate in accordance with the shape-from-shading methodology.
  • a further advantage of the invention is that it readily facilitates the use of a hierarchical representation for the model of the object that is generated.
  • the operator enables the computer graphics system 200 to increase the scale of the object or zoom in on the object thereby to provide a higher resolution
  • a plurality of pixels of the image will display a portion of the image which, at the lower resolution, were associated with a single pixel
  • the computer graphics system will generate the normal vector and height value for each pixel at the higher resolution for which the shading is updated as described above, thereby to generate and/or update the portion of the model associated with the updated shading at the increased resolution.
  • the updated portion of the model at the higher resolution will be associated with the particular portion of the model which was previously defined at the lower resolution, thereby to provide the hierarchical representation, which may be stored.
  • the object as defined by the model inherits a level of detail which corresponds to a higher resolution in the underlying surface representation.
  • Corresponding operations can be performed if the operator enables the computer graphics system 200 to decrease the scale of the object or zoom out from the object thereby providing a lower resolution.
  • the computer graphics system 200 can retain the object model information, that is, the normal vector field information and height field information, for a number of updates of the shading as provided by the operator, which it (that is, system 200) may use in displaying models of the object for the respective updates.
  • This can allow the operator to view images of the respective models to, for example, enable him or her to see the evolution of the object through the respective updates.
  • this can also allow the operator to return to a model from a prior update as the base which is to be updated. This will allow the operator, for example, to generate a tree of objects based on different shapings of particular models.
  • although the computer graphics system 200 has been described as making use of an orthogonal projection and a single light source, it will be appreciated that other forms of projection, including perspective projection, and multiple light sources can be used. In addition, although the computer graphics system 200 has been described as providing shape of an object by shading of an image of the object, it will be appreciated that it may also provide computer graphics operations, such as trimming and erasure, through appropriate operational modes of the pen 204 and digitizing tablet.
  • a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program.
  • Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner.
  • the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown), which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.
  • Section 3.2 sets forth a short summary of the SBS shading and shaping process. Sections 3.3 through 3.7 describe specific extensions and other improvements to the SBS shading and shaping process.
  • FIG. 12 shows a flow diagram illustrating the SBS shading and shaping cycle 300.
  • Step 301 Hierarchical subdivision surfaces, polygon meshes and Non-Uniform Rational B-Spline (NURBS) surfaces are accepted as input to the Shape-by-Shading (SBS) process.
  • Step 302 Once a subdivision surface is in place and displayed to the user, it is matched to a 2D model view, including information about grid corners, grid width and height, pixel size and camera to object transformation.
  • Steps 303-305 Using the 2D model view, the user sets a lighting direction, tunes input parameters, and shades, i.e., modifies the intensities of selected pixels, or loads a set of pre-shaded pixels. This information is passed to the shaping algorithm 306.
  • Steps 306-309 The shaping algorithm 306 determines the correct geometric alterations to make to the surface. More surface primitives are added where needed via subdivision in the area of the shading in order to ensure that sufficient detail is present (step 307). A height field is found that reflects in 3D the changes that were requested in the 2D setting (step 308), and the subdivision surface is then altered so that it reflects these heights (step 309). The result is a shaped hierarchical subdivision surface that can be altered further (steps 302-309), saved (step 310), or converted to the desired output surface type.
  • the presently described systems and techniques extend SBS to accept any hierarchical subdivision surface, polygon mesh, or NURBS surface.
  • the incoming mesh is converted to a hierarchical subdivision surface if it is not already one, and the resulting subdivision surface is the one on which the SBS shading and shaping cycle is performed.
  • Adaptive subdivision is used to add detail to the surface, and analysis and synthesis are used to propagate changes to all levels of the surface, allowing for modifications at specified levels of detail.
  • Features of the HSDS library are set forth in patents owned by the owner of the present patent application.
  • The subdivision surface model that results from the SBS process can be converted to another surface type if desired.
  • In this way, SBS allows both for flexibility in choosing incoming and outgoing mesh types and takes advantage of hierarchical subdivision properties
  • the surface of interest, which is assumed to be continuous, is projected orthographically onto the viewing plane.
  • This projection has an associated height field, whose intensities are determined by one light source with a Lambertian reflectance map, so that the discrete intensity I at the point (u, v) in the model view of a given surface with height field H is defined by:
  • N_uv is a discrete normal to the surface and L is a unit vector that points in the direction of the light source, which is infinitely far away.
  • the intensities of selected pixels on the projected surface are changed by means of shading or loading a pre-defined set of pixels.
  • the SBS shaping algorithm finds a shape determined by the shading
  • One solution method involves Bezier-Bernstein polynomials
  • a new technique for interpreting 2D shading in the model view as 3D shape on the surface is now implemented in SBS and is described in the remainder of this section.
  • Let P be the set of pixels in the model view whose intensities have been modified. This set is called the set of modified pixels. It is possible, through a series of simplifications, to reduce the pixels over which Function 3.02 is summed to
  • the set of height increments F can be reduced to a vector x containing one entry for each pixel or connected area whose corresponding height value may be altered by the algorithm.
  • the size of F matches that of the height field of the projected surface.
  • the vector x reduces the size based on the number of unique non-zero height increments in F, a potentially much smaller set. Function 3.02 can then be reduced to the unconstrained minimization of
  • the reduced function is less computationally intensive to minimize, both because it is only necessary to sum over a neighborhood of the modified pixels instead of all pixels in the model view and because the dimension of the minimization problem is reduced from the size of F to the length of x.
  • The method used to minimize Function 3.03 is the Trust-Region method.
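The reduction in the summation domain can be illustrated with a small sketch. The helper name is hypothetical; pixels are (u, v) integer pairs, and the neighborhood is taken as the 3x3 block around each modified pixel, clipped to the model view.

```python
def reduced_domain(modified, rows, cols):
    """Union of the 3x3 neighborhoods of the modified pixels, clipped to
    the model view; the reduced objective is summed only over this set
    instead of over every pixel in the model view."""
    domain = set()
    for (u, v) in modified:
        for du in (-1, 0, 1):
            for dv in (-1, 0, 1):
                if 0 <= u + du < rows and 0 <= v + dv < cols:
                    domain.add((u + du, v + dv))
    return domain
```

For a single interior modified pixel this yields 9 pixels, however large the model view is, which is the source of the computational saving described above.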
  • a C++ application programming interface has been developed recently for SBS. It works above the mental matter® library and requires libmentalmatter.lib at linking time.
  • the mental matter library is initialized and terminated internally within the SBS library.
  • Initialization of miSbs module is implicit and is done when first accessing the class.
  • An instance of miSbs module is returned by its static method get.
  • The terminate method must be called when unloading the library.
  • SBS uses objects from the miCapi cl subsurf class of the mental matter library, which are wrapped into a miSbs surface object via the create surface method.
  • a plugin writer may provide the SBS API with an instance of miCapi cl subsurf, or with a tessellated mesh in the form of a nnCkoBox. Another possibility is not to provide a surface, in which case the library creates a wrapper holding an empty subdivision surface.
  • Other methods in the miSbs module class include create viewer and get solver.
  • the miSbs surface class is used in calls to miSbs solver to access the SBS shaping algorithm and in miSbs ogl view for display and interaction.
  • miSbs surface implementations are instantiated using the create surface method of miSbs module, and are destroyed with the destroy method.
  • Other methods in the miSbs surface class include get subsurf, get depth and convert (which converts mesh indices to and from surf indices).
  • Other related classes are provided to speed up integration. For instance, miSbsMax mesh hides the technical details of converting 3ds max® meshes to and from the miSbs surface object.
  • the miSbs ogl view class may be used to display an instance of miSbs surface.
  • miSbs ogl view implementations are instantiated using the create viewer method of miSbs module and are destroyed with the destroy method.
  • the miSbs ogl view entity does not maintain any reference to a given miSbs surface instance. It only carries data related to its mesh representation (possibly simplified), and auxiliary graphic data such as OpenGL® contexts and triangle strip buffers.
  • Methods of the miSbs ogl view class include set settings and update, as well as a set of methods around the 2D projection of mesh vertices and faces.
  • a third class, called miSbs solver, is exposed in the API to perform SBS operations. It is implemented as a static, stateless instance and is accessed via the get solver method of miSbs module. Its methods include get default settings, update (the main algorithm that determines 3D shape from 2D shading), and cancel.
  • the plugin features include:
  • Mesh-Based SBS The ability to run the SBS process on polygon meshes, without first converting to and/or using properties of subdivision surfaces.
  • Real-Time Mesh Display The ability to display large, complex meshes at an interactive rate.
  • Other modifications may include view-dependent simplification and custom mesh operations.
  • Trimming The ability to trim arbitrarily across surface faces.
  • Creasing The ability to crease arbitrarily across surface faces.
  • NURBS-Based SBS The ability to run the SBS process on Non-Uniform Rational B-Spline (NURBS) surfaces.
  • SBS Shape-By-Shading
  • λ is a positive constant called the smoothing coefficient, and the integration is performed over the area of the projected surface in the viewing window.
  • the ideal result is a new set of surface heights, with smoothness determined by λ, whose intensities match those of the shading.
  • the 2D shading results in a 3D modification.
  • the SBS process must be done in an efficient way and one that does not disturb the continuity of the underlying surface. In order to move from a theoretical setting to a computational one it is necessary to discretize, which is discussed in the next section.
  • the viewing window is rasterized as a rectangular grid of pixels, M, that have integer-valued coordinates (u, v) and corresponding camera space coordinates (x, y).
  • This rasterization is called the "model view." A neighborhood of raster pixel (u0, v0) ∈ M is defined to be all pixels (u, v) ∈ M such that
  • A pixel is defined to be on the boundary of set A if the pixel is a member of A but one or more of its neighbors is not in A. The boundary of A is denoted ∂A.
  • the interior Å of A is defined to be Å = A \ ∂A.
  • \ denotes set subtraction.
  • Pixels in the interior of the model view have a neighborhood containing 9 pixels.
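These set definitions translate directly into code. A minimal sketch (the function names are not from the source):

```python
def neighborhood(u, v):
    """The 3x3 neighborhood of pixel (u, v), including the pixel itself,
    so an interior model-view pixel has a 9-pixel neighborhood."""
    return {(u + du, v + dv) for du in (-1, 0, 1) for dv in (-1, 0, 1)}

def boundary(A):
    """The boundary of A: members of A with at least one neighbor outside A."""
    return {p for p in A if not neighborhood(*p) <= A}

def interior(A):
    """The interior of A: A minus its boundary (set subtraction)."""
    return A - boundary(A)
```

For a 3x3 block of pixels, only the center pixel is interior; the other eight form the boundary.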
  • the SBS algorithm assumes orthographic projection of the surface onto the viewing plane.
  • S denotes the set of pixels in the model view that intersect the projected surface.
  • the height field of the surface, denoted by H, is defined over the pixels in S and contains the heights of the surface in camera space.
  • H(u, v) is the floating-point height of the part of the surface visible at pixel (u, v) in the model view.
  • Non-square pixels can be used, for example, in a mesh-based implementation of SBS.
  • projected primitive vertices can be used to define the pixels.
  • c is the floating-point width (height), in camera space, between neighboring raster points.
  • the discrete first derivative in the u-direction of a surface with height field H is based on a simple slope formula across the two pixels that straddle the point of interest and is defined to be
  • a discrete normal to a surface with height field H is defined to be
  • the light source is described by a unit vector L that points in the direction of the light, which is infinitely far away.
  • the discrete intensity I of a given surface with height field H and light vector L is defined by
  • This formula produces scalar values between 0.0 (black) and 1.0 (white), and implies that areas of the surface that face toward the light source are lighter than those parts that face away.
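A discrete Lambertian intensity of this kind can be sketched with NumPy. This is an illustrative reconstruction under stated assumptions, not the patent's exact formulas: normals are built from central differences across the two straddling pixels, the pixel spacing is a parameter named delta here, and the dot product with the unit light vector L is clamped to [0, 1].

```python
import numpy as np

def discrete_intensity(H, L, delta=1.0):
    """Lambertian intensity of height field H lit by unit vector L
    (pointing toward an infinitely distant light source)."""
    # Discrete first derivatives: slope across the two straddling pixels.
    Hu = (H[2:, 1:-1] - H[:-2, 1:-1]) / (2.0 * delta)
    Hv = (H[1:-1, 2:] - H[1:-1, :-2]) / (2.0 * delta)
    # Unnormalized surface normal (-Hu, -Hv, 1), then normalize.
    N = np.stack([-Hu, -Hv, np.ones_like(Hu)], axis=-1)
    N /= np.linalg.norm(N, axis=-1, keepdims=True)
    # N.L clamped to [0, 1]: surfaces facing the light are lighter.
    return np.clip(N @ L, 0.0, 1.0)
```

A flat height field lit head-on (L along the view axis) evaluates to 1.0 (white) everywhere; tilting the surface away from L darkens it toward 0.0.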
  • the discrete second derivative in the u-direction is defined to be
  • a discretized version of Function 4.01 can be made by using the definitions of discrete intensity and curvature. Since those formulas depend on neighboring pixel values, only pixels that are both in the interior of the model view and in the interior of the projected surface will be considered as being contained in the area of interest.
  • the discrete version of Function 4.01 is defined to be
  • λ is a positive constant called the smoothing coefficient and F is a set of height increments
  • Function 4.10 need only involve a sum over pixels in a neighborhood of those pixels whose intensities were modified by the user, rather than all pixels in the model view.
  • the view code aids in the development of this reduction in calculation. It categorizes pixels by proposed height increment.
  • the notions of path and connected set are needed to build it.
  • a path from pixel (u0, v0) to pixel (u1, v1) is defined to be any sequence of pixels
  • Set A is a connected component of set B if A ⊆ B and for each (u0, v0), (u1, v1) ∈ A there is a path from (u0, v0) to (u1, v1) completely contained in A.
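Connected components under this path definition can be computed with a standard flood fill; a sketch (the helper name is not from the source), using the 3x3 neighborhood for adjacency:

```python
from collections import deque

def connected_components(B):
    """Partition pixel set B into maximal connected components, where two
    pixels are adjacent if each lies in the other's 3x3 neighborhood."""
    remaining, components = set(B), []
    while remaining:
        seed = remaining.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            u, v = queue.popleft()
            for du in (-1, 0, 1):
                for dv in (-1, 0, 1):
                    q = (u + du, v + dv)
                    if q in remaining:
                        remaining.remove(q)
                        comp.add(q)
                        queue.append(q)
        components.append(comp)
    return components
```

Each returned set is maximal: no other connected component of B properly contains it, which matches the maximality condition used to build the partition T below.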
  • the view code assumes that the height increments are constant on connected components of Å of S minus P. In order to avoid surface artifacts, if
  • Let T = (T0, T1, T2, ..., Tm) be a partition of the interior of S minus P such that T0 is the union of all connected components that intersect ∂S ∪ ∂M, and Ti, for 0 < i ≤ m, is a connected component that is maximal in the sense that there is no connected component of the interior of S minus P that properly contains it. Then the set F of increments has at most m + n unique non-zero values, one for each of the
  • Function 4.17 is the final version of the discrete function to be minimized by the shaping algorithm. It is called the "objective function" and is of dimension m + n.
  • SBS uses the Trust-Region Newton-CG method to minimize the objective function (Function 4.17).
  • the main ideas of Trust-Region methods are to set up a quadratic model for the function, to minimize the model function within a "trust region," to adjust the trust region according to certain criteria, to minimize the model function within the new trust region, to adjust the region again, to minimize again, etc. Given certain restrictions, such a method is guaranteed to converge to a point corresponding to a local minimum of the original function.
  • ∇f is the gradient vector
  • ∇²f is the Hessian matrix
  • the "Newton" in the Trust-Region Newton-CG name comes from the fact that the Hessian matrix is used in the model. Some other, usually positive definite, matrix may be used instead, in which case "Newton" is dropped from the name. In the case of SBS, use of the Hessian matrix is convenient and allows for weakened convergence conditions to be used. If ||∇f(x^i)||
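As a point of comparison (not the patent's implementation), SciPy ships an off-the-shelf Trust-Region Newton-CG solver under the method name 'trust-ncg'. The snippet applies it to the Rosenbrock test function as a stand-in for the SBS objective, supplying the exact gradient and Hessian so the quadratic model uses the true Hessian, as described above:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

# Trust-Region Newton-CG: build a quadratic model from the gradient and
# Hessian, minimize it within a trust region, and adjust the region.
res = minimize(rosen, np.array([-1.2, 1.0]), method='trust-ncg',
               jac=rosen_der, hess=rosen_hess)
print(res.x)  # approaches the true minimizer (1, 1)
```

Swapping in a user-supplied objective with its gradient and Hessian is enough to reproduce the outer loop sketched in the text; the inner trust-region subproblem is solved by a CG variant, as discussed below.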
  • ∇r is the residual gradient vector, as previously defined, and ∇²r_i is the residual Hessian
  • the formula for intensity can be used to write residuals of the first kind as
  • Formulas (4.33) and (4.34) can be broken down even further by finding formulas for the partial derivatives of ΔH_s(q_i) and D_u H_s(q_i). Recall that
  • SBS uses the CG-Steihaug method to find a minimum of the model function in the trust region.
  • Conjugate gradient methods try to solve a linear system Ax = b, where A is a symmetric matrix.
  • the problem can be re-formulated as follows:
  • A and b can be expressed in terms of the Trust-Region method as follows:
  • CG methods use the notion of conjugate gradient (CG) to build a sequence of vectors that converge to the minimum of the model function. For a given i, a set of vectors
  • the CG-Steihaug method is used at step i in the Trust-Region Newton-CG method to find a point at which a minimum of the model function occurs in the trust region.
  • Three sequences are used to do this. They are d^(j), r^(j) and p^(j).
  • d^(j) is a sequence of conjugate gradient direction vectors.
  • r^(j) is a sequence of residuals, and p^(j) converges to the desired minimum p*.
  • the conjugacy of d^(j) guarantees convergence in at most m + n steps.
  • the starting quantities for the sequences are p^(0) = 0, r^(0) = -∇f(x^i) = b, and
  • the method described is a standard CG method.
  • the sequence p^(j) is generated until the residual falls under some threshold.
  • The Steihaug variant of the CG method takes into account the cases of negative curvature and of steps that would leave the trust region.
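The inner solve can be sketched as a standard CG-Steihaug routine; this follows the textbook method (Nocedal and Wright) rather than the patent's code, and the names are hypothetical. It approximately minimizes the quadratic model m(p) = g·p + ½ p·B·p subject to ||p|| ≤ delta, handling the two extra Steihaug cases: negative curvature along a direction, and a step that crosses the trust-region boundary.

```python
import numpy as np

def cg_steihaug(B, g, delta, tol=1e-8, max_iter=None):
    """Approximately minimize m(p) = g.p + 0.5 p.B.p with ||p|| <= delta."""
    n = g.size
    max_iter = max_iter or 2 * n
    p = np.zeros(n)
    r = g.copy()          # residual of the model gradient, r0 = g
    d = -r                # first conjugate search direction
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0.0:                       # negative curvature: go to boundary
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:  # step leaves the trust region
            return _to_boundary(p, d, delta)
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:     # residual below threshold
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d               # next conjugate direction
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, delta):
    # Positive root tau of ||p + tau*d|| = delta (quadratic in tau).
    a, b, c = d @ d, 2.0 * (p @ d), p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p + tau * d
```

With a positive definite B and a large delta this reduces to plain CG and returns the unconstrained minimizer; with a small delta or negative curvature it stops on the boundary of the trust region.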
  • Equations (4.53) and (4.54) show that for each i, (a_i)_jk is nonzero only if j and k are both in
  • a Cauchy point p^C is defined to be
  • the move threshold is set to 0.1 ∈ (0, 1/4).
  • f is a sum of squares, so f ≥ 0.
  • The threshold is chosen in (0, 1/4), so if the center of the trust region is moved then ρ > 0. Since the CG-Steihaug method minimizes the model in the trust region, then
  • FIGS. 16A and 16B set forth tables 500a and 500b providing, for convenient reference, a listing of mathematical notation used in describing systems and techniques according to aspects of the present invention.
  • FIGS. 17-22 show a series of flowcharts illustrating a generalized method 600 and sub-methods 620, 640, 660, 680, and 700 according to the above-discussed aspects of the invention for generating a geometrical model representing geometry of at least a portion of a surface of a three-dimensional (3d) object by shading by an operator in connection with a two-dimensional (2d) image of the object, the image representing the object as projected onto an image plane.
  • the generalized method 600 shown in FIG. 17 comprises the following steps:
  • Step 601 Receiving shading information provided by the operator in connection with the image of the object, the shading information representing a change in brightness level of at least a portion of the image.
  • Step 602 Generating, in response to the shading information, an updated geometrical model of the object, the shading information being used to determine at least one geometrical feature of the updated geometrical model.
  • Step 603 Displaying the image of the object as defined by the updated geometrical model.
  • the generalized method 600 can operate upon a digital input of any hierarchical subdivision surface, polygon mesh or NURBS surface.
  • Generalized method 600 may include sub-method 620 shown in FIG. 18, comprising the following steps:
  • Step 621 Once a subdivision surface has been generated and displayed to a user, matching the subdivision surface to a 2d model view, the 2d model view including information about grid corners, grid width and height, pixel size and camera to object transformation.
  • Step 622 Utilizing the 2d model view to set a lighting direction, tune input parameters and shade, thereby modifying the intensities of selected pixels; or load a set of pre-shaded pixels.
  • the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
  • Step 623 Determining the correct geometric alterations to make to the surface, adding surface primitives where needed via subdivision in the area of the shading, in order to ensure that sufficient detail is present; and determining a height field that reflects in 3D the changes that were requested in the 2D setting, altering the subdivision surface to reflect the determined height values, thereby resulting in a shaped, hierarchical subdivision surface that can be altered further.
  • the surface primitives can include any of triangles, quadrilaterals, or other polygons.
  • Generalized method 600 may also include sub-method 640 shown in FIG. 19, comprising the following steps: Step 641: Creating an underlying subdivision surface.
  • Step 642 Displaying a 2D shade view.
  • Step 643 Enabling a user to set lighting, shading, and tuning parameters.
  • Step 644 Executing a shaping process comprising (a) introducing detail on the surface; (b) determining new height parameters for the surface; and (c) shaping the subdivision surface, thereby generating a 3D subdivision surface.
  • the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
  • Generalized method 600 may also include sub-method 660 shown in FIG. 20, comprising the following steps: Step 661: Receiving an input comprising either a mesh representation or a NURBS surface.
  • Step 662 Converting the input to a hierarchical subdivision surface if it is not already one.
  • Step 663 Performing shading and shaping on the hierarchical subdivision surface.
  • Step 664 Utilizing adaptive subdivision to add detail to the surface, and analysis and synthesis to propagate changes to all levels of the surface, thereby allowing for modifications at selected levels of detail.
  • Step 665 Providing a hierarchical subdivision surface library.
  • Step 666 Converting the subdivision surface model resulting from the SBS process, to another surface- type if desired.
  • Generalized method 600 may also include sub-method 680 shown in FIG. 21, comprising the following steps:
  • Step 681 Applying a selected shaping operation, the selected shaping operation being configured to attempt to produce a set of height increments F over the model view that minimizes the function given by:
  • Step 682 Reducing the function to the unconstrained minimization of:
  • the method used to perform the minimization is a trust-region method.
  • Step 683 Performing further reduction from summing over all the pixels in the model view that intersect the interior of the projected surface to summing only over that set reduced by intersecting it with the neighborhood of modified pixels, such that the calculation need not be made over the entire projected surface as seen in the model view, the reduced set being referred to as Q.
  • Generalized method 600 may also include the following sub-method 700 shown in FIG. 22, comprising the following steps:
  • Step 701 Modeling the function by the quadratic function:
  • ∇f is the gradient vector of f and ∇²f is the Hessian matrix of f.
  • Step 702 Minimizing the model F in a selected region.
  • the selected region is ||x|| ≤ Δ, for some Δ > 0.
  • Step 703 Implementing minimization utilizing the CG-Steihaug method with a special sparse matrix multiplication.
  • Step 704 Constructing a test value from the resulting minimum point x*; the test value is:
  • Step 705 Repeating the process until a minimum for f in the trust region is found based on an established criterion. In the present example, the criterion to stop the process is if
  • the plug-in product features may include any of a shading tool with a 2d paint function and the ability to load and save shadings, light controls, parameter tuning, updating of surface shape based on shading information, light direction and input parameters, an undo/redo function internal to the modifier, a tool for selecting an area to be updated, utilizing a masking technique, and a selection tool with a set of standard subdivision surface manipulations, wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
  • the generalized method and sub-methods may include the ability to run the SBS process on polygon meshes or NURBS surfaces without first converting to or using properties of subdivision surfaces.
  • the method and sub-methods may further include the ability to display large, complex meshes at an interactive rate, the ability to trim arbitrarily across surface faces, and/or the ability to sketch contour lines to produce an initial 3d shape, useable in conjunction with the SBS modeling process.
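The trust-region procedure of Steps 701 through 705, with the CG-Steihaug minimization of Step 703, can be sketched in outline as follows. This is an illustrative sketch only: the function names, the trust-region update constants, and the step-acceptance threshold are assumptions of the sketch and are not taken from the present description.

```python
import numpy as np

def steihaug_cg(g, H, delta, tol=1e-8, max_iter=50):
    """Approximately minimize the model F(p) = g.p + 0.5 p^T H p
    subject to ||p|| <= delta (Step 703, CG-Steihaug)."""
    p = np.zeros_like(g)
    r, d = g.copy(), -g.copy()
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:                          # negative curvature: step to boundary
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:   # step leaves the region: clip to boundary
            return _to_boundary(p, d, delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        p, r, d = p_next, r_next, -r_next + beta * d
    return p

def _to_boundary(p, d, delta):
    # Solve ||p + tau d|| = delta for tau >= 0.
    a, b, c = d @ d, 2.0 * (p @ d), p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p + tau * d

def trust_region_minimize(f, grad, hess, x0, delta=1.0, eta=0.1, tol=1e-6, max_iter=100):
    """Steps 701-705: model f quadratically, minimize the model in the trust
    region, test the step, and adjust the region until the gradient is small."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:           # Step 705: stopping criterion
            break
        p = steihaug_cg(g, H, delta)
        predicted = -(g @ p + 0.5 * (p @ (H @ p)))
        rho = (f(x) - f(x + p)) / predicted if predicted > 0 else 0.0
        if rho > eta:                         # Step 704: test value accepts the step
            x = x + p
        if rho < 0.25:                        # shrink or grow the trust region
            delta *= 0.25
        elif rho > 0.75:
            delta = min(2.0 * delta, 10.0)
    return x
```

Applied to a simple quadratic objective, the outer loop takes boundary-limited steps while the trust region grows, then a full Newton step once the region is large enough.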

Abstract

System for generating a geometrical model representing a portion of a three-dimensional object by shading by an operator in connection with a two-dimensional image of the object, the image representing the object as projected onto an image plane, comprising receiving shading information, updating a geometric model, displaying the updated geometric model, and operating on input of a hierarchical subdivision, polygon mesh, or NURBS surface.

Description

MODELING THE THREE-DIMENSIONAL SHAPE OF AN OBJECT BY SHADING OF A TWO-DIMENSIONAL IMAGE
This application is a continuation-in-part of commonly owned, co-pending U.S. Patent Application Serial No. 10/795,704 (Attorney Docket MENT-003DI), filed on March 5, 2004; which is a divisional of U.S. Patent Application No. 09/027,175 (Attorney Docket MENT-003), filed on Feb. 20, 1998 (now U.S. Patent No. 6,724,383); which claims the priority benefit of U.S. Provisional Application for Patent Serial No. 60/038,888 (Attorney Docket MENT-003-PR), filed on Feb. 21, 1997; all three applications being incorporated herein by reference.
This application for U.S. Patent also claims the priority benefit of U.S. Provisional Patent Application Serial No. 60/752,230 (Attorney Docket MNTL-106-PR), filed December 20, 2005; and U.S. Provisional Patent Application Serial No. 60/823,464 (Attorney Docket MENT-089-B-PR), filed Aug. 24, 2006; these two applications also being incorporated herein by reference.
Also incorporated herein by reference are commonly owned, co-pending U.S. Patent Application Serial No. 09/852,906 (MENT-060), filed May 9, 2001, now allowed; and U.S. Patent Application Serial No. 10/062,192 (MENT-062), filed Feb. 1, 2002, now allowed.
Field of the Invention
The present invention relates to the field of computer graphics, computer-aided geometric design and the like, and in particular to improved systems and techniques for modeling the three- dimensional shape of an object by shading of a two-dimensional image.
Background of the Invention
In computer graphics, computer-aided geometric design and the like, an artist, draftsman or the like (generally referred to herein as an "operator") attempts to generate a three-dimensional model of an object, as maintained by a computer, from lines defining two-dimensional views of objects. Conventionally, computer-graphical arrangements generate a three-dimensional model from, for example, various two-dimensional line drawings comprising contours and/or cross-sections of the object, by applying a number of operations to such lines which will result in two-dimensional surfaces in three-dimensional space, and by subsequent modification of parameters and control points of such surfaces to correct or otherwise modify the shape of the resulting model of the object. After a three-dimensional model for the object has been generated, it may be viewed or displayed in any of a number of orientations.

In a field of artificial intelligence commonly referred to as robot vision or machine vision (which will generally be referred to herein as "machine vision"), a methodology referred to as "shape from shading" is used to generate a three-dimensional model of an existing object from one or more two-dimensional images of the object as recorded by a camera. Generally, in machine vision, the type of the object recorded on the image(s) is initially unknown by the machine, and the model of the object that is generated is generally used, for example, to facilitate identification of the type of the object depicted on the image(s) by the machine or another device. In the shape from shading methodology, the object to be modeled is illuminated by a light source, and a camera, such as a photographic or video camera, is used to record the image(s) from which the object will be modeled. It is assumed that the orientation of the light source, the camera position and the image plane relative to the object are known.
In addition, it is assumed that the reflectance properties of the surface of the object are also known. It is further assumed that an orthographic projection technique is used to project the surface of the object onto the image plane; that is, it is assumed that an implicit camera that is recording the image on the image plane has a focal length of infinity. The image plane represents the x, y coordinate axes (that is, any point on the image plane can be identified by coordinates (x, y)), and the z axis is thus normal to the image plane; as a result, any point on the surface of the object that can be projected onto the image plane can be represented by the coordinates (x, y, z). The image of the object as projected onto the image plane is represented by an image irradiance function I(x, y) over a two-dimensional domain Ω ⊂ R², while the shape of the object is given by a height function z(x, y) over the domain Ω.
The image irradiance function I(x, y) represents the brightness of the object at each point (x, y) in the image. In the shape from shading methodology, given I(x, y) for all points (x, y) in the domain, the shape of the object, given by z(x, y), is determined. It would be desirable to provide improved methods and systems for generating a three-dimensional model of an object by shading as applied to a two-dimensional image of an object.
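For a Lambertian surface illuminated from a single direction, the image irradiance described above reduces to the dot product of the unit surface normal with the unit illumination vector, I(x, y) = n(x, y) · L. The following sketch computes I(x, y) from a sampled height function z(x, y); the function name, the unit grid spacing, and the clamping of self-shadowed points to zero are assumptions of this illustration.

```python
import numpy as np

def irradiance_from_height(z, light, spacing=1.0):
    """Lambertian image irradiance I(x, y) = n(x, y) . L over the domain
    covered by the height function z(x, y), sampled on the pixel grid.
    The normal at each sample is n = (-dz/dx, -dz/dy, 1), normalized."""
    zy, zx = np.gradient(z, spacing)            # axis 0 varies in y, axis 1 in x
    n = np.stack([-zx, -zy, np.ones_like(z)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    L = np.array(light, dtype=float)
    L /= np.linalg.norm(L)
    return np.clip(n @ L, 0.0, 1.0)             # clamp self-shadowed points to zero
```

For example, a planar height field z(x, y) = x, tilted 45 degrees away from an overhead light L = (0, 0, 1), yields I ≈ 0.707 at every pixel.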
Summary of the Invention
The present invention provides improved methods and systems for generating a three- dimensional model of an object by shading.
One aspect of the invention provides improvements to the shape-by-shading (SBS) systems and methods described in commonly owned U.S. Patent No. 6,724,383.
Another aspect of the invention relates to particular shaping techniques, methods and algorithms that can be implemented in a shape-by-shading (SBS) modeler in accordance with the invention, and more particularly, methods and algorithms that advantageously exploit trust-region models and methods.
Brief Description of the Drawings

This invention is pointed out with particularity in the appended claims. The above and further advantages of this invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:
FIGS. 1-4 are a series of diagrams illustrating components of an exemplary digital processing environment in which aspects of the present invention can be deployed.
FIG. 5 depicts a computer graphics system for generating a three-dimensional model of an object by shading as applied by an operator or the like to a two-dimensional image of the object in the given state of its creation at any point in time, constructed in accordance with the invention.
FIGS. 6-10 are a series of diagrams that are useful in understanding the operations performed by the computer graphics system depicted in FIG. 5 in determining the updating of the model of an object by shading as applied to the two-dimensional image of the object in its given state of creation at any point in time.
FIGS. 11A and 11B show a flowchart depicting operations performed by the computer graphics system and operator in connection with the invention.
FIG. 12 shows a diagram illustrating data flow in an SBS system according to the present invention.
FIG. 13 shows a screenshot 350 of the SBS Modifier in 3ds max.
FIGS. 14 and 15 show pseudocode implementations of SBS techniques according to aspects of the invention.
FIGS. 16A and 16B show a table that provides a listing of mathematical notation used in the present description of the invention.
FIGS. 17-22 show a series of flowcharts of a generalized method and sub-methods according to various aspects of the present invention.
Detailed Description of the Invention
These and other aspects, embodiments, practices, implementations and examples of the invention are set forth in the following detailed description, which is divided into sections as follows:
I. Digital Processing Environment in Which Invention Can Be Implemented
II. SBS Modeler
III. Improvements to SBS Modeler
3.1 Introduction
3.2 SBS Shading and Shaping Process
3.3 Surface Handling Improvements
3.4 Additional Shaping Algorithm(s)
3.5 SBS C++ API
3.6 A Prototype: SBS Plug-in for 3ds max
3.7 Extensions to SBS
IV. Shaping Methods and Algorithms Implemented in the SBS Modeler
4.1 Introduction
4.2 Rasterization
4.3 Reduction of Function to Be Minimized
4.4 Trust-Region Newton-CG Method
4.5 Computation of the Trust-Region Model
4.6 Minimization of the Trust-Region Model
4.7 Convergence of Trust-Region Newton-CG Method
V. Flowcharts of Generalized Methods
Before describing particular examples and embodiments of the invention, the following is a discussion, to be read in connection with FIGS. 1-4, of underlying digital processing structures and environments in which the invention may be implemented and practiced.
Those skilled in the art will understand that the present invention can be utilized in the generation and synthesis of images, such as for display in a motion picture or other dynamic display. The techniques described herein can be practiced as part of a computer graphics system, in which a pixel value is generated for pixels in an image. The pixel value is representative of a point in a scene as recorded on an image plane of a simulated camera. The underlying computer graphics system can be configured to generate the pixel value for an image using a selected methodology, such as that of the present invention.
The following detailed description illustrates examples of methods, structures, systems, and computer software products in accordance with these techniques. It will be understood by those skilled in the art that the described methods and systems can be implemented in software, hardware, or a combination of software and hardware, using conventional computer apparatus such as a personal computer (PC) or equivalent device operating in accordance with (or emulating) a conventional operating system such as Microsoft Windows, Linux, or Unix, either in a standalone configuration or across a network. The various processing aspects and means described herein may therefore be implemented in the software and/or hardware elements of a properly configured digital processing device or network of devices. Processing may be performed sequentially or in parallel, and may be implemented using special purpose or reconfigurable hardware.
As an example, FIG. 1 attached hereto depicts an illustrative computer system 10 that can carry out such computer graphics processes. With reference to FIG. 1, the computer system 10 in one embodiment includes a processor module 11 and operator interface elements comprising operator input components, such as a keyboard 12A and/or a mouse 12B (or digitizing tablet or other analogous element(s)), generally identified as operator input elements 12, and an operator output element such as a video display device 13. The illustrative computer system 10 can be of a conventional stored-program computer architecture. The processor module 11 can include, for example, one or more processor, memory and mass storage devices, such as disk and/or tape storage elements (not separately shown), which perform processing and storage operations in connection with digital data provided thereto. The operator input element(s) 12 can be provided to permit an operator to input information for processing. The video display device 13 can be provided to display output information generated by the processor module 11 on a screen 14 to the operator, including data that the operator may input for processing, information that the operator may input to control processing, as well as information generated during processing. The processor module 11 can generate information for display by the video display device 13 using a so-called "graphical user interface" ("GUI"), in which information for various applications programs is displayed using various "windows." Although the computer system 10 is shown as comprising particular components, such as the keyboard 12A and mouse 12B for receiving input information from an operator, and a video display device 13 for displaying output information to the operator, it will be appreciated that the computer system 10 may include a variety of components in addition to or instead of those depicted in FIG.
1. In addition, the processor module 11 can include one or more network ports, generally identified by reference numeral 34, which are connected to communication links which connect the computer system 10 in a computer network. The network ports enable the computer system 10 to transmit information to, and receive information from, other computer systems and other devices in the network. In a typical network organized according to, for example, the client-server paradigm, certain computer systems in the network are designated as servers, which store data and programs (generally, "information") for processing by the other, client computer systems, thereby to enable the client computer systems to conveniently share the information. A client computer system which needs access to information maintained by a particular server will enable the server to download the information to it over the network. After processing the data, the client computer system may also return the processed data to the server for storage. In addition to computer systems (including the above-described servers and clients), a network may also include, for example, printers and facsimile devices, digital audio or video storage and distribution devices, and the like, which may be shared among the various computer systems connected in the network. The communication links interconnecting the computer systems in the network may, as is conventional, comprise any convenient information-carrying medium, including wires, optical fibers or other media for carrying signals among the computer systems.
Computer systems transfer information over the network by means of messages transferred over the communication links, with each message including information and an identifier identifying the device to receive the message. In addition to the computer system 10 shown in the drawings, methods, devices or software products in accordance with the present invention can operate on any of a wide range of conventional computing devices and systems, such as those depicted by way of example in FIG. 2 (e.g., network system 100), whether standalone, networked, portable or fixed, including conventional PCs 102, laptops 104, handheld or mobile computers 106, or across the Internet or other networks 108, which may in turn include servers 110 and storage 112.
In line with conventional computer software and hardware practice, a software application configured in accordance with the invention can operate within, e.g., a PC 102 like that shown in FIGS. 2 and 3, in which program instructions can be read from ROM or CD-ROM 116 (FIG. 3), magnetic disk or other storage 120, and loaded into RAM 114 for execution by CPU 118. Data can be input into the system via any known device or means, including a conventional keyboard, scanner, mouse, digitizing tablet, or other elements 103.
Those skilled in the art will understand that the method aspects of the invention described herein can be executed in hardware elements, such as an Application-Specific Integrated Circuit (ASIC) constructed specifically to carry out the processes described herein, using ASIC construction techniques known to ASIC manufacturers. Various forms of ASICs are available from many manufacturers, although currently available ASICs do not provide the functions described in this patent application. Such manufacturers include Intel Corporation and NVIDIA Corporation, both of Santa Clara, California. The actual semiconductor elements of a conventional ASIC or equivalent integrated circuit are not part of the present invention, and will not be discussed in detail herein.
Those skilled in the art will also understand that ASICs or other conventional integrated circuit or semiconductor elements can be implemented in such a manner, using the teachings of the present invention as described in greater detail herein, to carry out the methods of the present invention as discussed in greater detail below, and to implement a Shape-by-Shading Module 150 within processing system 102, as shown in FIG. 4. In accordance with the following described systems and techniques, the Shape-by-Shading Module 150 includes one or more of the following sub-modules: shading information input module 150a, model generator module 150b, and display output module 150c. The Shape-by-Shading Module 150 may also include other components described herein, generally depicted in box 150d as "tools/API/plug-ins." As further shown in FIG. 4, the output of the Shape-by-Shading Module 150 may be provided in a number of different forms, including displayable images, digitally updated geometric models, subdivision surfaces, and the like.
Those skilled in the art will also understand that method aspects of the present invention can be carried out within commercially available digital processing systems, such as workstations and personal computers (PCs), operating under the collective command of the workstation or
PC's operating system and a computer program product configured in accordance with the present invention. The term "computer program product" can encompass any set of computer-readable program instructions encoded on a computer readable medium. A computer readable medium can encompass any form of computer readable element, including, but not limited to, a computer hard disk, computer floppy disk, computer-readable flash drive, computer-readable RAM or
ROM element, or any other known means of encoding, storing or providing digital information, whether local to or remote from the workstation, PC or other digital processing device or system. Various forms of computer readable elements and media are well known in the computing arts, and their selection is left to the implementer. In each case, the invention is operable to enable a computer system to calculate a pixel value, and the pixel value can be used by hardware elements in the computer system, which can be conventional elements such as graphics cards or display controllers, to generate a display-controlling electronic output. Conventional graphics cards and display controllers are well known in the computing arts, are not necessarily part of the present invention, and their selection can be left to the implementer.
II. SBS Modeler
FIG. 5 depicts a computer graphics system 200 for generating a three-dimensional model of an object by shading as applied by an operator or the like to a two-dimensional image of the object in the given state of its creation at any point in time, constructed in accordance with the invention. With reference to FIG. 5, the computer graphics system includes a processor module 201, one or more operator input devices 202 and one or more display devices 203. The display device(s) 203 will typically comprise a frame buffer, video display terminal or the like, which will display information in textual and/or graphical form on a display screen to the operator. The operator input devices 202 for a computer graphics system 200 will typically include a pen 204 which is typically used in conjunction with a digitizing tablet 205, and a trackball or mouse device 206. Generally, the pen 204 and digitizing tablet can be used by the operator in several modes. In one mode, particularly useful in connection with the invention, the pen 204 and digitizing tablet are used to provide updated shading information to the computer graphics system. In other modes, the pen and digitizing tablet are used by the operator to input conventional computer graphics information, such as line drawings for, for example, surface trimming and other information, to the computer graphics system 200, thereby to enable the system 200 to perform conventional computer graphics operations. The trackball or mouse device 206 can be used to move a cursor or pointer over the screen to particular points in the image at which the operator can provide input with the pen and digitizing tablet. The computer graphics system 200 may also include a keyboard (not shown) which the operator can use to provide textual input to the system 200.
The processor module 201 generally includes a processor, which may be in the form of one or more microprocessors, a main memory, and will generally include a mass storage subsystem including one or more disk storage devices. The memory and disk storage devices will generally store data and programs (collectively, "information") to be processed by the processor, and will store processed data which has been generated by the processor. The processor module includes connections to the operator input device(s) 202 and the display device(s) 203, and will receive information input by the operator through the operator input device(s) 202, process the input information, and store the processed information in the memory and/or mass storage subsystem. In addition, the processor module can provide video display information, which can form part of the information obtained from the memory and disk storage device as well as processed data generated thereby, to the display device(s) for display to the operator. The processor module 201 may also include connections (not shown) to hardcopy output devices such as printers for facilitating the generation of hardcopy output, modems and/or network interfaces (also not shown) for connecting the system 200 to the public telephony system and/or in a computer network for facilitating the transfer of information, and the like.
The computer graphics system 200 generates from input provided by the operator, through the pen and digitizing tablet and the mouse, information defining the initial and subsequent shape of a three-dimensional object, which information may be used to generate a two-dimensional image of the corresponding object for display to the operator, thereby to generate a model of the object. The image displayed by the computer graphics system 200 represents the image of the object as illuminated from an illumination direction and as projected onto an image plane, with the object having a spatial position and rotational orientation relative to the illumination direction and the image plane and a scaling and/or zoom setting as selected by the operator. The initial model used in the model generation process may be one of a plurality of default models as provided by the computer graphics system itself, such as a model defining a hemispherical or ellipsoid shape. Alternatively, the initial model may be provided by the operator by providing an initial shading of at least one pixel of the image plane, using the pen 204 and digitizing tablet 205. If the initial model is provided by the operator, one of the pixels on the image plane is selected to provide a "reference" portion of the initial surface fragment for the object, the reference initial surface fragment portion having a selected spatial position, rotational orientation and height value with respect to the image plane, and the computer graphics system determines the initial model for the rest of the surface fragment (if any) in relation to shading (if any) applied to other pixels on the image plane. In one embodiment, the reference initial surface fragment portion is selected to be the portion of the surface fragment projected onto the first pixel on the image plane to which the operator applies shading.
In addition, in that embodiment, the reference initial surface fragment portion is determined to be parallel to the image plane, so that a vector normal to the reference initial surface fragment portion is orthogonal to the image plane and the reference initial surface fragment portion has a height value as selected by the operator. In any case, the computer graphics system will display the image of the initial model, the image defining the shading of the object associated with the initial model as illuminated from the particular illumination direction and projected onto the image plane.
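A default hemispherical initial model of the kind mentioned above can be realized as a height field over the image plane. The sketch below is illustrative only; the footprint radius and the zero height assigned outside the hemisphere are assumptions, as the present description does not fix the form of the default models.

```python
import numpy as np

def hemisphere_height_field(rows, cols, radius=None):
    """Height field z(x, y) of a hemisphere centered on the image plane,
    with zero height outside its circular footprint (an assumed convention)."""
    if radius is None:
        radius = min(rows, cols) / 2.0
    y, x = np.mgrid[0:rows, 0:cols]
    cx, cy = (cols - 1) / 2.0, (rows - 1) / 2.0
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return np.sqrt(np.clip(radius ** 2 - r2, 0.0, None))
```

The height is greatest at the center pixel (the apex of the hemisphere) and falls to zero at and beyond the footprint boundary.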
The operator, using the mouse and the pen and digitizing tablet, will provide updated shading of the image of the initial object, and/or extend the object by shading neighboring areas on the image plane, and the computer graphics system 200 will generate an updated model representing the shape of the object based on the updated shading provided by the operator. In updating the shading, the operator can increase or decrease the amount of shading applied to particular points on the image plane. In addition, the operator, using the mouse or trackball and the pen and digitizing tablet, can perform conventional computer graphics operations in connection with the image, such as trimming of the surface representation of the object defined by the model. The computer graphics system 200 can use the updated shading and other computer graphic information provided by the operator to generate the updated model defining the shape of the object, and further generate from the updated model a two-dimensional image for display to the operator, from respective spatial position(s), rotational orientation(s) and scaling and/or zoom settings as selected by the operator. If the operator determines that the shape of the object as represented by the updated model is satisfactory, he or she can enable the computer graphics system 200 to store the updated model as defining the shape of the final object. On the other hand, if the operator determines that the shape of the object as represented by the updated model is not satisfactory, he or she can cooperate with the computer graphics system 200 to further update the shading and other computer graphic information, in the process using three-dimensional rotation and translation and scaling or zooming as needed.
As the shading and other computer graphic information is updated, the computer graphics system 200 updates the model information, which is again used to provide a two-dimensional image of the object, from rotational orientations, translation or spatial position settings, and scale and/or zoom settings as selected by the operator. These operations can continue until the operator determines that the shape of the object is satisfactory, at which point the computer graphics system 200 will store the updated model information as representing the final object.
The detailed operations performed by the computer graphics system 200 in determining the shape of an object will be described in connection with FIGS. 6-11. With reference to FIG. 6, in the operations of the computer graphics system 200, it is assumed that the image of the object is projected onto a two-dimensional image plane 220 that is tessellated into pixels 221(i, j) having a predetermined number of rows and columns. The image plane 220 defines an x, y Cartesian plane, with rows extending in the x direction and columns extending in the y direction. The projection of the surface of the object that is to be formed, which is identified in FIG. 6 by reference numeral 222, is orthographic, with the direction of the camera's "eye" being in the z direction, orthogonal to the x, y image plane. Each point on the image plane corresponds to a picture element, or "pixel," represented herein by σ(i, j), with i ∈ [1, N] and j ∈ [1, M], where N is the maximum number of columns (index i ranging over the columns in the image plane) and M is the maximum number of rows (index j ranging over the rows in the image plane). In the illustrative image plane 220 depicted in FIG. 6, the number of columns N is eight, and the number of rows M is nine. If the display device(s) 203 which are used to depict the image plane 220 to the operator are raster-scan devices, the rows may correspond to scan lines used by the device(s) to display the image. Each pixel σ(i, j) corresponds to a particular point (x_i, y_j) of the coordinate system, and M × N identifies the resolution of the image. In addition, the computer graphics system 200 assumes that the object is illuminated by a light source having a direction L = (x_L, y_L, z_L), where L is a vector, and that the surface of the object is Lambertian. The implicit camera, whose image plane is represented by the image plane 220, is assumed to view the image plane 220 from a direction that is orthogonal to the image plane 220, as is represented by the arrow with the label "CAMERA." As noted above, the computer graphics system 200 initializes the object with at least an infinitesimally small portion of the object to be modeled as the initial model. For each pixel σ(i, j), the height value z(x, y) defining the height of the portion of the object projected onto the pixel is known, and is defined as a height field H(x, y) as follows:
H(x, y) = { z(x, y) : ∀(x, y) ∈ Ω }        (2.01)
where ∀(x, y) ∈ Ω refers to "for all points (x, y) in the domain Ω," with the domain Ω referring to the image plane 220. Furthermore, for each pixel σ(i, j), the normal n(x, y) of the portion of the surface of the basic initial object projected thereon is also known and is defined as a normal field N(x, y) as follows:
N(x, y) = { n(x, y) : ∀z(x, y) ∈ H(x, y) }        (2.02)
In FIG. 6, the normal associated with the surface 222 of the object projected onto one of the pixels of the image plane 220 is represented by the arrow labeled "n."
After the computer graphics system 200 displays the image representing the object defined by the initial model, which is displayed to the operator on the display 203 as the image on image plane 220, the operator can begin to modify the image by updating the shading of the image using the pen 204 and digitizing tablet 205 (FIG. 5). It will be appreciated that the image of the initial model as displayed by the computer graphics system will itself be shaded to represent the shape of the object as defined by the initial model, as illuminated from the predetermined illumination direction and as projected onto the image plane. Each pixel σ(i, j) on the image plane will have an associated intensity value I(x, y) (which is also referred to herein as a "pixel value"), which represents the relative brightness of the image at the pixel σ(i, j), and which, inversely, represents the relative shading of the pixel. If the initial pixel value for each pixel σ(i, j) is given by I_B(x, y), which represents the image intensity value or brightness of the respective pixel σ(i, j) at location (x, y) on the image plane 220, and the pixel value after shading is represented by I_S(x, y), then the operator preferably updates the shading for the image such that, for each pixel
| I_S(x, y) − I_B(x, y) | ≤ ε_I        (2.03)
where ε_I (ε_I > 0) is a predetermined bound value selected so that, if Equation (2.03) is satisfied for each pixel, the shape of the object can be updated based on the shading provided by the operator. After the operator updates the shading for a pixel, the computer graphics system 200 will perform two general operations in generation of the updated shape for the object. In particular, the computer graphics system 200 will
(i) first determine, for each pixel σ(i, j) whose shading is updated, a respective new normal vector n_1(x, y); and
(ii) after generating an updated normal vector n_1(x, y), determine a new height value z(x, y).
The computer graphics system 200 will perform these operations (i) and (ii) for each pixel σ(i, j) whose shading is updated, as the shading is updated, thereby to provide a new normal vector field N(x, y) and height field H(x, y). Operations performed by the computer graphics system 200 in connection with updating of the normal vector n_1 (item (i) above) for a pixel σ(i, j) will be described in connection with FIGS. 7 and 8, and operations performed in connection with updating of the height value z(x, y) (item (ii) above) for the pixel σ(i, j) will be described in connection with FIGS. 9 and 10.
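The bound of Equation (2.03) can be verified directly before operations (i) and (ii) are applied. A minimal sketch follows; the function name and the default value of the bound are assumptions of this illustration.

```python
import numpy as np

def shading_within_bound(i_before, i_after, eps_i=0.1):
    """Return True if the operator's updated shading satisfies Equation (2.03),
    |I_S(x, y) - I_B(x, y)| <= eps_I, at every pixel of the image plane."""
    i_before = np.asarray(i_before, dtype=float)
    i_after = np.asarray(i_after, dtype=float)
    return bool(np.all(np.abs(i_after - i_before) <= eps_i))
```

When the check fails at any pixel, the shape update would not proceed from that shading alone.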
With reference initially to FIG. 7, that figure depicts a portion of the object, identified by reference numeral 230, after a pixel's shading has been updated by the operator. In the following, it will be assumed that the updated normal vector, identified by the arrow identified by legend "n1", for a point z(x,y) on the surface of the object 230, is to be determined. The normal vector identified by legend "n0" represents the normal to the surface prior to the updating. The illumination direction is represented by the line extending from the arrow identified by legend "L". "L" specifically represents an illumination vector whose direction is based on the direction of illumination from the light source illuminating the object, and whose magnitude represents the magnitude of the illumination on the object provided by the light source. In that case, based on the updating, the set of possible new normal vectors lies on the surface of the cone 231, which is defined by:

n1 · L = I (2.04)

that is, the set of vectors for which the dot product with the illumination vector corresponds to the pixel value I for the pixel after the updating of the shading as provided by the operator. In addition, since the normal vector n1 is, as is the case with all normal vectors, normalized to have a predetermined magnitude value, preferably the value one, the updated normal vector has a magnitude corresponding to:
||n1|| = 1 (2.05)
where ||n1|| refers to the magnitude of the updated normal vector n1.
Equations (2.04) and (2.05) define a set of vectors, and the magnitudes of the respective vectors, one of which is the updated normal vector for the updated object at point z(x,y). The computer graphics system 200 will select one of the vectors from the set as the appropriate updated normal vector n1 as follows. As noted above, the updated normal vector will lie on the surface of cone 231. It is apparent that, if the original normal vector n0 and the illumination vector L are not parallel, then they (that is, the prior normal vector n0 and the illumination vector L) will define a plane. This follows since the point z(x,y) at which the illumination vector L impinges on the object 230, and the origin of the normal vector n0 on object 230, is the same point, and the tail of the illumination vector and the head of the prior normal vector n0 will provide the two additional points which, with the point z(x,y), suffice to define a plane. Thus, if a plane, which is identified by reference numeral 232, is constructed on which both the illumination vector L and the prior normal vector n0 lie, that plane 232 will intersect the cone 231 along two lines, which are represented by lines 233 in FIG. 7. One of the lines 233 lies on the surface of the cone 231 which is on the side of the illumination vector L toward the prior normal vector n0, and the other line 233 lies on the surface of the cone 231 which is on the side of the illumination vector L away from the normal vector n0, and the correct updated normal vector n1 is defined by the line on the cone 231 which is on the side of the illumination vector L toward the prior normal vector n0.
Based on these observations, the direction of the updated normal vector can be determined from Equation (2.04) and the following. Since the prior normal vector n0 and the illumination vector L form a plane 232, their cross product, n0 × L, defines a vector that is normal to the plane 232. Thus, since the updated normal vector n1 also lies in the plane 232, the dot product of the updated normal vector n1 with the vector defined by the cross product between the prior normal vector n0 and the illumination vector L has the value zero, that is:
n1 · (n0 × L) = 0 (2.06)
In addition, since the difference between the pixel values I0 and I1 provided by the prior shading and the updated shading is bounded by ε1 (Equation (2.03) above), the angle θ between the prior normal vector n0 and the updated normal vector n1 is also bounded by some maximum positive value θ1. As a result, Equation (2.06) can be re-written as
|n1 · (n0 × L)| ≤ ε2 (2.07)
This is illustrated diagrammatically in FIG. 8. FIG. 8 depicts a portion of the cone 231 depicted in FIG. 7, the updated normal vector n1, and a region, identified by reference numeral 234, that represents the maximum angle θ1 from the prior normal vector in which the updated normal vector n1 is constrained to lie.
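The selection of the updated normal described above (Equations (2.04) through (2.07)) can be sketched as follows. This is an illustrative reading, not the patented implementation: it assumes a unit illumination vector, an intensity in [0, 1], and a prior normal that is not parallel to L, and all function names are hypothetical:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def updated_normal(n0, light, intensity):
    """Choose the unit vector n1 on the cone n1 . L = intensity
    (Equations (2.04), (2.05)) that lies in the plane of n0 and L
    (Equation (2.06)), on the side of L toward the prior normal n0."""
    lh = normalize(light)
    # Direction in the n0-L plane, perpendicular to L, pointing toward n0;
    # undefined (as in the text) when n0 and L are parallel.
    m = normalize(tuple(a - dot(n0, lh) * b for a, b in zip(n0, lh)))
    s = math.sqrt(max(0.0, 1.0 - intensity * intensity))
    return tuple(intensity * a + s * b for a, b in zip(lh, m))
```

Darkening a pixel (lowering the intensity) tilts the normal further away from the light while keeping it on the side of the cone nearest the prior normal, which is exactly the line 233 the text selects.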
The computer graphics system 200 (FIG. 5) will generate an updated normal vector n1 for each pixel (i,j) in the image plane 220 based on the shading provided by the operator, thereby to generate an updated vector field N(x,y). After the computer graphics system 200 has generated the updated normal vector for a pixel, it can generate a new height value h(x,y) for that pixel, thereby to update the height field H(x,y) based on the updated shading. Operations performed by the computer graphics system 200 in connection with updating the height value h(x,y) will be described in connection with FIGS. 9 and 10. FIG. 9 depicts an illustrative updated shading for the image plane 220 depicted in FIG. 6. For the image plane 220 depicted in FIG. 9, the pixels (i,j) have been provided with coordinates, with the rows being identified by numbers in the range from 1 through K, inclusive, and the columns being identified by letters in the range A through I, inclusive. As shown in FIG. 9, in the updated shading, the shading of a number of pixels, including pixel (D,4), has been modified, and the computer graphics system 200 is to generate an updated height value h(x,y) for each such pixel for use as the updated height value for the pixel in the updated height field H(x,y). To accomplish that, the computer graphics system 200 performs several operations, which will be described below, to generate a height value for each pixel (i,j) whose shading has been modified along a vertical direction, a horizontal direction, and two diagonal directions, and generates the final height value for the pixel as the average of the four height values (that is, the height values along the vertical, horizontal, and two diagonal directions).
The operations performed by the computer graphics system 200 in generating an updated height value will be described in connection with one of the modified pixels in the image plane 220, namely, pixel (D,4), along one of the directions, namely, the horizontal direction. Operations performed in connection with the other directions, and the other pixels whose shading is updated, will be apparent to those skilled in the art. In generating an updated height value, the computer graphics system 200 makes use of Bezier-Bernstein interpolation, which defines a curve P(t) of degree n as
P(t) = Σ (i = 0 to n) Bi C(n,i) t^i (1-t)^(n-i) (2.08)
where t is a numerical parameter on the interval between 0 and 1, inclusive, C(n,i) is the binomial coefficient, and vectors Bi (defined by components (bix, biy, biz)) define n+1 control points for the curve P(t), with control points B0 and Bn comprising the endpoints of the curve. The tangents of the curve P(t) at the endpoints correspond to the vectors B0B1 and Bn-1Bn. In one embodiment, the computer graphics system 200 uses a cubic Bezier-Bernstein interpolation

P0,3(t) = B0(1-t)^3 + 3B1t(1-t)^2 + 3B2t^2(1-t) + B3t^3 (2.09)

to generate the updated height value. The points B0, B1, B2 and B3 are control points for the cubic curve P0,3(t). Equation (2.09), as applied to the determination of the updated height value h1 for the pixel (D,4), corresponds to

h1 = ha(1-t)^3 + 3B1t(1-t)^2 + 3B2t^2(1-t) + hbt^3 (2.10)
It will be appreciated from Equation (2.10) that, for t = 0, the updated height value h1 for pixel (D,4) corresponds to ha, which is the height value for pixel (C,4), and, for t = 1, the updated height value h1 for pixel (D,4) corresponds to hb, which is the height value for pixel (E,4). On the other hand, for t having a value other than 0 or 1, the updated height value h1 is a function of the height values ha and hb of the pixels (C,4) and (E,4) and the height values for control points B1 and B2.
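Equation (2.10) can be evaluated directly. The following sketch (hypothetical names, scalar heights for brevity) confirms the endpoint behavior just described:

```python
def cubic_bezier_height(ha, b1, b2, hb, t):
    """Equation (2.10): blend the neighboring heights ha and hb with
    the control heights b1 and b2 using cubic Bernstein weights."""
    u = 1.0 - t
    return ha * u ** 3 + 3.0 * b1 * t * u ** 2 + 3.0 * b2 * t ** 2 * u + hb * t ** 3
```

At t = 0 the result is ha, at t = 1 it is hb; in between, the control heights shape the transition between the neighboring pixel heights.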
As noted above, for an n degree curve P(t), the tangents at the endpoints B0 and Bn correspond to the vectors B0B1 and Bn-1Bn. Thus, for the curve P0,3(t) shown in FIG. 10, the vector B0B1 that is defined by endpoint B0 and adjacent control point B1 is tangent to the curve P0,3(t) at endpoint B0, and the vector B2B3 defined by endpoint B3 and adjacent control point B2 is tangent to the curve at endpoint B3. Accordingly, the vector B0B1 is orthogonal to the normal vector na at pixel (C,4) and the vector B2B3 is orthogonal to the normal vector nb at pixel (E,4). Thus,
0 = (B1 - B0) · na and 0 = (B2 - B3) · nb (2.11)
which leads to
0 = (B1 - ha) · na and 0 = (B2 - hb) · nb (2.12)
For the determination of the updated height value h1 for the horizontal direction (see FIG. 9), Equation (2.10), which is in vector form, gives rise to the following equations for each of the dimensions x and z (the z dimension being orthogonal to the image plane):
h1x = hax(1-t)^3 + 3b1xt(1-t)^2 + 3b2xt^2(1-t) + hbxt^3 (2.13)
and
h1z = haz(1-t)^3 + 3b1zt(1-t)^2 + 3b2zt^2(1-t) + hbzt^3 (2.14)
where the x and z subscripts in Equations (2.13) and (2.14) indicate the respective x and z components for the respective vectors in Equation (2.10). It will be appreciated that, for Equations (2.13) and (2.14), only the value of the z component, h1z, of the height value is unknown; the value of the x component, h1x, will be a function of the position of the pixel whose height value is being determined, in this case pixel (D,4). In addition, Equation (2.12) gives rise to the following two equations
0 = (b1x - hax)nax + (b1y - hay)nay + (b1z - haz)naz (2.15)
and
0 = (b2x - hbx)nbx + (b2y - hby)nby + (b2z - hbz)nbz (2.16)

where the subscripts x, y and z in Equations (2.15) and (2.16) indicate the respective x, y and z components for the respective vectors in Equation (2.12). In addition, as noted above, there is a further constraint on the curve P0,3(t), in particular the constraint that the updated normal n1 be normal to the curve at the point corresponding to pixel (D,4). If the vector B02B13 in FIG. 10 is tangent to the curve at the point corresponding to pixel (D,4), then the point h1, whose z component corresponds to the updated height value, also lies on the vector B02B13. Thus,
0 = (B02 - h1) · n1 (2.17)

and

0 = (B13 - h1) · n1 (2.18)
Based on the convex combination depicted in FIG. 10,

B02 = B01(1-t) + tB12 (2.19)

and

B13 = B12(1-t) + tB23 (2.20)

where B01 = B0 + t(B1 - B0), B12 = B1 + t(B2 - B1) and B23 = B2 + t(B3 - B2), which lead to

B02 = B0 + t(B1 - B0) + t[B1 + t(B2 - B1) - B0 - t(B1 - B0)] (2.21)

and

B13 = B1 + t(B2 - B1) + t[B2 + t(B3 - B2) - B1 - t(B2 - B1)] (2.22)
Combining Equations (2.17), (2.19) and (2.21),
0 = (ha(1-t)^2 + 2B1t(1-t) + B2t^2 - h1) · n1 (2.23)
which leads to
0 = (hax(1-t)^2 + 2b1xt(1-t) + b2xt^2 - h1x)n1x and 0 = (haz(1-t)^2 + 2b1zt(1-t) + b2zt^2 - h1z)n1z (2.24)
for the x and z components of the respective vectors. Similarly, for Equations (2.18), (2.20) and (2.22),
0 = (b1x(1-t)^2 + 2b2xt(1-t) + hbxt^2 - h1x)n1x and 0 = (b1z(1-t)^2 + 2b2zt(1-t) + hbzt^2 - h1z)n1z (2.25)
for the x and z components of the respective vectors.
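The repeated convex combinations of Equations (2.19) through (2.22) are the de Casteljau construction. A small sketch (hypothetical names; 2D points for brevity) shows that the point P(t) itself lies on the segment joining the two second-level points, which is why that segment carries the curve's tangent:

```python
def lerp(p, q, t):
    """Convex combination (1 - t) p + t q, componentwise."""
    return tuple((1.0 - t) * a + t * b for a, b in zip(p, q))

def second_level_points(b0, b1, b2, b3, t):
    """Equations (2.19)-(2.22): first-level points B01, B12, B23, then
    the second-level points B02 and B13 whose segment is tangent at P(t)."""
    b01, b12, b23 = lerp(b0, b1, t), lerp(b1, b2, t), lerp(b2, b3, t)
    return lerp(b01, b12, t), lerp(b12, b23, t)

def bernstein_point(b0, b1, b2, b3, t):
    """Equation (2.09): direct cubic Bezier-Bernstein evaluation."""
    u = 1.0 - t
    return tuple(
        p * u ** 3 + 3 * q * t * u ** 2 + 3 * r * t ** 2 * u + s * t ** 3
        for p, q, r, s in zip(b0, b1, b2, b3)
    )
```

For any t, lerp(B02, B13, t) reproduces the Bernstein evaluation of Equation (2.09), which is the geometric fact the tangency constraint relies on.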
It will be appreciated that the eight Equations (2.13) through (2.16), (2.24) and (2.25) are all one-dimensional in the respective x and z components. For the Equations (2.13) through (2.16), (2.24) and (2.25), there are six unknown values, namely, the value of parameter t, the values of the x and z components of the vector B1 (that is, values b1x and b1z), the x and z components of the vector B2 (that is, values b2x and b2z), and the z component of the vector h1 (that is, value h1z) corresponding to the point P0,3(t) for the pixel (D,4). The eight equations (2.13) through (2.16), (2.24) and (2.25) thus form a system of equations which will suffice to allow the values for the unknowns to be determined by methodologies which will be apparent to those skilled in the art. The computer graphics system 200 will, in addition to performing the operations described above in connection with the horizontal direction (corresponding to the x coordinate axis), also perform corresponding operations similar to those described above for each of the vertical and two diagonal directions to determine the updated height vector h1 for the pixel (D,4). After the computer graphics system 200 determines the updated height vectors for all four directions, it will average them together. The z component of the average of the updated height vectors corresponds to the height value for the updated model for the object.
The operations performed by the computer graphics system 200 will be described in connection with the flowchart in FIGS. 11A and 11B. Generally, it is anticipated that the operator will have a mental image of the object that is to be modeled by the computer graphics system. With reference to FIGS. 11A and 11B, the initial model for the object is determined (step 250), and the computer graphics system displays a two-dimensional image thereof to the operator based on a predetermined illumination direction, with the display direction corresponding to an image plane (reference image plane 220 depicted in FIG. 6) (step 251). As noted above, the initial model may define a predetermined default shape, such as a hemisphere or ellipsoid, provided by the computer graphics system, or alternatively a shape as provided by the operator. In any case, the shape will define an initial normal vector field N(x,y) and height field H(x,y), defining a normal vector and height value for each pixel in the image. After the computer graphics system 200 has displayed the initial model, the operator can select one of a plurality of operating modes, including a shading mode in connection with the invention, as well as one of a plurality of conventional computer graphics modes, such as erasure and trimming (step 252). If the operator selects the shading mode, the operator will update the shading of the two-dimensional image by means of, for example, the system's pen and digitizing tablet (step 253). While the operator is applying shading to the image in step 253, the computer graphics system 200 can display the shading to the operator. The shading that is applied by the operator will preferably be a representation of the shading of the finished object as it would appear illuminated from the predetermined illumination direction, and as projected onto the image plane as displayed by the computer graphics system 200.
When the operator has updated the shading for a pixel in step 253, the computer graphics system 200 will generate an update to the model of the object. In generating the updated model, the computer graphics system 200 will first determine, for each pixel in the image, an updated normal vector, as described above in connection with FIGS. 7 and 8, thereby to provide an updated normal vector field for the object (step 254). Thereafter, the computer graphics system 200 will determine, for each pixel in the image, an updated height value, as described above in connection with FIGS. 9 and 10, thereby to provide an updated height field for the object (step 255).
After generating the updated normal vector field and updated height field, thereby to provide an updated model of the object, the computer graphics system 200 will display an image of the updated model to the operator from one or more directions and zooms as selected by the operator (step 256), in the process rotating, translating and scaling and/or zooming the image as selected by the operator (step 257). If the operator determines that the updated model is satisfactory (step 258), which may occur if, for example, the updated model corresponds to his or her mental image of the object to be modeled, he or she can enable the computer graphics system 200 to save the updated model as the final model of the object (step 259). On the other hand, if the operator determines in step 258 that the updated model is not satisfactory, he or she can enable the computer graphics system 200 to return to step 251.
Returning to step 252, if the operator in that step selects another operating mode, such as the erasure mode or a conventional operational mode such as the trimming mode, the computer graphics system will sequence to step 260 to update the model based on the erasure information, or the trimming and other conventional computer graphic information, provided to the computer graphics system 200 by the operator. The computer graphics system will sequence to step 257 to display an image of the object based on the updated model. If the operator determines that the updated model is satisfactory (step 258), he or she can enable the computer graphics system 200 to save the updated model as the final model of the object (step 259). On the other hand, if the operator determines in step 258 that the updated model is not satisfactory, he or she can enable the computer graphics system 200 to return to step 251.
The operator can enable the computer graphics system 200 to perform steps 251, 253 through 257 and 260 as the operator updates the shading of the image of the object (step 253) or provides other computer graphic information (step 260), and the computer graphics system 200 will generate, in steps 254 and 255, the updated normal vector field and updated height field, or, in step 260, conventional computer graphic components, thereby to define the updated model of the object. When the operator determines in step 258 that the updated model corresponds to his or her mental image of the object, or is otherwise satisfactory, he or she can enable the computer graphics system 200 to store the updated normal vector field and the updated height field to define the final model for the object (step 259).

The invention provides a number of advantages. In particular, it provides an interactive computer graphics system which allows an operator, such as an artist, to imagine the desired shape of an object and how the shading on the object might appear with the object being illuminated from a particular illumination direction and as viewed from a particular viewing direction (as defined by the location of the image plane). After the operator has provided some shading input corresponding to the desired shape, the computer graphics system displays a model of the object, as updated based on the shading, to the operator. The operator can accept the model as the final object, or alternatively can update the shading further, from which the computer graphics system will further update the model of the object. The computer graphics system constructed in accordance with the invention avoids the necessity of solving partial differential equations, which is required in prior-art systems that operate in accordance with the shape-from-shading methodology.
A further advantage of the invention is that it readily facilitates the use of a hierarchical representation for the model of the object that is generated. Thus, if, for example, the operator enables the computer graphics system 200 to increase the scale of the object or zoom in on the object, thereby to provide a higher resolution, it will be appreciated that a plurality of pixels of the image will display a portion of the image which, at the lower resolution, was associated with a single pixel. In that case, if the operator updates the shading of the image at the higher resolution, the computer graphics system will generate the normal vector and height value for each pixel at the higher resolution for which the shading is updated, as described above, thereby to generate and/or update the portion of the model associated with the updated shading at the increased resolution. The updated portion of the model at the higher resolution will be associated with the particular portion of the model which was previously defined at the lower resolution, thereby to provide the hierarchical representation, which may be stored. Thus, the object as defined by the model inherits a level of detail which corresponds to a higher resolution in the underlying surface representation. Corresponding operations can be performed if the operator enables the computer graphics system 200 to decrease the scale of the object or zoom out from the object, thereby providing a lower resolution.
It will be appreciated that a number of variations and modifications may be made to the computer graphics system 200 as described above in connection with FIGS. 5-11. For example, the computer graphics system 200 can retain the object model information, that is, the normal vector field information and height field information, for a number of updates of the shading as provided by the operator, which it (that is, system 200) may use in displaying models of the object for the respective updates. This can allow the operator to view images of the respective models to, for example, enable him or her to see the evolution of the object through the respective updates. In addition, this can allow the operator to return to a model from a prior update as the base which is to be updated. This will allow the operator, for example, to generate a tree of objects based on different shapings at particular models.

In addition, although the computer graphics system 200 has been described as making use of Bezier-Bernstein interpolation to determine the updated height field h(x,y), it will be appreciated that other forms of interpolation, such as Taylor polynomials and B-splines, may be used. In addition, multiple forms of surface representations may be used with the invention. Indeed, since the model generation methodology used by the computer graphics system 200 is of general applicability, all free-form surface representations as well as piecewise linear surfaces consisting of, for example, triangles, quadrilaterals and/or pentagons can be used. Furthermore, although the computer graphics system 200 has been described as making use of an orthogonal projection and a single light source, it will be appreciated that other forms of projection, including perspective projection, and multiple light sources can be used.
In addition, although the computer graphics system 200 has been described as providing the shape of an object by shading of an image of the object, it will be appreciated that it may also provide computer graphics operations, such as trimming and erasure, through appropriate operational modes of the pen 204 and digitizing tablet 205.
Furthermore, although the computer graphics system has been described as generating a model of an object on the assumption that the object's surface is Lambertian, it will be appreciated that other surface treatments may be used for the object when an image of the object is rendered.

It will be appreciated that a system in accordance with the invention can be constructed in whole or in part from special-purpose hardware or a general-purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program. Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner. In addition, it will be appreciated that the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.
III. Improvements to the SBS Modeler
3.1 Introduction
The above-described systems and techniques have undergone significant development. Section 3.2 sets forth a short summary of the SBS shading and shaping process. Sections 3.3 through 3.7 describe specific extensions and other improvements to the SBS shading and shaping process.
3.2 The SBS Shading and Shaping Process
FIG. 12 shows a flow diagram illustrating the SBS shading and shaping cycle 300.

Step 301: Hierarchical subdivision surfaces, polygon meshes and Non-Uniform Rational B-Spline (NURBS) surfaces are supported as input surfaces to Shape-by-Shading (SBS). The internal algorithms of SBS use properties of hierarchical subdivision surfaces, so each of the latter two types is converted to a subdivision surface before shading begins.
Step 302: Once a subdivision surface is in place and displayed to the user, it is matched to a 2D model view, including information about grid corners, grid width and height, pixel size and camera to object transformation.
Steps 303-305: Using the 2D model view, the user sets a lighting direction, tunes input parameters, and shades, i.e., modifies the intensities of selected pixels, or loads a set of pre-shaded pixels. This information is passed to the shaping algorithm 306.

Steps 306-309: The shaping algorithm 306 determines the correct geometric alterations to make to the surface. More surface primitives are added where needed via subdivision in the area of the shading in order to ensure that sufficient detail is present (step 307). A height field is found that reflects in 3D the changes that were requested in the 2D setting (step 308), and the subdivision surface is then altered so that it reflects these heights (step 309). The result is a shaped hierarchical subdivision surface that can be altered further (steps 302-309), saved (step 310), or converted to the desired output surface type.
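The cycle can be summarized in a short sketch. Everything below is an illustrative stub with hypothetical names, not the actual SBS code; it only mirrors the flow of steps 301 through 310:

```python
def to_subdivision_surface(mesh):
    # Step 301: polygon meshes and NURBS surfaces are converted to a
    # hierarchical subdivision surface before shading begins.
    return {"type": "subdivision", "source": mesh, "heights": {}}

def shaping_cycle(mesh, shaded_pixels, height_from_intensity):
    surface = to_subdivision_surface(mesh)            # step 301
    # Steps 302-305 (model view, lighting, shading input) are assumed
    # done; shaded_pixels maps modified pixels to their new intensities.
    for pixel, intensity in shaded_pixels.items():    # steps 306-309
        surface["heights"][pixel] = height_from_intensity(intensity)
    return surface                                    # step 310

# Toy rule: pixels brighter than mid-gray are raised, darker ones lowered.
shaped = shaping_cycle("quad_mesh",
                       {(3, 4): 0.9, (3, 5): 0.2},
                       lambda i: i - 0.5)
```

The real shaping algorithm, of course, replaces the toy height rule with the subdivision, height-field and surface-alteration steps described above.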
3.3 Surface Handling Improvements
The SBS systems and techniques described in Section II above are designed for processing NURBS surfaces. The presently described systems and techniques extend SBS to accept any hierarchical subdivision surface, polygon mesh, or NURBS surface. The incoming mesh is converted to a hierarchical subdivision surface if it is not already one, and the resulting subdivision surface is the one on which the SBS shading and shaping cycle is performed. Adaptive subdivision is used to add detail to the surface, and analysis and synthesis are used to propagate changes to all levels of the surface, allowing for modifications at specified levels of detail. The Hierarchical Subdivision Surface (HSDS) library of mental images® provides all subdivision support needed. Features of the HSDS library are set forth in patents owned by the owner of the present patent application. The subdivision surface model that results from the SBS process can be converted to another surface type if desired. In this way, SBS both allows for flexibility in choosing incoming and outgoing mesh types and takes advantage of hierarchical subdivision properties in its algorithms.
3.4 Additional Shaping Algorithm
The surface of interest, which is assumed to be continuous, is projected orthographically onto the viewing plane. This projection has an associated height field, whose intensities are determined by one light source with a Lambertian reflectance map, so that the discrete intensity I at the point (u,v) in the model view of a given surface with height field H is defined by:
I(u,v) = N(u,v) · L (3.01)
where N(u,v) is a discrete normal to the surface and L is a unit vector that points in the direction of the light source, which is infinitely far away.
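Equation (3.01) can be illustrated with a small sketch that builds a discrete normal from a height field by forward differences. The differencing scheme and function names are assumptions made for illustration; the actual discretization used by SBS is not specified here:

```python
import math

def discrete_intensity(H, u, v, light):
    """Equation (3.01): Lambertian intensity N(u, v) . L, with the
    discrete normal of z = H[u][v] taken as (-dH/du, -dH/dv, 1),
    normalized; `light` is assumed to be a unit vector."""
    dz_du = H[u + 1][v] - H[u][v]
    dz_dv = H[u][v + 1] - H[u][v]
    n = (-dz_du, -dz_dv, 1.0)
    norm = math.sqrt(sum(c * c for c in n))
    return sum(a * b for a, b in zip(n, light)) / norm
```

A flat patch lit head-on has intensity 1; tilting the patch by one unit of height per pixel drops the intensity to 1/sqrt(2), about 0.707.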
The intensities of selected pixels on the projected surface are changed by means of shading or loading a pre-defined set of pixels. The SBS shaping algorithm finds a shape determined by the shading. One solution method involves Bezier-Bernstein polynomials. A new technique for interpreting 2D shading in the model view as 3D shape on the surface is now implemented in SBS and is described in the remainder of this section.
The technique described herein produces a set of height increments F over the model view that minimizes the following:
f(F) = Σ (|C(H+F)(u,v) - C(H)(u,v)|^2 + λF(u,v)^2) (3.02)

where C(H) denotes the curvature of a surface with associated height field H, λ is a constant called the smoothing coefficient and the sum is performed over pixels (u,v) in the model view that intersect the interior of the projected surface.
Let P be the set of pixels in the model view whose intensities have been modified. This set is called the set of modified pixels. It is possible, through a series of simplifications, to reduce the set of pixels over which Function (3.02) is summed to

Q = B(P) ∩ S (3.02.1)

where B(P) denotes a neighborhood of the modified pixels P and S denotes the set of pixels that intersect the interior of the projected surface.
In addition, the set of height increments F can be reduced to a vector x containing one entry for each pixel or connected area whose corresponding height value may be altered by the algorithm. The size of F matches that of the height field of the projected surface. The vector x reduces the size based on the number of unique non-zero height increments in F, a potentially much smaller set. Function (3.02) can then be reduced to the unconstrained minimization of
f(x) = Σ over (u,v) in Q (|C(H+F(x))(u,v) - C(H)(u,v)|^2 + λF(x)(u,v)^2) (3.03)
The reduced function is less computationally intensive to minimize, both because it is only necessary to sum over a neighborhood of the modified pixels instead of all pixels in the model view and because the dimension of the minimization problem is reduced from the size of F to the length of x.
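The reduction to the set Q of Equation (3.02.1) can be sketched as a dilation of the modified-pixel set intersected with the surface pixels. The neighborhood radius and function names are hypothetical; the exact neighborhood used by SBS is not specified here:

```python
def reduced_pixel_set(modified, surface_pixels, radius=1):
    """Q = B(P) intersect S: grow the modified set P by `radius` pixels
    in each direction, then keep only pixels on the projected surface."""
    dilated = set()
    for (u, v) in modified:
        for du in range(-radius, radius + 1):
            for dv in range(-radius, radius + 1):
                dilated.add((u + du, v + dv))
    return dilated & set(surface_pixels)
```

Summing Function (3.03) over Q instead of over the whole model view is what makes the minimization tractable for interactive use.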
The method used to minimize Function (3.03) is the Trust-Region method. First, the function is modeled by the quadratic function
F(y) = f(x) + ∇f · y + (1/2) y · ∇²f y (3.04)

where ∇f is the gradient vector of f and ∇²f is the Hessian matrix of f. Then, the model F is minimized in the region ||y|| ≤ Δ, for some Δ > 0. The method used to do the minimization is the CG-Steihaug method with a special sparse matrix multiplication. A test value ρ is built from the resulting minimum point x1 and is
ρ = (f(x) - f(x1)) / (F(0) - F(x1 - x)) (3.05)
If ρ is close to 1, then F is considered a good model for f within the trust region. The center of the trust region is moved to x1 and the trust region radius is increased. If ρ is far away from 1, then the radius of the trust region is decreased. The process is repeated, i.e., a minimum for F in the new trust region is found. The criterion to stop the process is that ||∇f|| is sufficiently small at the center of the current trust region, i.e., a local minimum of f has been attained. It can be proven that this method converges.

3.5 The SBS C++ API
A C++ applications programming interface (API) has been developed recently for SBS. It works above the mental matter® library and requires libmentalmatter.lib at linking time. The mental matter library is initialized and terminated internally within the SBS library. There are three main SBS classes:

miSbs_surface
miSbs_ogl_view
miSbs_solver
Access and creation methods for these are provided by miSbs_module.
Initialization of miSbs_module is implicit and is done when first accessing the class. An instance of miSbs_module is returned by its static method get. The terminate method must be called when unloading the library. SBS uses objects from the miCapi_cl_subsurf class of the mental matter library, which are wrapped into a miSbs_surface object via the create_surface method. A plugin writer may provide the SBS API with an instance of miCapi_cl_subsurf or with a tessellated mesh in the form of a miGeoBox. Another possibility is not to provide a surface, in which case the library creates a wrapper holding an empty subdivision surface. Other methods in the miSbs_module class include create_viewer and get_solver. The miSbs_surface class is used in calls to miSbs_solver to access the SBS shaping algorithm and in miSbs_ogl_view for display and interaction. miSbs_surface implementations are instantiated using the create_surface method of miSbs_module, and are destroyed with the destroy method. Other methods in the miSbs_surface class include get_subsurf, get_depth and convert (which converts mesh indices to and from surf indices). Other related classes are provided to speed up integration. For instance, miSbsMax_mesh hides the technical details of converting 3ds max® meshes to and from the miSbs_surface object.
The miSbs_ogl_view class may be used to display an instance of miSbs_surface. miSbs_ogl_view implementations are instantiated using the create_viewer method of miSbs_module and are destroyed with the destroy method. The miSbs_ogl_view entity does not maintain any reference to a given miSbs_surface instance; it only carries data related to its mesh representation (possibly simplified), and auxiliary graphic data such as OpenGL® contexts and triangle strip buffers. Methods of the miSbs_ogl_view class include set_settings and update, as well as a set of methods around the 2D projection of mesh vertices and faces. These include get_face_in_pixel, get_vtxs_in_face, get_vtx_pixels, get_2d_distance_from_vtx, get_face_color, set_face_color, reset_face_colors, set_pixel_buffer, get_pixels_no, and copy_pixel_buffer. An additional helper class, miSbs_win32_viewport, is provided to be hooked up on an existing window. It provides basic refresh and message handling capabilities.
A third class, called miSbs solver, is exposed in the API to perform SBS operations. It is implemented as a static, stateless instance and is accessed via the get solver method of miSbs module. Its methods include get default settings, update (the main algorithm that determines 3D shape from 2D shading), and cancel.
3.6 A Prototype: SBS Plugin for 3ds max
SBS has recently been implemented as a modifier for 3ds max from Autodesk®/discreet®. The modifier allows an artist to perform SBS modeling by shading in 2D with a simple brush directly in the 3ds max viewports. All the functionality of the SBS modeling library is available within the plugin. FIG. 13 shows a screenshot 350 of the SBS Modifier in 3ds max.
The plugin features include:
Shading tool with a basic 2D paint package and the ability to load and save shadings;
Light controls;
Parameter tuning;
Update of surface shape based on shading information, light direction and input parameters;
Undo/redo that is internal to the modifier;
Tool for selecting the area to be updated (masking); and
Selection tool with standard subdivision surface manipulations.
3.7 Extensions to SBS
It is contemplated that the above-described systems and techniques may be enhanced in a number of ways. For example, these systems and techniques may be modified to include the following.
Mesh-Based SBS: The ability to run the SBS process on polygon meshes, without first converting to and/or using properties of subdivision surfaces.
Real-Time Mesh Display: The ability to display large, complex meshes at an interactive rate. Other modifications may include view-dependent simplification and custom mesh operations.
Trimming: The ability to trim arbitrarily across surface faces.
Creasing: The ability to crease arbitrarily across surface faces.
Shape from Contour Lines: The ability to sketch contour lines to produce an initial 3D shape, as a tool to complete the SBS modeling process.
NURBS-Based SBS: The ability to run the SBS process on Non-Uniform Rational B-Spline (NURBS) surfaces.
IV. Shaping Methods and Algorithms Implemented in the SBS Modeler

4.1 Introduction
The goal of the Shape-By-Shading (SBS) process is to interpret two-dimensional (2D) shading as a three-dimensional (3D) shape. There are described herein a number of techniques and systems for accomplishing this. It is assumed, for the purposes of the present description, that a grayscale shading has been done on the projection of a surface onto a viewing plane, that the underlying surface is continuous, and that the grayscale intensities of the projected surface can be described by a mathematical scheme. Let H denote the height values in camera space associated with the part of the surface visible in the viewing window, let I_H denote the intensity of the surface given lighting condition ℓ, let Î denote the intensity of the shading, and let C_H denote the curvature of the surface. The SBS shading algorithm tries to find increments Γ to be added to the height values H that minimize the following function:

f(Γ) = ∬ ( |I_{H+Γ}(x, y) − Î(x, y)|² + λ C_{H+Γ}(x, y) ) dA    (4.01)

where λ is a positive constant called the smoothing coefficient and the integration is performed over the area of the projected surface in the viewing window. The ideal result is a new set of surface heights with smoothness determined by λ whose intensities match those of the shading. The 2D shading results in a 3D modification.
The SBS process must be done in an efficient way and one that does not disturb the continuity of the underlying surface. In order to move from a theoretical setting to a computational one it is necessary to discretize, which is discussed in the next section.
4.2 Rasterization

In order to find a solution for SBS, the problem must be adequately described and rasterized for discrete calculation.

4.2.1 Model View
The viewing window is rasterized as a rectangular grid of pixels, M, that have integer-valued coordinates (u, v) and corresponding camera space coordinates (x, y). This rasterization is called the "model view." A neighborhood of raster pixel (u₀, v₀) ∈ M is defined to be all pixels (u, v) ∈ M such that |u₀ − u| ≤ 1 and |v₀ − v| ≤ 1, including the pixel itself. The neighborhood of a set of pixels A is denoted by nbhd(A). A pixel is defined to be on the boundary of set A if the pixel is a member of A but one or more of its neighbors is not in A. The boundary of A is denoted ∂A. The interior of A, denoted A°, is defined to be A° = A \ ∂A, where the symbol "\" denotes set subtraction. Pixels in the interior of the model view have a neighborhood containing 9 pixels.
4.2.2 Projection of the Surface

The SBS algorithm assumes orthographic projection of the surface onto the viewing plane. S denotes the set of pixels in the model view that intersect the projected surface. The height field of the surface, denoted by H, is defined over the pixels in S and contains the heights of the surface in camera space. H(u, v) is the floating-point height of the part of the surface visible at pixel (u, v) in the model view.
4.2.3 First Derivatives

It is assumed that raster points in the model view are spaced so that the vertical and horizontal distances between them are equal, i.e., that the model view pixels are square and of uniform size. However, it should be noted that it would also be possible for the presently described systems and techniques to be implemented with respect to non-square pixels. Non-square pixels can be used, for example, in a mesh-based implementation of SBS. In that case, projected primitive vertices can be used to define the pixels.

Let c be the floating-point width (and height), in camera space, between neighboring raster points. The discrete first derivative in the u-direction of a surface with height field H is based on a simple slope formula across the two pixels that straddle the point of interest and is defined to be

D₁H(u, v) = ( H(u + 1, v) − H(u − 1, v) ) / 2c    (4.02)
Similarly, the discrete first derivative in the v-direction is defined to be

D₂H(u, v) = ( H(u, v + 1) − H(u, v − 1) ) / 2c    (4.03)
4.2.4 Surface Normal

A discrete normal to a surface with height field H is defined to be

N_H(u, v) = < −D₁H(u, v), −D₂H(u, v), 1 >    (4.04)

which is the cross product of the surface tangent vectors < 1, 0, D₁H(u, v) > and < 0, 1, D₂H(u, v) >.
4.2.5 Lighting Conditions and Intensity

In SBS it is assumed that the lighting condition is Lambertian with one light source. However, it would also be possible to use another lighting model, and/or to use multiple light sources. The light source is described by a unit vector ℓ that points in the direction of the light, which is infinitely far away. The discrete intensity I of a given surface with height field H and light vector ℓ is defined by

I_H(u, v) = ( N_H(u, v) / ||N_H(u, v)|| ) · ℓ    (4.05)

This formula produces scalar values between 0.0 (black) and 1.0 (white), and implies that areas of the surface that face toward the light source are lighter than those parts that face away.
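The discrete operators in Formulas 4.02 through 4.05 can be sketched as follows. This is an illustrative reading of the definitions above, not code from the SBS library; in particular, clamping the result of the dot product to [0.0, 1.0] is an added assumption about how values outside the displayable range would be handled.

```python
# Illustrative sketch of the discrete operators in Formulas 4.02-4.05:
# central first differences, the discrete normal, and Lambertian intensity,
# evaluated at an interior pixel of a height field H[u][v].
import math

def d1(H, u, v, c=1.0):
    # Discrete first derivative in the u-direction (Formula 4.02).
    return (H[u + 1][v] - H[u - 1][v]) / (2.0 * c)

def d2(H, u, v, c=1.0):
    # Discrete first derivative in the v-direction (Formula 4.03).
    return (H[u][v + 1] - H[u][v - 1]) / (2.0 * c)

def normal(H, u, v, c=1.0):
    # Discrete surface normal (Formula 4.04): < -D1 H, -D2 H, 1 >.
    return (-d1(H, u, v, c), -d2(H, u, v, c), 1.0)

def intensity(H, u, v, light, c=1.0):
    # Lambertian intensity (Formula 4.05): normalized normal dotted with the
    # unit light vector; clamped to the displayable range (an assumption).
    n = normal(H, u, v, c)
    norm = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    dot = sum(ni * li for ni, li in zip(n, light)) / norm
    return max(0.0, min(1.0, dot))

# A flat height field lit from directly behind the viewer has intensity 1.0.
flat = [[0.0] * 3 for _ in range(3)]
print(intensity(flat, 1, 1, (0.0, 0.0, 1.0)))  # 1.0
```

Tilting the surface away from the light darkens it, exactly as the prose above describes.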
4.2.6 Second Derivatives

The discrete second derivatives of a surface with height field H are based on using the alternative formulas

( H(u + 1, v) − H(u, v) ) / c    (4.051)

and

( H(u, v + 1) − H(u, v) ) / c    (4.052)

for the discrete first derivatives in the u- and v-directions, respectively. The discrete second derivative in the u-direction is defined to be

D₁²H(u, v) = ( H(u + 1, v) + H(u − 1, v) − 2H(u, v) ) / c²    (4.06)

Similarly,

D₂²H(u, v) = ( H(u, v + 1) + H(u, v − 1) − 2H(u, v) ) / c²    (4.07)

and

D₁,₂²H(u, v) = ( J(u, v) − 4H(u, v) ) / c²    (4.08)

where

J(u, v) = H(u + 1, v) + H(u − 1, v) + H(u, v + 1) + H(u, v − 1).    (4.081)
4.2.7 Curvature

The discrete curvature of a surface with height field H is defined to be the nonnegative scalar

C_H(u, v) = || < D₁²H(u, v), D₂²H(u, v), D₁,₂²H(u, v) > ||²    (4.09)
4.2.8 Discrete Function to Be Minimized

A discretized version of Function 4.01 can be made by using the definitions of discrete intensity and curvature. Since those formulas depend on neighboring pixel values, only pixels that are both in the interior of the model view and in the interior of the projected surface will be considered as being contained in the area of interest. The discrete version of Function 4.01 is defined to be

f(Γ) = Σ_{(u,v) ∈ S° ∩ M°} ( |I_{H+Γ}(u, v) − Î(u, v)|² + λ C_{H+Γ}(u, v) )    (4.10)

where λ is a positive constant called the smoothing coefficient and Γ is a set of height increments that is defined on the projected surface S. The condition

Γ(u, v) = 0.0 for (u, v) ∈ S ∩ (∂S ∪ ∂M)

is imposed in order to avoid artifacts on the surface at the boundary of its projection onto the model view or at the edge of the model view.
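Function 4.10 can be evaluated directly from the definitions above. The following sketch (not the patent's implementation) assumes the target shading intensity is supplied as a mapping from pixel to gray value, and sums the intensity error and curvature terms over a list of interior pixels.

```python
# Illustrative evaluation of the discrete objective (Function 4.10) on a toy
# height field; a sketch, not the patent's implementation. The target shading
# intensity is assumed to be supplied as a dict from pixel to gray value.
import math

def intensity(H, u, v, light, c=1.0):
    # Formulas 4.02-4.05: central differences, discrete normal, Lambertian dot.
    nx = -(H[u + 1][v] - H[u - 1][v]) / (2.0 * c)
    ny = -(H[u][v + 1] - H[u][v - 1]) / (2.0 * c)
    norm = math.sqrt(nx * nx + ny * ny + 1.0)
    return (nx * light[0] + ny * light[1] + light[2]) / norm

def curvature(H, u, v, c=1.0):
    # Formulas 4.06-4.09: squared norm of the discrete second derivatives.
    d11 = (H[u + 1][v] + H[u - 1][v] - 2.0 * H[u][v]) / c ** 2
    d22 = (H[u][v + 1] + H[u][v - 1] - 2.0 * H[u][v]) / c ** 2
    j = H[u + 1][v] + H[u - 1][v] + H[u][v + 1] + H[u][v - 1]
    d12 = (j - 4.0 * H[u][v]) / c ** 2
    return d11 * d11 + d22 * d22 + d12 * d12

def objective(H, gamma, target, interior, light, lam, c=1.0):
    # Function 4.10: intensity error plus lambda-weighted curvature, summed
    # over interior pixels, evaluated on the incremented heights H + gamma.
    Hg = [[H[u][v] + gamma[u][v] for v in range(len(H[0]))]
          for u in range(len(H))]
    total = 0.0
    for (u, v) in interior:
        err = intensity(Hg, u, v, light, c) - target[(u, v)]
        total += err * err + lam * curvature(Hg, u, v, c)
    return total

flat = [[0.0] * 3 for _ in range(3)]
zero = [[0.0] * 3 for _ in range(3)]
print(objective(flat, zero, {(1, 1): 1.0}, [(1, 1)], (0.0, 0.0, 1.0), 0.5))  # 0.0
```

A flat surface lit head-on already matches a white shading, so both the error term and the curvature term vanish.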
4.3 Reduction of the Function to Be Minimized

In this section it is shown that Function 4.10 need only involve a sum over pixels in a neighborhood of those pixels whose intensities were modified by the user, rather than all pixels in S° ∩ M°. In addition, it is shown that the dimension of the function can be reduced by representing the proposed height increments in a more compact way.
4.3.1 Modified Pixels

Let

P = { p₀, p₁, ..., p_{m−1} }, where pᵢ = (uᵢ, vᵢ) ∈ M for each i,    (4.101)

be the set of pixels in the model view whose intensities the user has modified. P is called the set of "modified pixels" and it is assumed that

P ⊆ S° ∩ M°    (4.102)

since neighboring information is needed to calculate intensity.

4.3.2 View Code
The view code aids in the development of reductions in calculation. It categorizes pixels by proposed height increment. The notions of path and connected set are needed to build it. A path from pixel (u₀, v₀) to pixel (u_j, v_j) is defined to be any sequence of pixels

(u₀, v₀), (u₁, v₁), ..., (u_j, v_j)    (4.103)

such that

(uᵢ, vᵢ) ∈ nbhd({ (u_{i−1}, v_{i−1}) }) for 1 ≤ i ≤ j.    (4.104)
Set A is a connected component of set B if A ⊆ B and for each (u₀, v₀), (u₁, v₁) ∈ A there is a path from (u₀, v₀) to (u₁, v₁) completely contained in A. The view code assumes that the height increments Γ are constant on connected components of S \ P. In order to avoid surface artifacts, it is assumed that this constant is 0.0 if the connected component intersects ∂S ∪ ∂M.

Let T = (T₋₁, T₀, T₁, ..., T_{n−1}) be a partition of S \ P such that T₋₁ is the union of all connected components that intersect ∂S ∪ ∂M and Tᵢ, for 0 ≤ i < n, is a connected component that is maximal in the sense that there is no connected component of S \ P that properly contains it. Then the set Γ of increments has at most m + n unique non-zero values, one for each of the modified pixels and one for each Tᵢ such that i ≠ −1. The "view code" is defined to be

V(u, v) = i if (u, v) = pᵢ ∈ P; V(u, v) = m + i if (u, v) ∈ Tᵢ with i ≠ −1; V(u, v) = −1 if (u, v) ∈ T₋₁.    (4.11)

This means, for instance, that if 0 ≤ V(u₀, v₀) < m then it is known that (u₀, v₀) is a modified pixel.
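The partition into connected components and the resulting view code can be built with a standard flood fill over the 8-connected pixel neighborhood of Section 4.2.1. The following is an illustrative sketch of Formula 4.11; the set and dictionary representations are assumptions, not the patent's data structures.

```python
# Illustrative construction of the view code (Formula 4.11). Pixels in P get
# codes 0..m-1; each maximal connected component of S \ P gets code m+i,
# except components touching the boundary set, which get code -1.
def view_code(S, P, boundary):
    # S: set of (u, v) pixels on the projected surface; P: ordered list of
    # modified pixels; boundary: pixels of S lying on dS or dM.
    code = {p: i for i, p in enumerate(P)}
    next_code = len(P)
    rest = S - set(P)
    seen = set()
    for start in sorted(rest):
        if start in seen:
            continue
        # Flood-fill one 8-connected component of S \ P.
        comp, stack = [], [start]
        seen.add(start)
        while stack:
            (u, v) = stack.pop()
            comp.append((u, v))
            for du in (-1, 0, 1):
                for dv in (-1, 0, 1):
                    q = (u + du, v + dv)
                    if q in rest and q not in seen:
                        seen.add(q)
                        stack.append(q)
        if any(q in boundary for q in comp):
            c = -1                      # component pinned to 0.0 increments
        else:
            c, next_code = next_code, next_code + 1
        for q in comp:
            code[q] = c
    return code

codes = view_code({(u, 0) for u in range(5)}, [(2, 0)], {(0, 0), (4, 0)})
# Both flanking components touch the boundary, so only the modified pixel
# (2, 0) receives a non-negative code.
```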
Also, if V(u₀, v₀) = −1 then pixel (u₀, v₀) will not be altered as a result of the 2D shading information. This is also true of pixels for which no view code is assigned, i.e., pixels not on the projection of the surface onto the model view.

4.3.3 Function Pixels and a First Reduction in Calculation
Moving the heights of a section of the surface by a constant amount does not change the intensity or curvature of the interior of that section. Since the pixels in each Tᵢ are moved by a constant amount and T is a partition of S \ P, the only possible pixels at which

I_{H+Γ}(u, v) ≠ I_H(u, v)    (4.121)

are contained in nbhd(P) ∩ S. Let

Q = nbhd(P) ∩ S° ∩ M° = { q₀, q₁, ..., q_{t−1} }    (4.12)

where qᵢ denotes a pixel (uᵢ, vᵢ) in the model view. Q is called the set of "function pixels," and Function 4.10 can now be reduced to

f(Γ) = Σ_{i=0}^{t−1} ( |I_{H+Γ}(qᵢ) − Î(qᵢ)|² + λ C_{H+Γ}(qᵢ) ) + K    (4.13)

where K, which is the sum of

|I_H(u, v) − Î(u, v)|² + λ C_H(u, v)    (4.131)

over the set

(S° ∩ M°) \ Q,    (4.132)

is a constant.
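The reduction to function pixels is a simple set computation; the sketch below assumes the interiors of S and M are given as plain pixel sets, which is an illustrative choice rather than the patent's representation.

```python
# Illustrative computation of the "function pixels" Q of Formula 4.12.
def nbhd(pixels):
    # 8-connected neighborhoods (including the pixels themselves).
    return {(u + du, v + dv) for (u, v) in pixels
            for du in (-1, 0, 1) for dv in (-1, 0, 1)}

def function_pixels(P, S_int, M_int):
    # Q = nbhd(P) intersected with the interiors of S and M.
    return nbhd(P) & S_int & M_int

grid = {(u, v) for u in range(10) for v in range(10)}
Q = function_pixels({(5, 5)}, grid, grid)
print(len(Q))  # 9
```

However large the projected surface, the variable part of Formula 4.13 only visits these |Q| = t pixels; everything else is the constant K.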
4.3.4 Increment Vector and a Second Reduction in Calculation

Since there are at most m + n unique non-zero height increments in Γ, the function to be minimized need only be a function of those increments, as identified by the view code. Let

x = < x₀, x₁, ..., x_{m+n−1} >    (4.14)

where xᵢ is the increment value of Γ for all pixels such that V(u, v) = i. x is called the "increment vector." For

(u, v) ∈ S \ T₋₁,    (4.141)

let

H_x(u, v) = H(u, v) + x_{V(u,v)}.    (4.15)

Function 4.10 can then be reduced to a minimization of

f(x) = Σ_{(u,v) ∈ S° ∩ M°} ( |I_{H_x}(u, v) − Î(u, v)|² + λ C_{H_x}(u, v) )    (4.16)
4.3.5 Objective Function

Functions 4.13 and 4.16 can be combined to form the following function which, when minimized, is equivalent to the minimization of Function 4.10:

f(x) = Σ_{i=0}^{t−1} ( |I_{H_x}(qᵢ) − Î(qᵢ)|² + λ C_{H_x}(qᵢ) )    (4.17)

The minimization can now also be unconstrained. Function 4.17 is the final version of the discrete function to be minimized by the shaping algorithm. It is called the "objective function" and is of dimension m + n.
4.4 The Trust-Region Newton-CG Method

4.4.1 Overview

SBS uses the Trust-Region Newton-CG method to minimize the objective function (Function 4.17). The main ideas of Trust-Region methods are to set up a quadratic model for the function, to minimize the model function within a "trust region," to adjust the trust region according to certain criteria, to minimize the model function within the new trust region, to adjust the region again, to minimize again, and so on. Given certain restrictions, such a method is guaranteed to converge to a point corresponding to a local minimum of the original function.

The Trust-Region model of the objective function centered at an increment vector

x⁰ = < x⁰₀, x⁰₁, ..., x⁰_{m+n−1} >    (4.171)

is

F₀(x) = f(x⁰) + (∇f(x⁰))ᵀ x + ½ xᵀ ∇²f(x⁰) x    (4.18)

where ∇f is the gradient vector

∇f = < ∂f/∂x₀, ∂f/∂x₁, ..., ∂f/∂x_{m+n−1} >    (4.181)

and ∇²f is the Hessian matrix, whose (j, k) entry is ∂²f/∂x_j ∂x_k.    (4.182)

The "Newton" in the Trust-Region Newton-CG name comes from the fact that the Hessian matrix is used in the model. Some other, usually positive definite, matrix may be used instead, in which case "Newton" is dropped from the name. In the case of SBS, use of the Hessian matrix is convenient and allows for weakened convergence conditions to be used.

If ||∇f(x⁰)|| is sufficiently small, i.e., less than some convergence threshold, then it is concluded that a local minimum of f occurs at x⁰ and the process is finished. Otherwise, the model F₀ is minimized in a circular trust region ||x|| ≤ Δ₀, for some positive Δ₀, via the CG-Steihaug method, described below. Let p⁰ be the resulting increment vector at which a minimum of the model in the trust region occurs. If ||∇f(x⁰ + p⁰)|| is less than the convergence threshold, then it is concluded that a local minimum of f occurs at x⁰ + p⁰ and the process is finished. Otherwise, calculate the test value

ρ⁰ = ( f(x⁰) − f(x⁰ + p⁰) ) / ( F₀(0) − F₀(p⁰) )    (4.19)

If ρ⁰ passes a threshold, then the actual reduction and predicted reduction are somewhat close to one another and the center of the trust region is moved to x¹ = x⁰ + p⁰. Otherwise, the center of the trust region stays at x¹ = x⁰. If ρ⁰ is close to 1, then F₀ is considered a good model for f within the trust region and the radius is increased. If ρ⁰ is far away from 1, then the radius of the trust region is decreased. The new region radius is labeled Δ₁, a new model F₁ centered at x¹ is built, and the minimization process is repeated. ||∇f(x¹ + p¹)|| is tested for closeness to 0. If it is not close enough, then the process is repeated until ||∇f(xⁱ + pⁱ)|| is under the convergence threshold. A local minimum of f is found at xⁱ + pⁱ. Details about how to calculate the model (in particular how to find f, ∇f and ∇²f) are given in Section 4.5, calculations needed for the CG-Steihaug method are given in Section 4.6, and the convergence of ||∇f(xⁱ + pⁱ)|| is discussed in Section 4.7.
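The outer loop just described can be sketched generically as follows. This is illustrative only: the inner minimization here uses a simple Cauchy-point step rather than the CG-Steihaug method of Section 4.6, the radius-update factors (2.0 and 0.25) and thresholds are assumptions, and f, grad and hess are supplied by the caller.

```python
# Generic sketch of the Trust-Region outer loop: ratio test, radius update,
# center move. Illustrative only; not the patent's implementation.
def norm(v):
    return sum(x * x for x in v) ** 0.5

def cauchy_point(g, H, radius):
    # Steepest-descent minimizer of the quadratic model within ||p|| <= radius.
    gn = norm(g)
    Hg = [sum(H[j][k] * g[k] for k in range(len(g))) for j in range(len(g))]
    gHg = sum(gi * hi for gi, hi in zip(g, Hg))
    tau = 1.0 if gHg <= 0 else min(gn ** 3 / (radius * gHg), 1.0)
    return [-tau * radius * gi / gn for gi in g]

def trust_region(f, grad, hess, x, radius=1.0, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        g = grad(x)
        if norm(g) < tol:
            return x                      # ||grad f|| small: local minimum
        H = hess(x)
        p = cauchy_point(g, H, radius)
        # Model decrease F(0) - F(p) = -(g.p + 0.5 p.H p), per Formula 4.18.
        gp = sum(gi * pi for gi, pi in zip(g, p))
        Hp = [sum(H[j][k] * p[k] for k in range(len(p))) for j in range(len(p))]
        pHp = sum(pi * hi for pi, hi in zip(p, Hp))
        predicted = -(gp + 0.5 * pHp)
        trial = [xi + pi for xi, pi in zip(x, p)]
        actual = f(x) - f(trial)
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > 0.1:                     # move threshold chosen in (0, 1/4)
            x = trial
        radius = radius * 2.0 if rho > 0.75 else (radius * 0.25 if rho < 0.25 else radius)
    return x

# Toy objective: f(x) = (x0 - 1)^2 + (x1 + 2)^2, minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)]
hess = lambda x: [[2.0, 0.0], [0.0, 2.0]]
print(trust_region(f, grad, hess, [0.0, 0.0]))
```

In SBS itself, f, grad and hess would come from the residual formulas of Section 4.5, and the inner solve from Section 4.6.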
4.4.2 Pseudo Code

FIG. 14 shows a pseudo code listing of the described technique.
4.5 Computation of the Trust-Region Model

The goal of this section is to build tools that will aid in the calculation of Function 4.18, the quadratic model used for the Trust-Region method. In particular, residuals are used to find formulas for f(x), ∇f(x) and ∇²f(x).

4.5.1 Using Residuals to Obtain the Function, its Gradient and its Hessian
Recall that Q = nbhd(P) ∩ S° ∩ M° = { q₀, q₁, ..., q_{t−1} } and define the residuals of the objective function to be

rᵢ(x) = I_{H_x}(qᵢ) − Î(qᵢ)    (residuals of the first kind)
r_{t+i}(x) = √λ D₁²H_x(qᵢ)    (residuals of the second kind)
r_{2t+i}(x) = √λ D₂²H_x(qᵢ)    (residuals of the third kind)
r_{3t+i}(x) = √λ D₁,₂²H_x(qᵢ)    (residuals of the fourth kind)

for 0 ≤ i < t. Define the residual vector of the objective function to be the concatenation of all residuals of the first, second, third and fourth kinds, as follows:

r(x) = < r₀(x), r₁(x), ..., r_{4t−1}(x) >    (4.22)

Now the norm squared of the residual vector can be broken down in terms of intensity and curvature:

||r(x)||² = Σ_{i=0}^{t−1} |I_{H_x}(qᵢ) − Î(qᵢ)|² + λ Σ_{i=0}^{t−1} || < D₁²H_x(qᵢ), D₂²H_x(qᵢ), D₁,₂²H_x(qᵢ) > ||²
= Σ_{i=0}^{t−1} ( |I_{H_x}(qᵢ) − Î(qᵢ)|² + λ C_{H_x}(qᵢ) )    (4.23)

Thus,

f(x) = ||r(x)||² = r(x)ᵀ r(x)    (4.24)
In fact, the residuals were chosen so that the above equation is true. Formulas for the gradient and Hessian of f can also be derived from the residuals. Let

∇rᵢ = < ∂rᵢ/∂x₀, ∂rᵢ/∂x₁, ..., ∂rᵢ/∂x_{m+n−1} >    (4.25)

Then

∂f/∂x_j = Σ_{i=0}^{4t−1} 2 rᵢ(x) (∂rᵢ/∂x_j)(x)    (4.26)

Thus,

∇f(x) = 2 Σ_{i=0}^{4t−1} rᵢ(x) ∇rᵢ(x)    (4.27)

A similar derivation leads to the following formula for the Hessian:

∇²f(x) = 2 Σ_{i=0}^{4t−1} ( ∇rᵢ(x) ∇rᵢ(x)ᵀ + rᵢ(x) ∇²rᵢ(x) )    (4.28)

where ∇rᵢ is the residual gradient vector, as previously defined, and ∇²rᵢ is the residual Hessian, whose (j, k) entry is

∂²rᵢ/∂x_j ∂x_k.    (4.29)

Simple formulas will be derived below for each of

(∂rᵢ/∂x_j)(x)    (4.291)

and

(∂²rᵢ/∂x_j ∂x_k)(x)    (4.292)

by breaking down the residual formulas and then taking derivatives. The view code is used to simplify the calculations. The resulting formulas show that these two values must be 0 except on a small set of known indices. This information can be used to find ∇rᵢ(x) and ∇²rᵢ(x), and in turn to find f(x), ∇f(x) and ∇²f(x). The net result is an easy way to calculate the Trust-Region model function (Function 4.18).

4.5.2 Residuals of the First Kind
The formula for intensity can be used to write residuals of the first kind as

rᵢ(x) = ( N_{H_x}(qᵢ) / ||N_{H_x}(qᵢ)|| ) · ℓ − Î(qᵢ)    (4.30)

for 0 ≤ i < t. Using the quotient rule to take partial derivatives it follows that

∂rᵢ/∂x_j = [ ( ∂/∂x_j N_{H_x}(qᵢ) ) · ℓ ] / ||N_{H_x}(qᵢ)|| − [ ( N_{H_x}(qᵢ) · ℓ ) ( ∂/∂x_j ||N_{H_x}(qᵢ)|| ) ] / ||N_{H_x}(qᵢ)||²    (4.31)

and a corresponding, lengthier expression, obtained by differentiating (4.31) once more, for the second partial derivatives

∂²rᵢ/∂x_j ∂x_k.    (4.32)

Key parts of Formulas (4.31) and (4.32) can be broken down into more manageable parts. The chain rule can be used on the definition of

||N_{H_x}(qᵢ)||    (4.321)

to obtain

∂/∂x_j ||N_{H_x}(qᵢ)|| = ( N_{H_x}(qᵢ) · ∂/∂x_j N_{H_x}(qᵢ) ) / ||N_{H_x}(qᵢ)||    (4.33)

and taking the partial derivative of

N_{H_x}(qᵢ) · ℓ    (4.331)

leads to

∂/∂x_j ( N_{H_x}(qᵢ) · ℓ ) = ( ∂/∂x_j N_{H_x}(qᵢ) ) · ℓ    (4.34)

Formulas (4.33) and (4.34) can be broken down even further by finding formulas for the partial derivatives of D₁H_x(qᵢ) and D₂H_x(qᵢ). Recall that

H_x(u, v) = H(u, v) + x_{V(u,v)},    (4.341)

where V is the view code. The definition of the discrete first derivative in the u-direction gives

D₁H_x(qᵢ) = ( H_x(qᵢ + δ₁) − H_x(qᵢ − δ₁) ) / 2c    (4.35)

where δ₁ is the raster point (1, 0), which means

∂/∂x_j D₁H_x(qᵢ) = ( ∂x_{V(qᵢ+δ₁)}/∂x_j − ∂x_{V(qᵢ−δ₁)}/∂x_j ) / 2c    (4.36)

Similarly,

∂/∂x_j D₂H_x(qᵢ) = ( ∂x_{V(qᵢ+δ₂)}/∂x_j − ∂x_{V(qᵢ−δ₂)}/∂x_j ) / 2c    (4.37)

where δ₂ is the raster point (0, 1). The view code gives

∂x_{V(u,v)}/∂x_j = 1 if V(u, v) = j, and 0 otherwise.    (4.38)

Now that the formulas for the first and second partial derivatives for residuals of the first kind have been broken down as much as possible, it is time to use the atomic pieces to calculate back up the chain. Substitute Formula (4.38) into Formulas (4.36) and (4.37) to calculate

∂/∂x_j D₁H_x    (4.381)

and

∂/∂x_j D₂H_x.    (4.382)

Then substitute these into Formulas (4.33) and (4.34) to calculate

∂/∂x_j ||N_{H_x}(qᵢ)||    (4.383)

and

∂/∂x_j ( N_{H_x}(qᵢ) · ℓ ).    (4.384)

Lastly, substitute the results into Formulas (4.31) and (4.32) to calculate

(∂rᵢ/∂x_j)(x)    (4.385)

and

(∂²rᵢ/∂x_j ∂x_k)(x)    (4.386)

as desired. Note in particular that

(∂rᵢ/∂x_j)(x) = 0 if j ∉ { V(qᵢ + δ₁), V(qᵢ − δ₁), V(qᵢ + δ₂), V(qᵢ − δ₂) }    (4.39)

and

(∂²rᵢ/∂x_j ∂x_k)(x) = 0 if j or k ∉ { V(qᵢ + δ₁), V(qᵢ − δ₁), V(qᵢ + δ₂), V(qᵢ − δ₂) }    (4.40)
4.5.3 Residuals of the Second, Third and Fourth Kinds

The second residuals are

r_{t+i}(x) = √λ D₁²H_x(qᵢ) = √λ ( H_x(qᵢ + δ₁) + H_x(qᵢ − δ₁) − 2H_x(qᵢ) ) / c²    (4.41)

for 0 ≤ i < t, which gives

∂/∂x_j r_{t+i}(x) = √λ ( ∂x_{V(qᵢ+δ₁)}/∂x_j + ∂x_{V(qᵢ−δ₁)}/∂x_j − 2 ∂x_{V(qᵢ)}/∂x_j ) / c²    (4.42)

Use Formula (4.38) for calculation of the above and note that

∂/∂x_j r_{t+i}(x) = 0 if j ∉ { V(qᵢ + δ₁), V(qᵢ − δ₁), V(qᵢ) }    (4.43)

and

∂²/∂x_j ∂x_k r_{t+i}(x) = 0 for all j and k.    (4.44)

The third residuals are

r_{2t+i}(x) = √λ D₂²H_x(qᵢ) = √λ ( H_x(qᵢ + δ₂) + H_x(qᵢ − δ₂) − 2H_x(qᵢ) ) / c²    (4.45)

for 0 ≤ i < t, which gives

∂/∂x_j r_{2t+i}(x) = √λ ( ∂x_{V(qᵢ+δ₂)}/∂x_j + ∂x_{V(qᵢ−δ₂)}/∂x_j − 2 ∂x_{V(qᵢ)}/∂x_j ) / c²    (4.46)

Use Formula (4.38) for calculation of the above and note that

∂/∂x_j r_{2t+i}(x) = 0 if j ∉ { V(qᵢ + δ₂), V(qᵢ − δ₂), V(qᵢ) }    (4.47)

and

∂²/∂x_j ∂x_k r_{2t+i}(x) = 0 for all j and k.    (4.48)

The fourth residuals are

r_{3t+i}(x) = √λ D₁,₂²H_x(qᵢ) = √λ ( J_x(qᵢ) − 4H_x(qᵢ) ) / c²    (4.49)

for 0 ≤ i < t, where

J_x(qᵢ) = H_x(qᵢ + δ₁) + H_x(qᵢ − δ₁) + H_x(qᵢ + δ₂) + H_x(qᵢ − δ₂),    (4.491)

which gives

∂/∂x_j r_{3t+i}(x) = √λ ( ∂x_{V(qᵢ+δ₁)}/∂x_j + ∂x_{V(qᵢ−δ₁)}/∂x_j + ∂x_{V(qᵢ+δ₂)}/∂x_j + ∂x_{V(qᵢ−δ₂)}/∂x_j − 4 ∂x_{V(qᵢ)}/∂x_j ) / c²    (4.50)

Use Formula (4.38) for calculation of the above and note that

∂/∂x_j r_{3t+i}(x) = 0 if j ∉ { V(qᵢ + δ₁), V(qᵢ − δ₁), V(qᵢ + δ₂), V(qᵢ − δ₂), V(qᵢ) }    (4.51)

and

∂²/∂x_j ∂x_k r_{3t+i}(x) = 0 for all j and k.    (4.52)

Note that for first, second, third and fourth residuals, i.e., for 0 ≤ i < 4t,

(∂rᵢ/∂x_j)(x) = 0 if j ∉ { V(qᵢ + δ₁), V(qᵢ − δ₁), V(qᵢ + δ₂), V(qᵢ − δ₂), V(qᵢ) }    (4.53)

and

(∂²rᵢ/∂x_j ∂x_k)(x) = 0 if j or k ∉ { V(qᵢ + δ₁), V(qᵢ − δ₁), V(qᵢ + δ₂), V(qᵢ − δ₂), V(qᵢ) }    (4.54)
4.6 Minimization of the Trust-Region Model: The CG-Steihaug Method

4.6.1 Overview

At each step in the Trust-Region Newton-CG method, SBS uses the CG-Steihaug method to find a minimum of the model function in the trust region. Conjugate gradient methods try to solve a linear system Ax = −b, where A is a symmetric matrix. The problem can be re-formulated as follows:

Ax = −b
⇔ (Ax)ᵀ = −bᵀ
⇔ xᵀAᵀ = −bᵀ    (4.55)
⇔ ∇φ(x) = 0, where φ(x) = ½ xᵀ A x + bᵀ x,

which puts the problem in terms of minimizing φ(x). Recall the Trust-Region model Fᵢ for f at step i as

Fᵢ(x) = f(xⁱ) + (∇f(xⁱ))ᵀ x + ½ xᵀ ∇²f(xⁱ) x    (4.551)

φ(x), A and b can be expressed in terms of the Trust-Region method as follows:

φ(x) = Fᵢ(x) − f(xⁱ),    (4.56)

A = ∇²f(xⁱ)    (4.57)

and

b = ∇f(xⁱ)    (4.58)

Note that A is, in fact, symmetric. In terms of the above, the function to be minimized by the CG-Steihaug method is

φ(x) = xᵀ ( ½ ∇²f(xⁱ) ) x + (∇f(xⁱ))ᵀ x    (4.59)
CG methods use the notion of conjugate gradients to build a sequence of vectors that converge to the minimum of φ(x). For a given i, a set of vectors

D = { d^{i,0}, d^{i,1}, d^{i,2}, ... }    (4.591)

is called conjugate (the "C" in CG) with respect to a matrix A if

(d^{i,j})ᵀ A d^{i,k} = 0    (4.592)

for all j ≠ k. Given such a set D at step i of the Trust-Region method, define a sequence p^{i,j} by p^{i,0} = 0 and, for j ≥ 0,

p^{i,j+1} = p^{i,j} + α_{i,j} d^{i,j}    (4.60)

where α_{i,j} is the one-dimensional minimizer of φ(x) along

x = p^{i,j} + α d^{i,j}.    (4.601)

The sequence p^{i,j} converges to the desired minimum pⁱ in at most m + n steps (where, as before, m + n is the dimension of the objective function). Now the goals are how to build the set D and how to find α_{i,j} for each j ≥ 0. Let

d^{i,0} = −∇f(xⁱ) = −b.    (4.602)

In other words, choose the first direction in which to search for a minimum to be the direction of steepest descent. This is the direction determined by the gradient (the "G" in CG). The residual of the system is defined as r(x) = Ax + b, and is used at each step to determine the next direction in which to search. Since p^{i,0} = 0, the residual at step 0 is r^{i,0} = b, and for j ≥ 0

r^{i,j+1} = A p^{i,j+1} + b = A ( p^{i,j} + α_{i,j} d^{i,j} ) + b = r^{i,j} + α_{i,j} A d^{i,j}    (4.61)

For j ≥ 0 define d^{i,j+1} as

d^{i,j+1} = −r^{i,j+1} + β_{i,j+1} d^{i,j}    (4.62)

where

β_{i,j+1} = ( (r^{i,j+1})ᵀ r^{i,j+1} ) / ( (r^{i,j})ᵀ r^{i,j} ).    (4.63)

Such a choice for β_{i,j+1} results in a d^{i,j+1} such that

(d^{i,j+1})ᵀ A d^{i,k} = 0    (4.631)

for all k < j + 1, which guarantees that the set D is conjugate for each i.
To calculate the minimizer α_{i,j}, find an expression for φ along x = p^{i,j} + α d^{i,j} by multiplying out terms and rearranging according to powers of α:

φ(p^{i,j} + α d^{i,j}) = ½ ( (d^{i,j})ᵀ A d^{i,j} ) α² + ( (d^{i,j})ᵀ A p^{i,j} + bᵀ d^{i,j} ) α + k′    (4.64)

where k′ is constant with respect to α. Next a derivative is taken with respect to α:

d/dα φ(p^{i,j} + α d^{i,j}) = ( (d^{i,j})ᵀ A d^{i,j} ) α + (d^{i,j})ᵀ A p^{i,j} + bᵀ d^{i,j}    (4.65)

Set the above quantity equal to 0 to find the critical point:

α_{i,j} = −( (d^{i,j})ᵀ A p^{i,j} + bᵀ d^{i,j} ) / ( (d^{i,j})ᵀ A d^{i,j} ) = −( (d^{i,j})ᵀ r^{i,j} ) / ( (d^{i,j})ᵀ A d^{i,j} )    (4.66)

In order to ensure that the critical point corresponds to a minimum, the second derivative is taken to find concavity:

d²/dα² φ(p^{i,j} + α d^{i,j}) = (d^{i,j})ᵀ A d^{i,j}    (4.67)

It is not known if the above quantity is positive. The case

(d^{i,j})ᵀ A d^{i,j} ≤ 0    (4.671)

is handled later, and the derivation continues assuming that

(d^{i,j})ᵀ A d^{i,j} > 0.    (4.672)

The formula for α_{i,j} can be written in a better way using the fact that for each j > 0,
(r^{i,j})ᵀ d^{i,k} = 0    (4.673)

for all k < j, a fact that will now be proved by induction. For j = 1,

(r^{i,1})ᵀ d^{i,0} = ( r^{i,0} + α_{i,0} A d^{i,0} )ᵀ d^{i,0}
= (r^{i,0})ᵀ d^{i,0} + α_{i,0} (d^{i,0})ᵀ A d^{i,0}
= (r^{i,0})ᵀ d^{i,0} − ( (d^{i,0})ᵀ r^{i,0} / (d^{i,0})ᵀ A d^{i,0} ) (d^{i,0})ᵀ A d^{i,0}
= 0    (4.68)

Now assume

(r^{i,j−1})ᵀ d^{i,k} = 0    (4.681)

for all k < j − 1. Then

(r^{i,j})ᵀ d^{i,k} = ( r^{i,j−1} + α_{i,j−1} A d^{i,j−1} )ᵀ d^{i,k} = (r^{i,j−1})ᵀ d^{i,k} + α_{i,j−1} (d^{i,j−1})ᵀ A d^{i,k} = 0    (4.69)

for k < j − 1, where

(r^{i,j−1})ᵀ d^{i,k} = 0    (4.691)

by the induction hypothesis and

(d^{i,j−1})ᵀ A d^{i,k} = 0    (4.692)

by the conjugacy of D. Now handle the k = j − 1 case:

(r^{i,j})ᵀ d^{i,j−1} = ( r^{i,j−1} + α_{i,j−1} A d^{i,j−1} )ᵀ d^{i,j−1}    (4.70)
= (r^{i,j−1})ᵀ d^{i,j−1} + α_{i,j−1} (d^{i,j−1})ᵀ A d^{i,j−1}
= (r^{i,j−1})ᵀ d^{i,j−1} − (d^{i,j−1})ᵀ r^{i,j−1}
= 0.

Hence

(r^{i,j})ᵀ d^{i,k} = 0    (4.701)

for all k < j. A new expression for α_{i,j} can now be presented:

α_{i,j} = −( (d^{i,j})ᵀ r^{i,j} ) / ( (d^{i,j})ᵀ A d^{i,j} ) = ( (r^{i,j})ᵀ r^{i,j} ) / ( (d^{i,j})ᵀ A d^{i,j} )    (4.71)

since d^{i,j} = −r^{i,j} + β_{i,j} d^{i,j−1} and (r^{i,j})ᵀ d^{i,j−1} = 0.
To summarize, the CG-Steihaug method is used at step i in the Trust-Region Newton-CG method to find a point at which a minimum of the model function occurs in the trust region. Three sequences are used to do this: d^{i,j}, r^{i,j} and p^{i,j}. d^{i,j} is a sequence of conjugate gradient direction vectors, r^{i,j} is a sequence of residuals, and p^{i,j} converges to the desired minimum pⁱ. The conjugacy of d^{i,j} guarantees convergence in at most m + n steps. The starting quantities for the sequences are p^{i,0} = 0, r^{i,0} = ∇f(xⁱ) = b, and d^{i,0} = −∇f(xⁱ) = −b. Subsequent entries in the sequences are given by using the helper formulas

α_{i,j} = ( (r^{i,j})ᵀ r^{i,j} ) / ( (d^{i,j})ᵀ A d^{i,j} )    (4.72)

and

β_{i,j+1} = ( (r^{i,j+1})ᵀ r^{i,j+1} ) / ( (r^{i,j})ᵀ r^{i,j} )    (4.73)

and are

p^{i,j+1} = p^{i,j} + α_{i,j} d^{i,j}    (4.74)

r^{i,j+1} = r^{i,j} + α_{i,j} A d^{i,j}    (4.75)

and

d^{i,j+1} = −r^{i,j+1} + β_{i,j+1} d^{i,j}    (4.76)

Up to this point, the method described is a standard CG method. The sequence p^{i,j} is generated until the residual falls under some threshold. The Steihaug variant of the CG method takes into account the cases of

(d^{i,j})ᵀ A d^{i,j} ≤ 0,    (4.761)

which would violate the assumption that the α_{i,j} calculated corresponds to a minimum, and of the minimum of the model being found outside the area of interest, i.e., the given trust region. To handle these cases, two extra stopping criteria are added. If

(d^{i,j})ᵀ A d^{i,j} ≤ 0    (4.762)

or if

||p^{i,j+1}|| ≥ Δᵢ    (4.763)

then the intersection of the trust boundary and direction d^{i,j} is assigned as the final point pⁱ. If p^{i,0} = 0, then

||p^{i,j}|| < ||p^{i,j+1}||    (4.764)

for each j ≥ 0, meaning that returning the intersection with the trust region boundary is the best that the sequence can do if the boundary is reached. The Trust-Region method then uses pⁱ to calculate the test value. Depending on the results, either the solution is seen as good enough or the Trust-Region method goes into iteration i + 1, with the center of the trust region x^{i+1} being either the same as xⁱ or moved to xⁱ + pⁱ.
4.6.2 Pseudo Code

In the pseudo code that follows, the sequence of points p^{i,j} is denoted by min_pt, the sequence of directions d^{i,j} is denoted by direction, and the sequence of residuals r^{i,j} is denoted by residual. "a dot b" is used to denote aᵀb.
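The iteration summarized above, with the two extra stopping criteria, can be rendered as the following sketch (not the patent's pseudo code). A is a dense symmetric matrix here; in SBS it would instead be applied via the sparse product of Section 4.6.3.

```python
# Illustrative CG-Steihaug solver for min 0.5 x^T A x + b^T x subject to
# ||x|| <= radius, following Formulas 4.72-4.76 plus the Steihaug criteria.
def cg_steihaug(A, b, radius, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n
    p = [0.0] * n
    r = list(b)                         # r^{i,0} = b
    d = [-x for x in b]                 # d^{i,0} = -b
    for _ in range(max_iter):
        Ad = [sum(A[j][k] * d[k] for k in range(n)) for j in range(n)]
        dAd = sum(dj * aj for dj, aj in zip(d, Ad))
        if dAd <= 0:                    # negative curvature: go to boundary
            return to_boundary(p, d, radius)
        rr = sum(x * x for x in r)
        alpha = rr / dAd                # Formula 4.72
        p_next = [pj + alpha * dj for pj, dj in zip(p, d)]
        if sum(x * x for x in p_next) ** 0.5 >= radius:
            return to_boundary(p, d, radius)   # step leaves the trust region
        p = p_next                      # Formula 4.74
        r = [rj + alpha * aj for rj, aj in zip(r, Ad)]   # Formula 4.75
        rr_next = sum(x * x for x in r)
        if rr_next ** 0.5 < tol:
            break
        beta = rr_next / rr             # Formula 4.73
        d = [-rj + beta * dj for rj, dj in zip(r, d)]    # Formula 4.76
    return p

def to_boundary(p, d, radius):
    # Smallest tau >= 0 with ||p + tau d|| = radius (quadratic formula).
    pp = sum(x * x for x in p)
    pd = sum(x * y for x, y in zip(p, d))
    dd = sum(x * x for x in d)
    tau = (-pd + (pd * pd + dd * (radius * radius - pp)) ** 0.5) / dd
    return [pj + tau * dj for pj, dj in zip(p, d)]

# Unconstrained minimum of 0.5 x^T A x + b^T x is the solution of Ax = -b.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [-1.0, -2.0]
print(cg_steihaug(A, b, radius=10.0))  # approximately [1/11, 7/11]
```

With a small radius the same call returns a point on the trust-region boundary instead, which is exactly the behavior the Trust-Region method relies on.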
4.6.3 Sparse Matrix Multiplication

By far the most expensive operation in each iteration of the CG-Steihaug method is the multiplication of A with the direction vector. Recall the following formula for A in terms of residuals of the objective function:

A = ∇²f(x) = 2 Σ_{i=0}^{4t−1} ( ∇rᵢ(x) ∇rᵢ(x)ᵀ + rᵢ(x) ∇²rᵢ(x) )    (4.77)

Let

Aᵢ(x) = ∇rᵢ(x) ∇rᵢ(x)ᵀ + rᵢ(x) ∇²rᵢ(x)    (4.771)

for each i. Let (aᵢ)_{jk} be the (j, k) element of Aᵢ and let eʲ be the vector of length m + n such that

eʲ_j = 1 and eʲ_k = 0 for k ≠ j.    (4.772)

Suppose that d is one of the conjugate direction vectors built by the CG-Steihaug method at a fixed step of the Trust-Region method. Then

A d = 2 Σ_{i=0}^{4t−1} Aᵢ d = 2 Σ_{i=0}^{4t−1} Σ_{j=0}^{m+n−1} ( Σ_{k=0}^{m+n−1} (aᵢ)_{jk} d_k ) eʲ    (4.78)

which requires (m + n)² multiplications for each i, one for each pair (j, k). However, the calculation can be done more efficiently by using Equations (4.53) and (4.54). These show that for each i, (aᵢ)_{jk} is nonzero only if j and k are both in

Vᵢ = { V(qᵢ), V(qᵢ + δ₁), V(qᵢ − δ₁), V(qᵢ + δ₂), V(qᵢ − δ₂) }.    (4.781)

Thus, for each i and j,

Σ_{k=0}^{m+n−1} (aᵢ)_{jk} d_k = Σ_{k ∈ Vᵢ} (aᵢ)_{jk} d_k,    (4.79)

which can be computed in O(1) multiplications. The overall order is then reduced from (m + n)² to (m + n).
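The saving can be illustrated on the first, rank-one term of Formula 4.77 alone (the rᵢ∇²rᵢ term has the same sparsity pattern and is handled identically). The gradients below are hypothetical examples with a handful of nonzero entries, standing in for the index sets Vᵢ of Formula 4.781.

```python
# Dense versus sparse accumulation of A d = 2 * sum_i (grad_i grad_i^T) d,
# where each grad_i has at most a few nonzero entries. Illustrative only.
def dense_product(grads, d):
    n = len(d)
    out = [0.0] * n
    for g in grads:                      # g is a dense length-n gradient
        gd = sum(g[k] * d[k] for k in range(n))        # (m+n) mults per term
        for j in range(n):
            out[j] += 2.0 * g[j] * gd
    return out

def sparse_product(sparse_grads, d, n):
    out = [0.0] * n
    for g in sparse_grads:               # g maps index -> nonzero entry
        gd = sum(val * d[k] for k, val in g.items())   # O(1) mults per term
        for j, val in g.items():
            out[j] += 2.0 * val * gd
    return out

n = 6
sparse_grads = [{0: 1.0, 2: -2.0, 4: 0.5}, {1: 3.0, 2: 1.0}]
dense_grads = [[g.get(k, 0.0) for k in range(n)] for g in sparse_grads]
d = [1.0, -1.0, 2.0, 0.0, 4.0, 0.5]
print(dense_product(dense_grads, d) == sparse_product(sparse_grads, d, n))  # True
```

The two products agree exactly, but the sparse version touches only the nonzero indices of each residual gradient, which is the source of the (m + n)² to (m + n) reduction claimed above.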
4.7 Convergence of the Trust-Region Newton-CG Method

The sequence xⁱ built by a Trust-Region method satisfies

||∇f(xⁱ)|| → 0    (4.791)

as i → ∞ if the following conditions hold:

• At each step i, the point pⁱ built by the minimization algorithm satisfies

Fᵢ(0) − Fᵢ(pⁱ) ≥ c₁ ( Fᵢ(0) − Fᵢ((p_C)ⁱ) )    (4.792)

for some constant c₁ ∈ (0, 1], where (p_C)ⁱ is the Cauchy point defined in Section 4.7.1.

• The sequence pⁱ built by the minimization algorithm satisfies

||pⁱ|| ≤ c₂ Δᵢ    (4.793)

for some constant c₂ ≥ 1.

• The threshold for moving the center of the trust region is contained in (0, ¼).

• f is bounded from below on the level set L = { x ∈ R^{m+n} : f(x) ≤ f(x⁰) }, i.e., for some constant c₃,

f(x) ≥ c₃    (4.794)

for all x such that f(x) ≤ f(x⁰).

• L is bounded, i.e.,

||x|| ≤ c₄ for all x ∈ L    (4.795)

for some constant c₄ ∈ R.

• There exists some constant c₅ ∈ R such that

||∇²f(xⁱ)|| ≤ c₅    (4.796)

for all i.

• f is Lipschitz continuously differentiable on L, i.e., there exists some constant c₆ ∈ R such that for any x, y ∈ L

||∇f(x) − ∇f(y)|| ≤ c₆ ||x − y||.    (4.797)

Each of the above claims is proved below.

4.7.1 Model Estimate
SBS uses the CG-Steihaug method at each step i to find a point

pⁱ = lim_{j→∞} p^{i,j}    (4.798)

that corresponds to a minimum of the model Fᵢ in the trust region. At step i, a Cauchy point (p_C)ⁱ is defined to be

(p_C)ⁱ = −τⁱ ( Δᵢ / ||∇f(xⁱ)|| ) ∇f(xⁱ)    (4.80)

where

τⁱ = 1 if (∇f(xⁱ))ᵀ ∇²f(xⁱ) ∇f(xⁱ) ≤ 0;
τⁱ = min( ||∇f(xⁱ)||³ / ( Δᵢ (∇f(xⁱ))ᵀ ∇²f(xⁱ) ∇f(xⁱ) ), 1 ) otherwise.    (4.81)

It follows that

Fᵢ(0) − Fᵢ((p_C)ⁱ) ≥ ½ ||∇f(xⁱ)|| min( Δᵢ, ||∇f(xⁱ)|| / ||∇²f(xⁱ)|| ).    (4.82)

The CG-Steihaug method gives

Fᵢ(pⁱ) ≤ Fᵢ((p_C)ⁱ).    (4.83)

Thus,

Fᵢ(0) − Fᵢ(pⁱ) ≥ ½ ||∇f(xⁱ)|| min( Δᵢ, ||∇f(xⁱ)|| / ||∇²f(xⁱ)|| ),    (4.84)

so the first condition is satisfied with c₁ = 1.
4.7.2 Bound on Model Minimum

||pⁱ|| ≤ Δᵢ, since for any j, if the p^{i,j} computed by the CG-Steihaug algorithm falls outside of the trust region, then it is replaced with a vector inside the region and the algorithm returns that value.
4.7.3 Trust Region Center Move Threshold Range

Enforcing a range on the threshold to move the center of the trust region is a matter of setting it properly. In the pseudo code, the move threshold is set to 0.1 ∈ (0, ¼).
4.7.4 Bound Below of f on the Level Set

Recall that

f(x) = Σ_{i=0}^{t−1} ( |I_{H_x}(qᵢ) − Î(qᵢ)|² + λ C_{H_x}(qᵢ) ).    (4.841)

By definition,

C_{H_x}(qᵢ) ≥ 0    (4.842)

for each qᵢ, and by choice λ > 0. Therefore f(x) ≥ 0 for all x ∈ R^{m+n}, and thus also for x such that f(x) ≤ f(x⁰).
4.7.5 Bound on the Level Set

Let h be the portion of f consisting of the λ-weighted curvature terms, written as a function of the increment vector x, and let v collect the part of h that is quadratic in x, where δ₁ = (1, 0). Then h(x) ≤ f(x) for all x, so if there exists a c₄ > 0 such that ||x|| = 1 and t > c₄ implies h(tx) > f(x⁰), then ||x|| = 1 and t > c₄ would imply f(tx) > f(x⁰), meaning that f(y) ≤ f(x⁰) would imply ||y|| ≤ c₄, as desired.

For any scalar s > 0 and for each j,

(∂h/∂x_j)(sx) = s ( k_j + (∂v/∂x_j)(x) )    (4.89)

where k_j is constant with respect to s, which implies

∇h(sx) = s ( k + ∇v(x) )    (4.891)

where k is constant with respect to s. For any scalar t > 0, let h_x(t) = h(tx). Then

h_x(t) = ∫₀ᵗ (d/ds) h(sx) ds = ∫₀ᵗ ∇h(sx) · x ds = ∫₀ᵗ s ( (k + ∇v(x)) · x ) ds = (t²/2) ( (k + ∇v(x)) · x )    (4.90)

h is a sum of squares, so h ≥ 0. Given increment vector tx, h(tx) = 0 only if all of the discrete second derivatives induced by tx vanish.    (4.901)

This would imply that tx is the zero vector, because increment vectors are required to be 0 on the boundary of the projected surface: any line of pixels through the projected surface in the u-direction would have to be 0 on the boundary and could not vary from 0 once its second differences all vanish.    (4.902)

Since ||x|| = 1 and t > 0, then h_x(t) > 0, which implies

( (k + ∇v(x)) · x ) > 0.    (4.903)

Let

κ = min_{||x||=1} ( (k + ∇v(x)) · x )    (4.904)

and

c₄ = √( 2 f(x⁰) / κ ).    (4.905)

Then for any t > c₄ and ||x|| = 1,

h_x(t) = (t²/2) ( (k + ∇v(x)) · x ) ≥ (t²/2) κ > (c₄²/2) κ = f(x⁰),    (4.91)

which implies

h(tx) > f(x⁰).    (4.92)
4.7.6 Bound on Hessian

In the Trust-Region Newton-CG method there are two possibilities for x^(i+1). Either x^(i+1) = x^i, in which case f(x^(i+1)) = f(x^i), or x^(i+1) = x^i + p^i, where p^i is the point resulting from the CG-Steihaug method. The latter case is only permitted if the model test value ρ_i = [f(x^i) − f(x^i + p^i)] / [F(0) − F(p^i)] is greater than the threshold to move the center of the trust region. The threshold is chosen in (0, ¼), so if the center of the trust region is moved then ρ_i > 0. Since the CG-Steihaug method minimizes the model in the trust region, then

F(0) ≥ F(p^i) (4.922)

is guaranteed, which implies that the denominator of ρ_i is nonnegative. If ρ_i > 0, then

f(x^i) − f(x^i + p^i) > 0. (4.923)

Thus

f(x⁰) ≥ f(x^i) (4.924)

for all i ≥ 0, which means that L includes all x^i. If the claim is proved on L, then it will also be proved for all x^i. f and all of its partial derivatives are continuous. On the compact set [0, c_L] (where c_L is the bound on L calculated above), f and all of its derivatives are bounded. For each i and j there exist constants k_ij such that

|∂_i ∂_j f(x)| ≤ k_ij (4.925)

for all x such that ‖x‖ ∈ [0, c_L], and thus

[equation image imgf000052_0001 not legible in the source]
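The use of this bound can be summarized by a standard mean-value argument (a supplementary restatement, not verbatim from the source; the constant M stands for a bound implied by the constants k_ij above and is not a symbol used in the original):

```latex
% Bounded Hessian on L implies a Lipschitz gradient on L:
\[
\|\nabla^2 f(x)\| \le M \quad \text{for all } x \in L
\;\Longrightarrow\;
\|\nabla f(x) - \nabla f(y)\|
  = \Big\| \int_0^1 \nabla^2 f\bigl(y + t(x-y)\bigr)\,(x-y)\,dt \Big\|
  \le M \,\|x-y\|
\]
for all $x, y \in L$ whose connecting segment lies in $L$.
```

This is exactly the property that Section 4.7.7 then establishes coordinate-wise.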
4.7.7 Lipschitz Continuous Differentiability of f

Let L be as in the previous section, and let x, y ∈ L. Fix i and let

x − y = ‖x − y‖ u, (4.931)

v(t) = y + tu, (4.932)

and

h(t) = ∂_i f(v(t)). (4.933)

Then

[equation image imgf000053_0001 not legible in the source]

This shows that

∂_i f(x) (4.941)

is Lipschitz continuous for each i, which means

‖∇f(x) − ∇f(y)‖ ≤ Σ_i |∂_i f(x) − ∂_i f(y)| ≤ (Σ_i Σ_j k_ij) ‖x − y‖. (4.95)
4.8 Notation

FIGS. 16A and 16B set forth tables 500a and 500b providing, for convenient reference, a listing of mathematical notation used in describing systems and techniques according to aspects of the present invention.

4.9 Flowcharts of Generalized Methods
FIGS. 17-22 show a series of flowcharts illustrating a generalized method 600 and sub-methods 620, 640, 660, 680, and 700 according to the above-discussed aspects of the invention for generating a geometrical model representing geometry of at least a portion of a surface of a three-dimensional (3D) object by shading by an operator in connection with a two-dimensional (2D) image of the object, the image representing the object as projected onto an image plane.
The generalized method 600 shown in FIG. 17 comprises the following steps:
Step 601 : Receiving shading information provided by the operator in connection with the image of the object, the shading information representing a change in brightness level of at least a portion of the image.
Step 602: Generating, in response to the shading information, an updated geometrical model of the object, the shading information being used to determine at least one geometrical feature of the updated geometrical model.

Step 603: Displaying the image of the object as defined by the updated geometrical model.
As discussed above, the generalized method 600 can operate upon a digital input of any hierarchical subdivision surface, polygon mesh or NURBS surface.
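The receive-update-display cycle of steps 601-603 can be sketched in outline as follows. This is an illustrative sketch only, not the patented implementation; the class name `ShadingModeler`, its methods, and the representation of the model as a pixel-indexed height map are all assumptions made for the example.

```python
# Hypothetical sketch of generalized method 600 (steps 601-603).

class ShadingModeler:
    def __init__(self, model):
        self.model = model        # geometrical model, here a {(u, v): height} map
        self.shading = {}

    def receive_shading(self, pixels):
        """Step 601: shading info = per-pixel brightness changes from the operator."""
        self.shading = dict(pixels)

    def update_model(self):
        """Step 602: use the brightness changes to determine geometrical features.

        Toy rule only: a brightness increase is treated as a height increase.
        The actual mapping in the method depends on the light direction and
        the minimization described in sub-method 680.
        """
        for (u, v), delta in self.shading.items():
            self.model[(u, v)] = self.model.get((u, v), 0.0) + delta
        return self.model

    def display(self):
        """Step 603: hand the updated model to the renderer (stubbed here)."""
        return sorted(self.model.items())
```

A single pass of the cycle would call the three methods in order, mirroring the flowchart of FIG. 17.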
Generalized method 600 may include sub-method 620 shown in FIG. 18, comprising the following steps:
Step 621: Once a subdivision surface has been generated and displayed to a user, matching the subdivision surface to a 2D model view, the 2D model view including information about grid corners, grid width and height, pixel size and camera-to-object transformation.
Step 622: Utilizing the 2D model view to set a lighting direction, tune input parameters and shade, thereby modifying the intensities of selected pixels; or to load a set of pre-shaded pixels. This information is then utilized by a shaping algorithm. The parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
Step 623: Determining the correct geometric alterations to make to the surface, adding surface primitives where needed via subdivision in the area of the shading, in order to ensure that sufficient detail is present; and determining a height field that reflects in 3D the changes that were requested in the 2D setting, altering the subdivision surface to reflect the determined height values, thereby resulting in a shaped, hierarchical subdivision surface that can be altered further, saved, or converted to a desired output surface type. The surface primitives can include any of triangles, quadrilaterals, or other polygons.
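A minimal sketch of the height-field application at the end of step 623: vertices that project into the shaded region are displaced by the determined height values. The function name, argument layout, and the along-the-normal displacement rule are assumptions made for illustration; in the patented method the heights come from the minimization described in sub-method 680.

```python
# Hypothetical sketch: apply a determined per-pixel height field to a surface.

def apply_height_field(vertices, normals, pixel_of, height):
    """Displace each vertex along its normal by the height of its pixel.

    vertices, normals : lists of (x, y, z) tuples
    pixel_of          : maps vertex index -> (u, v) pixel, or None if the
                        vertex falls outside the projected surface
    height            : dict {(u, v): h} of determined height values
    """
    shaped = []
    for i, (v, n) in enumerate(zip(vertices, normals)):
        px = pixel_of(i)
        h = height.get(px, 0.0) if px is not None else 0.0
        shaped.append(tuple(c + h * nc for c, nc in zip(v, n)))
    return shaped
```

Vertices whose pixel carries no height entry are left unchanged, which matches the idea that only the shaded neighborhood is altered.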
Generalized method 600 may also include sub-method 640 shown in FIG. 19, comprising the following steps:

Step 641: Creating an underlying subdivision surface.
Step 642: Displaying a 2D shade view.
Step 643: Enabling a user to set lighting, shading, and tone parameters.
Step 644: Executing a shaping process comprising (a) introducing detail on the surface; (b) determining new height parameters for the surface; and (c) shaping the subdivision surface, thereby generating a 3D subdivision surface. The parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
Generalized method 600 may also include sub-method 660 shown in FIG. 20, comprising the following steps:

Step 661: Receiving an input comprising either a mesh representation or a NURBS surface.
Step 662: Converting the input to a hierarchical subdivision surface if it is not already one.
Step 663: Performing shading and shaping on the hierarchical subdivision surface.

Step 664: Utilizing adaptive subdivision to add detail to the surface, and analysis and synthesis to propagate changes to all levels of the surface, thereby allowing for modifications at selected levels of detail.
Step 665: Providing a hierarchical subdivision surface library.
Step 666: Converting the subdivision surface model resulting from the SBS process to another surface type, if desired.
Generalized method 600 may also include sub-method 680 shown in FIG. 21, comprising the following steps:
Step 681: Applying a selected shaping operation, the selected shaping operation being configured to attempt to produce a set of height increments Ĥ over the model view that minimizes the function given by:

½ Σ_{(u,v)} [N_Ĥ(u,v) · ℓ − I(u,v)]² + λ C_Ĥ

where I is the discrete intensity at the pixel (u, v), ℓ is a unit vector that points in the direction of the infinitely distant simulated light source, and Ĥ is the associated height field; C_Ĥ denotes the curvature of a surface with associated height field Ĥ, λ is a smoothing coefficient, and the sum is performed over pixels in the model view that intersect the interior of the projected surface.

Step 682: Reducing the function to the unconstrained minimization of:
f(x) = ½ Σ_{(u,v)} [N_{H(x)}(u,v) · ℓ − I(u,v)]² + λ C_{H(x)}
wherein the method used to perform the minimization is a trust-region method.
Step 683: Performing a further reduction from summing over all the pixels in the model view that intersect the interior of the projected surface to summing only over that set reduced by intersecting it with the neighborhood of modified pixels, such that the calculation need not be made over the entire projected surface as seen in the model view, the reduced set being referred to as Q.
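The reduced objective of steps 681-683 can be sketched numerically as follows, summing only over the reduced pixel set Q. This is a hedged sketch rather than the patented implementation: the surface normals N_H are taken from finite differences of the height field, and the curvature term C_H is approximated by a squared Laplacian, since the excerpt does not spell out its exact form.

```python
import numpy as np

def surface_normals(H):
    """Unit normals of a height field z = H(v, u), via finite differences."""
    hy, hx = np.gradient(H)                       # slopes along rows and columns
    n = np.dstack([-hx, -hy, np.ones_like(H)])    # un-normalized normals
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def shading_objective(H, I, light, Q, lam=0.1):
    """f = ½ Σ_{(u,v)∈Q} [N_H(u,v)·ℓ − I(u,v)]² + λ·(smoothing term over Q).

    The λ-term here is a squared-Laplacian stand-in for the curvature C_H.
    """
    shade = surface_normals(H) @ light            # N·ℓ at every pixel
    data = 0.5 * sum((shade[u, v] - I[u, v]) ** 2 for (u, v) in Q)
    d2y = np.gradient(np.gradient(H, axis=0), axis=0)
    d2x = np.gradient(np.gradient(H, axis=1), axis=1)
    smooth = lam * sum((d2x[u, v] + d2y[u, v]) ** 2 for (u, v) in Q)
    return data + smooth
```

For a flat height field lit head-on, the shade N·ℓ is 1 everywhere, so an intensity image of ones gives an objective of zero; any mismatch over Q makes it positive.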
Generalized method 600 may also include the following sub-method 700 shown in FIG. 22, comprising the following steps:
Step 701: Modeling the function by the quadratic function:

F(x) = f(x₀) + ∇f(x₀)ᵀx + ½ xᵀ∇²f(x₀)x

where ∇f is the gradient vector of f and ∇²f is the Hessian matrix of f.
Step 702: Minimizing the model F in a selected region. In the present example, the selected region is ‖x‖ ≤ Δ, for some Δ > 0.
Step 703: Implementing minimization utilizing the CG-Steihaug method with a special sparse matrix multiplication.
Step 704: Constructing a test value from the resulting minimum point x₁:

ρ = [f(x_k) − f(x_k + x₁)] / [F(0) − F(x₁)]

wherein if ρ is close to 1, then F is considered a good model for f within the trust region, the center of the trust region is moved to x₁ and the trust region radius is increased; or, if ρ is far away from 1, then the radius of the trust region is decreased.
Step 705: Repeating the process until a minimum for f in the trust region is found based on an established criterion. In the present example, the criterion to stop the process is that ‖∇f‖ is sufficiently small at the center of the current trust region, at which point a local minimum of f has been attained.
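Steps 701-705 amount to a standard trust-region Newton-CG loop. The sketch below is a generic textbook implementation consistent with those steps (quadratic model, CG-Steihaug inner solve, ρ test, radius update, gradient-norm stopping rule), not the patented code: the thresholds 0.25/0.75 and the radius cap are conventional choices, and the special sparse matrix multiplication of step 703 is replaced by an ordinary dense Hessian-vector product.

```python
import numpy as np

def _to_boundary(p, d, delta):
    """Return p + tau*d with tau >= 0 chosen so that ||p + tau*d|| = delta."""
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta ** 2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p + tau * d

def cg_steihaug(g, hess_vec, delta, tol=1e-8, max_iter=50):
    """Approximately minimize F(p) = g·p + ½ pᵀBp subject to ||p|| <= delta."""
    p = np.zeros_like(g)
    r, d = g.copy(), -g.copy()
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = hess_vec(d)
        dBd = d @ Bd
        if dBd <= 0:                                  # negative curvature
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:           # stepped out of the region
            return _to_boundary(p, d, delta)
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def trust_region_newton_cg(f, grad, hess, x0, delta=1.0, eta=0.15,
                           gtol=1e-6, max_iter=100):
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < gtol:                  # step 705 stopping rule
            break
        H = hess(x)
        p = cg_steihaug(g, lambda v: H @ v, delta)    # step 703 inner solve
        pred = -(g @ p + 0.5 * (p @ (H @ p)))         # F(0) - F(p)
        actual = f(x) - f(x + p)
        rho = actual / pred if pred > 1e-16 else 0.0  # step 704 test value
        if rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, 100.0)             # good model: grow region
        elif rho < 0.25:
            delta *= 0.25                             # poor model: shrink region
        if rho > eta:                                 # move the region center
            x = x + p
    return x
```

On a strictly convex quadratic the loop reduces to a single Newton step, since the model F then matches f exactly and ρ = 1.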
It should be noted that the above-described generalized method and sub-methods may be implemented as a computer software plug-in product adapted for interoperability with any of a computer-assisted design (CAD) system, a computer graphics system or a software application operable to create, display, manipulate or model geometry. The plug-in product features may include any of: a shading tool with a 2D paint function and the ability to load and save shadings; light controls; parameter tuning; updating of surface shape based on shading information, light direction and input parameters; an undo/redo function internal to the modifier; a tool for selecting an area to be updated, utilizing a masking technique; and a selection tool with a set of standard subdivision surface manipulations; wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
In addition, as described above, the generalized method and sub-methods may include the ability to run the SBS process on polygon meshes or NURBS surfaces without first converting to or using properties of subdivision surfaces. The method and sub-methods may further include the ability to display large, complex meshes at an interactive rate, the ability to trim arbitrarily across surface faces, and/or the ability to sketch contour lines to produce an initial 3D shape, useable in conjunction with the SBS modeling process.
While the foregoing description includes details which will enable those skilled in the art to practice the invention, it should be recognized that the description is illustrative in nature and that many modifications and variations thereof will be apparent to those skilled in the art having the benefit of these teachings. It is accordingly intended that the invention herein be defined solely by the claims appended hereto and that the claims be interpreted as broadly as permitted by the prior art.

Claims

We claim:
1. A computer-implemented graphics method for generating a geometrical model representing geometry of at least a portion of a surface of a three-dimensional (3D) object by shading by an operator in connection with a two-dimensional (2D) image of the object, the image representing the object as projected onto an image plane, the method comprising:
A. receiving shading information provided by the operator in connection with the image of the object, the shading information representing a change in brightness level of at least a portion of the image;
B. generating, in response to the shading information, an updated geometrical model of the object, the shading information being used to determine at least one geometrical feature of the updated geometrical model;
C. displaying the image of the object as defined by the updated geometrical model; and
D. wherein the method can operate upon a digital input of any hierarchical subdivision surface, polygon mesh or NURBS surface.
2. The method of claim 1 wherein, once a subdivision surface has been generated and displayed to a user, it is matched to a 2D model view.
3. The method of claim 2 wherein the 2D model view includes information about grid corners, grid width and height, pixel size and camera-to-object transformation.
4. The method of claim 3 wherein, utilizing the 2D model view, the user sets a lighting direction, tunes input parameters and shades, thereby modifying the intensities of selected pixels, or loads a set of pre-shaded pixels, and this information is then utilized by a shaping algorithm; and wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
5. The method of claim 4 wherein: the shaping algorithm determines the correct geometric alterations to make to the surface; additional surface primitives are added where needed via subdivision in the area of the shading, in order to ensure that sufficient detail is present; and wherein a height field is determined that reflects in 3D the changes that were requested in the 2D setting, and the subdivision surface is then altered so as to reflect the determined height values; thereby resulting in a shaped, hierarchical subdivision surface that can be altered further, saved, or converted to a desired output surface type; and wherein surface primitives can include any of triangles, quadrilaterals, or other polygons.

6. The method of claim 1 further comprising: creating an underlying subdivision surface; displaying a 2D shade view; enabling a user to set lighting, shading, and tone parameters; and executing a shaping process, the shaping process comprising introducing detail on the surface, determining new height parameters for the surface, and shaping the subdivision surface; wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
7. The method of claim 6 further comprising: receiving an input comprising either a mesh representation or a NURBS surface; converting the input to a hierarchical subdivision surface if it is not already one; and performing shading and shaping on the hierarchical subdivision surface.
8. The method of claim 7 further comprising: utilizing adaptive subdivision to add detail to the surface, and analysis and synthesis to propagate changes to all levels of the surface, thereby allowing for modifications at selected levels of detail.
9. The method of claim 8 further comprising: providing a hierarchical subdivision surface library.
10. The method of claim 9 further comprising: converting the subdivision surface model resulting from the SBS process to another surface type, if desired.
11. The method of claim 6 further comprising: applying a selected shaping operation, the selected shaping operation being configured to attempt to produce a set of height increments Ĥ over the model view that minimizes the function given by:

½ Σ_{(u,v)} [N_Ĥ(u,v) · ℓ − I(u,v)]² + λ C_Ĥ

where I is the discrete intensity at the pixel (u, v), ℓ is a unit vector that points in the direction of the infinitely distant simulated light source, and Ĥ is the associated height field; C_Ĥ denotes the curvature of a surface with associated height field Ĥ, λ is a smoothing coefficient, and the sum is performed over pixels in the model view that intersect the interior of the projected surface.
12. The method of claim 11 further wherein the set of height increments Ĥ can be reduced to a vector x containing one entry for each pixel or connected area whose corresponding height value(s) may be altered by the shaping algorithm, such that the function is reduced to the unconstrained minimization of

f(x) = ½ Σ_{(u,v)} [N_{H(x)}(u,v) · ℓ − I(u,v)]² + λ C_{H(x)}

and further wherein the method used to perform the minimization is a Trust-Region method.
13. The method of claim 12 comprising a further reduction from summing over all the pixels in the model view that intersect the interior of the projected surface to summing only over that set reduced by intersecting it with the neighborhood of modified pixels, such that the calculation need not be made over the entire projected surface as seen in the model view, the reduced set being referred to as Q.
14. The method of claim 12 wherein the Trust-Region method comprises first modeling the function by the quadratic function:

F(x) = f(x₀) + ∇f(x₀)ᵀx + ½ xᵀ∇²f(x₀)x

where ∇f is the gradient vector of f and ∇²f is the Hessian matrix of f; and then minimizing the model F in the region ‖x‖ ≤ Δ, for some Δ > 0.
15. The method of claim 14 further wherein: the method utilized to implement the minimization is the CG-Steihaug method with a special sparse matrix multiplication; a test value is constructed from the resulting minimum point x₁ and is:

ρ = [f(x_k) − f(x_k + x₁)] / [F(0) − F(x₁)]

and wherein if ρ is close to 1, then F is considered a good model for f within the trust region, the center of the trust region is moved to x₁ and the trust region radius is increased; or, if ρ is far away from 1, then the radius of the trust region is decreased; and the process is repeated until a minimum for f in the trust region is found; and wherein the criterion to stop the process is that ‖∇f‖ is sufficiently small at the center of the current trust region, at which point a local minimum of f has been attained.
16. The method of claim 15 further comprising implementing the method as a computer software plug-in product adapted for interoperability with any of a computer-assisted design (CAD) system, a computer graphics system or a software application operable to create, display, manipulate or model geometry.
17. The method of claim 16 further wherein the plug-in product features include any of: a shading tool with a 2D paint function and the ability to load and save shadings; light controls; parameter tuning; updating of surface shape based on shading information, light direction and input parameters; an undo/redo function internal to the modifier; a tool for selecting an area to be updated, utilizing a masking technique; and a selection tool with a set of standard subdivision surface manipulations; wherein the parameters can comprise any of (a) influence over how much subdivision occurs in the area of modification and (b) influence over how pronounced the geometrical modifications are.
18. The method of claim 16 further comprising the ability to run the SBS process on polygon meshes or NURBS surfaces without first converting to or using properties of subdivision surfaces.
19. The method of claim 18 further comprising the ability to display large, complex meshes at an interactive rate.

20. The method of claim 18 further comprising the ability to trim arbitrarily across surface faces.

21. The method of claim 18 further comprising the ability to sketch contour lines to produce an initial 3D shape, useable in conjunction with the SBS modeling process.
22. A computer graphics system for generating a geometrical model representing geometry of at least a portion of a surface of a three-dimensional object by shading by an operator in connection with a two-dimensional image of the object, the image representing the object as projected onto an image plane, the computer graphics system comprising:

A. an operator input device configured to receive shading information provided by the operator, the shading information representing a change in brightness level of at least a portion of the image;

B. a model generator configured to receive the shading information from the operator input device and to generate in response thereto an updated geometrical model of the object, the model generator being configured to use the shading information to determine at least one geometrical feature of the updated geometrical model; and

C. an object display configured to display the image of the object as defined by the updated geometrical model; and

D. wherein the system can accept any hierarchical subdivision surface, polygon mesh or NURBS surface.
23. A computer program product operable within a computer graphics system, the computer graphics system comprising a human-useable input device and a display device operable to generate a human-perceptible display, the computer program product comprising computer software code instructions executable by the computer graphics system and encoded on a computer readable medium, the computer program product being operable within the computer graphics system to generate a geometrical model representing geometry of at least a portion of a surface of a three-dimensional object by shading by an operator in connection with a two-dimensional image of the object, the image representing the object as projected onto an image plane, the computer program product comprising:

A. first computer software code means operable to receive shading information provided by an operator using an input device, the shading information representing a change in brightness level of at least a portion of the image;

B. model generator computer software code means operable to receive the shading information from the operator input device and to generate in response thereto an updated geometrical model of the object, the model generator computer software code means being operable to use the shading information to determine at least one geometrical feature of the updated geometrical model; and

C. object display computer software code means configured to enable the computer graphics system to display, on a display device, the image of the object as defined by the updated geometrical model; and

D. wherein the computer program product is operable to accept any hierarchical subdivision surface, polygon mesh or NURBS surface.
24. The method of claim 14 further comprising minimizing by using a Trust-Region Newton-CG method.
25. The method of claim 24 wherein residuals are utilized to obtain the function, its gradient and its Hessian.
26. The method of claim 14 wherein calculations are integrated with or into an API.
27. The method of claim 14 wherein calculations are integrated with or into a computer software application plug-in.
PCT/US2006/062405 2005-12-20 2006-12-20 Modeling the three-dimensional shape of an object by shading of a two-dimensional image WO2007079361A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU2006332582A AU2006332582A1 (en) 2005-12-20 2006-12-20 Modeling the three-dimensional shape of an object by shading of a two-dimensional image
EP06849019A EP1964065A2 (en) 2005-12-20 2006-12-20 Modeling the three-dimensional shape of an object by shading of a two-dimensional image
CA002633680A CA2633680A1 (en) 2005-12-20 2006-12-20 Modeling the three-dimensional shape of an object by shading of a two-dimensional image
JP2008547751A JP2009521062A (en) 2005-12-20 2006-12-20 Modeling the 3D shape of an object by shading 2D images

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US75223005P 2005-12-20 2005-12-20
US60/752,230 2005-12-20
US82346406P 2006-08-24 2006-08-24
US60/823,464 2006-08-24

Publications (2)

Publication Number Publication Date
WO2007079361A2 true WO2007079361A2 (en) 2007-07-12
WO2007079361A3 WO2007079361A3 (en) 2008-04-03

Family

ID=38228928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/062405 WO2007079361A2 (en) 2005-12-20 2006-12-20 Modeling the three-dimensional shape of an object by shading of a two-dimensional image

Country Status (5)

Country Link
EP (1) EP1964065A2 (en)
JP (1) JP2009521062A (en)
AU (1) AU2006332582A1 (en)
CA (1) CA2633680A1 (en)
WO (1) WO2007079361A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11679053B2 (en) 2017-02-17 2023-06-20 Medtec Llc Body part fixation device with pitch and/or roll adjustment
US11712580B2 (en) 2017-02-17 2023-08-01 Medtec Llc Body part fixation device with pitch and/or roll adjustment
EP3773220B1 (en) 2018-03-26 2023-10-25 Medtec Llc Easy on/easy off clips or clamps for mounting mask to body part fixation device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5255184A (en) * 1990-12-19 1993-10-19 Andersen Consulting Airline seat inventory control method and apparatus for computerized airline reservation systems
US6313840B1 (en) * 1997-04-18 2001-11-06 Adobe Systems Incorporated Smooth shading of objects on display devices
US6449560B1 (en) * 2000-04-19 2002-09-10 Schlumberger Technology Corporation Sonic well logging with multiwave processing utilizing a reduced propagator matrix
US6487322B1 (en) * 1999-03-03 2002-11-26 Autodesk Canada Inc. Generating image data
US20030156117A1 (en) * 2002-02-19 2003-08-21 Yuichi Higuchi Data structure for texture data, computer program product, and texture mapping method
US20040215429A1 (en) * 2001-04-30 2004-10-28 Nagabhushana Prabhu Optimization on nonlinear surfaces
US20050062739A1 (en) * 2003-09-17 2005-03-24 International Business Machines Corporation Method and structure for image-based object editing
US20050096525A1 (en) * 2003-10-02 2005-05-05 Kazunori Okada Volumetric characterization using covariance estimation from scale-space hessian matrices


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8984447B2 (en) 2011-03-11 2015-03-17 Oracle International Corporation Comprehensibility of flowcharts
CN114777680A (en) * 2017-10-06 2022-07-22 先进扫描仪公司 Generating one or more luminance edges to form a three-dimensional model of an object
US11433480B2 (en) 2018-03-23 2022-09-06 Lawrence Livermore National Security, Llc Additive manufacturing power map to mitigate overhang structure
WO2019183420A1 (en) * 2018-03-23 2019-09-26 Lawrence Livermore National Security, Llc Additive manufacturing power map to mitigate overhang structure
WO2020047307A1 (en) * 2018-08-30 2020-03-05 Houzz, Inc. Virtual item simulation using detected surfaces
US10909768B2 (en) 2018-08-30 2021-02-02 Houzz, Inc. Virtual item simulation using detected surfaces
CN110262865A (en) * 2019-06-14 2019-09-20 网易(杭州)网络有限公司 Construct method and device, the computer storage medium, electronic equipment of scene of game
WO2021009631A1 (en) * 2019-07-18 2021-01-21 Sony Corporation Shape-refinement of triangular three-dimensional mesh using a modified shape from shading (sfs) scheme
KR20210146353A (en) * 2019-07-18 2021-12-03 소니그룹주식회사 Shape-segmentation of triangular 3D mesh using modified shape from shading (SFS) method
CN113826148A (en) * 2019-07-18 2021-12-21 索尼集团公司 Shape refinement of triangular three-dimensional meshes using a modified shape-from-shadow (SFS) scheme
KR102487918B1 (en) 2019-07-18 2023-01-13 소니그룹주식회사 Shape-segmentation of a triangular 3D mesh using a modified shape from shading (SFS) approach
CN113409451A (en) * 2021-03-16 2021-09-17 浙江明度智控科技有限公司 Digital three-dimensional model construction method and system of production equipment and storage medium
CN113409451B (en) * 2021-03-16 2022-04-15 明度智云(浙江)科技有限公司 Digital three-dimensional model construction method and system of production equipment and storage medium

Also Published As

Publication number Publication date
AU2006332582A1 (en) 2007-07-12
CA2633680A1 (en) 2007-07-12
EP1964065A2 (en) 2008-09-03
WO2007079361A3 (en) 2008-04-03
JP2009521062A (en) 2009-05-28

Similar Documents

Publication Publication Date Title
EP1964065A2 (en) Modeling the three-dimensional shape of an object by shading of a two-dimensional image
US6037948A (en) Method, system, and computer program product for updating texture with overscan
US6483518B1 (en) Representing a color gamut with a hierarchical distance field
US6603484B1 (en) Sculpting objects using detail-directed hierarchical distance fields
Lindstrom et al. Image-driven simplification
US6437782B1 (en) Method for rendering shadows with blended transparency without producing visual artifacts in real time applications
US6396492B1 (en) Detail-directed hierarchical distance fields
US7170527B2 (en) Interactive horizon mapping
EP0637814B1 (en) Method and apparatus for performing dynamic texture mapping for complex surfaces
US7425954B2 (en) Systems and methods for providing signal-specialized parametrization
US5995110A (en) Method and system for the placement of texture on three-dimensional objects
US20070103466A1 (en) System and Computer-Implemented Method for Modeling the Three-Dimensional Shape of An Object by Shading of a Two-Dimensional Image of the Object
JP2002520749A (en) Method and system for generating a fully textured three-dimensional model
GB2479461A (en) Bounding of displaced parametric surfaces
CA2214433A1 (en) Computer graphics system for creating and enhancing texture maps
US6724383B1 (en) System and computer-implemented method for modeling the three-dimensional shape of an object by shading of a two-dimensional image of the object
JP2003256865A (en) Method and program for generating two-dimensional image with cartoon-like expression from stereoscopic object data
WO2008024869A2 (en) Computer graphics methods and systems for generating images with rounded corners
CA2282240C (en) System and computer-implemented method for modeling the three-dimensional shape of an object by shading of a two-dimensional image of the object
Ertl Computer graphics—principles and practice
Brill et al. Immersive surface interrogation
Teutsch et al. A hand-guided flexible laser-scanner for generating photorealistically textured 3D data
Parisy et al. Object-Oriented Reformulation and Extension of Implicit Free-Form Deformations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2633680

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2006849019

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2008547751

Country of ref document: JP

NENP Non-entry into the national phase in:

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2006332582

Country of ref document: AU

ENP Entry into the national phase in:

Ref document number: 2006332582

Country of ref document: AU

Date of ref document: 20061220

Kind code of ref document: A