WO2014092740A1 - Capture systems and methods for use in providing 3d models of objects - Google Patents

Capture systems and methods for use in providing 3d models of objects

Info

Publication number
WO2014092740A1
WO2014092740A1 (PCT/US2012/069959)
Authority
WO
WIPO (PCT)
Prior art keywords
model
capture
images
user
processor
Prior art date
Application number
PCT/US2012/069959
Other languages
French (fr)
Inventor
Daniel Lauer
Original Assignee
Daniel Lauer
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daniel Lauer filed Critical Daniel Lauer
Priority to PCT/US2012/069959 priority Critical patent/WO2014092740A1/en
Publication of WO2014092740A1 publication Critical patent/WO2014092740A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00795Reading arrangements
    • H04N1/00827Arrangements for reading an image from an unusual original, e.g. 3-dimensional objects

Definitions

  • the field of the disclosure relates generally to capture systems and methods and, more particularly, to providing an electronic three-dimensional (3D) model of an object, compiled from multiple two-dimensional (2D) images of the object.
  • Toy dolls for example, often emulate the human form to provide a child with a familiar form with which to play. Toy dolls are further offered in a variety of different sizes, clothing, races, and/or genders to aid the child in identifying with the toy doll and to capture the child's attention for extended periods of time.
  • FIG. 1 is a block diagram of an exemplary capture system, including a capture assembly.
  • FIG. 2 is a perspective view of an example capture assembly that may be used with the capture system of Fig. 1.
  • FIG. 3 is a plan view of another example capture assembly that may be used with the capture system of Fig. 1.
  • Fig. 4 is a block diagram of an exemplary method for use in providing an electronic 3D model of an object, which may be used to print a 3D replica of the object.
  • Figs. 5A-B illustrate a person and a replica of the head and/or face of the person.
  • Figs. 6A-C illustrate a replica mask of the face of a person, separate from and assembled onto a toy doll.
  • 3D models can form the basis for generating a 3D replica of the object, which may be used to personalize a variety of different products, such as, for example, toys, figurines, jewelry, ornaments, charms, coins, statues, video games, social network profiles, and movies.
  • Exemplary technical effects of systems and methods described herein include at least one of: (a) sequentially and/or concurrently capturing multiple 2D images of an object, a first image of the multiple 2D images captured from a first angle or perspective, a second image of the multiple 2D images captured from a second angle or perspective, the first and second perspectives being different, (b) compiling the multiple 2D images into a 3D model of the object, the 3D model representative of the full face of the object and more than about 90% likeness of the entire targeted object; and (c) printing a replica of the object based on the 3D model.
  • Fig. 1 illustrates an exemplary capture system 1.
  • capture system 1 includes a capture assembly 10 configured to capture multiple 2D images of an object and compile a 3D model of the object.
  • Capture assembly 10 includes a memory 12 and a processor 14 coupled to memory 12.
  • executable instructions are stored in memory 12 and executed by processor 14.
  • Capture assembly 10 is configurable to perform one or more operations described herein by programming and/or configuring processor 14.
  • processor 14 may be programmed or configured to perform one or more operations by encoding such operations as one or more executable instructions and providing the executable instructions in memory 12.
  • Memory 12 is one or more devices operable to enable information such as executable instructions and/or other data to be stored and/or retrieved.
  • Memory 12 may include one or more computer readable media, such as, without limitation, hard disk storage, optical drive/disk storage, removable disk storage, flash memory, non-volatile memory, ROM, EEPROM, random access memory (RAM), etc.
  • Memory 12 may be configured to store, without limitation, computer-executable instructions, 3D models, 2D images, scaling factors, product models, product information, purchase requests, and/or any other types of data referred to herein, expressly or inherently.
  • Memory 12 may be incorporated into and/or separate from processor 14, and/or accessible through one or more networks (e.g., Cloud storage).
  • Processor 14 may include one or more processing units (e.g., in a multi-core configuration).
  • the term processor refers to central processing units, microprocessors, microcontrollers, reduced instruction set circuits (RISC), application specific integrated circuits (ASIC), logic circuits, and any other circuit or processor capable of executing instructions to perform functions described herein.
  • processor 14 may include several processors included in different subassemblies, for example, of capture assembly 10.
  • Capture assembly 10 includes a communication interface 16 coupled to processor 14.
  • Communication interface 16 is configured to be coupled in communication with a network (e.g., a network 28) and/or one or more other devices, such as another capture assembly or a server.
  • Communication interface 16 may include, without limitation, a serial communication adapter, a wired network adapter, a wireless network adapter, a mobile adapter, a Bluetooth adapter, a Wi-Fi adapter, a ZigBee adapter, and/or any other device capable of communicating with one or more other devices, networks, etc.
  • communication interface 16 is configured to communicate with network 28, which may include, without limitation, Internet, Intranet, a local area network (LAN), a cellular network, a wide area network (WAN), or other suitable network.
  • capture assembly 10 includes an output device 18, such as a display device, for presenting data to user 22.
  • data may include, for example, a 3D model of an object, 2D images of the object, or other data related to one or more of the processes described herein.
  • Output device 18 may include, without limitation, a cathode ray tube (CRT), a liquid crystal display (LCD), a light-emitting diode (LED) display, a flash device, an organic LED (OLED) display, an "electronic ink" display, a 3D printer, and/or other device suitable to display information. Additionally, or alternatively, output device 18 may include an audio output device (e.g., an audio adapter and/or a speaker, etc.).
  • capture assembly 10 includes multiple input devices 20.
  • Input devices 20 may include, without limitation, buttons, knobs, keypads, pointing devices, barcode scanners, mice, image capture devices (e.g., a camera, cameras, or a scanner), card readers, touch-sensitive panels (e.g., a touch pad or a touchscreen), gyroscopes, position detectors, and/or audio inputs (e.g., a microphone).
  • multiple input devices 20a-b are illustrated as image capture devices, which are located and configured to capture multiple 2D images of an object from multiple different perspectives.
  • output device 18 further includes multiple lights (e.g., diffused strobe lights), one associated with each of the image capture devices 20, to provide efficient and/or favorable lighting conditions for image capture by image capture devices 20.
  • The number of output devices 18 and input devices 20 illustrated in Fig. 1 is for purposes of illustration only. Any number of output devices 18 and input devices 20, for various purposes, may be included in other capture assembly embodiments.
  • Fig. 2 illustrates an exemplary capture assembly 100, which includes four image capture devices 20a-d positioned to capture images of an object from four different perspectives and/or angles.
  • the object is the head and/or face region, including the side of the head, of a person, such as user 22. It should be appreciated that various different portions of a person and/or different persons or objects may be the object. In one or more examples, the object may be one or more portions of a person or animal, other than user 22.
  • Capture assembly 100 further includes a camera flash output device 18a-d associated with each image capture device 20a-d.
  • the multiple input devices (i.e., four image capture devices 20a-d) are positioned to capture a right-upper front image, a right-lower front image, a left-upper front image, and a left-lower front image of user 22. It should be appreciated that the number and/or position of image capture devices 20 may be different in other capture system embodiments. In one example, one or more image capture devices 20 may be attached to a mobile fixture (e.g., a rotating camera head or scanner), which is configured to rotate relative to an object to capture images of the object from multiple different perspectives.
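The four front-quadrant views described above can be sketched geometrically. The following Python snippet is an illustrative assumption, not part of the disclosure: the angles, radius, and coordinate convention are arbitrary choices used only to show how right-upper, right-lower, left-upper, and left-lower front camera positions might be laid out around an object at the origin.

```python
import math

def quadrant_camera_positions(radius=1.0, azimuth_deg=30.0, elevation_deg=20.0):
    """Place four hypothetical cameras in front of an object centered at the
    origin: right-upper, right-lower, left-upper, and left-lower front views.
    The angle and radius values are illustrative, not taken from the patent."""
    positions = {}
    for side, az_sign in (("right", +1), ("left", -1)):
        for tier, el_sign in (("upper", +1), ("lower", -1)):
            az = math.radians(az_sign * azimuth_deg)
            el = math.radians(el_sign * elevation_deg)
            # Spherical-to-Cartesian; +z points from the object toward the front.
            x = radius * math.cos(el) * math.sin(az)
            y = radius * math.sin(el)
            z = radius * math.cos(el) * math.cos(az)
            positions[f"{side}-{tier} front"] = (x, y, z)
    return positions
```

The left/right pairs are mirror images across the vertical midplane, which is what lets the compiled model cover both sides of the face.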
  • one or more image capture devices 20 may be attached to a stationary fixture (e.g., a pedestal), which remains stationary while an object is moved to provide different perspectives of the object to the stationary image capture devices.
  • the object is supported by, for example, a rotating fixture, such as a seat or platform.
  • the object may be moved by the fixture, for example, to enhance consistent location of the object relative to one or more image capture devices, as compared to the object (e.g., a person) moving itself.
  • Capture assembly 100 further includes output device 18e, which is a display device to display information, selections, directions, choices, and other information to user 22 or another person associated with user 22, such as a parent.
  • output device 18e is provided to direct the location of user 22, to ensure proper capture of the multiple 2D images.
  • output device 18e may display a "bull's eye" indicating the desired position of the user's head.
  • capture assembly 100 includes a module 102, which is provided to interface with user 22 and/or an operator to provide product information to user 22 and/or solicit selections from user 22, while user 22 or an agent for user 22 (e.g., a parent) is situated for image capture.
  • Module 102 is further provided to receive information from user 22, such as payment, contact, and/or shipping information.
  • module 102 includes processor 14 and memory 12, as well as input and output devices 20 and 18 for interacting with user 22.
  • module 102 is a workstation, tablet, touchscreen computer, or laptop computer coupled to image capture devices 20a-d and output devices 18a-e.
  • image capture devices 20 are positioned about a user 22 at substantially even intervals to capture images from multiple angles and/or perspectives of user 22.
  • Capture assembly 200 may, for example, be a kiosk positioned at a shopping mall, movie theatre, toy store, pet store, or other location generally accessible to the public and/or persons interested in utilizing capture assembly 200.
  • if image capture devices 20a and 20b are omitted from capture assembly 200, user 22 is situated on a rotating fixture 202, configured to rotate user 22 to present different perspectives of user 22 to image capture devices 20a and 20b, or software may be engaged to interpolate image capture of more than about 90% of the object to complete the object's full appearance.
  • image capture assemblies may be used to capture images of an object at various perspectives and/or angles.
  • the capture assemblies and methods contemplated herein therefore are not limited to any particular number and/or location of image capture devices.
  • capture assembly 200 includes multiple output devices 18f and 18g, which are display devices in this exemplary embodiment. Similar to output device 18e, output devices 18f and 18g are provided to display information, selections, choices, directions, and other information to user 22 or another person associated with user 22.
  • output device 18f is an LCD screen, which provides location commands to user 22 (e.g., a bull's eye, other position graphic, or instruction), to position user 22 properly for image capture.
  • output device 18g is an LCD screen mounted on an outside of the kiosk formed by capture assembly 200 and configured to continuously or intermittently display advertising content, such as by displaying images and product applications that entice and/or permit users 22 to buy at this location.
  • capture assembly 200 includes multiple user modules 204, which are provided to interface with user 22 to provide product information to user 22, solicit selections from user 22, and/or permit proofing of the 3D model and/or selected product, as explained in further detail below.
  • 2D images captured from image capture devices 20 are stored in memory 12, which may be local to the image capture devices 20 and/or disposed remotely therefrom, via one or more networks.
  • capture system 1 further includes a server 24 coupled to capture assembly 10, via network 28.
  • Capture assembly 10 is configured to provide multiple 2D images of an object and/or 3D models of the object compiled from multiple 2D images captured at capture assembly 10.
  • server 24 includes a processor 34 and a memory 32.
  • a database 26 is maintained within memory 32 and configured to store 2D images and/or 3D models received from capture assembly 10, such that the 2D images and/or 3D models may be retrieved for processing as described herein.
  • Memory 32 and processor 34 are consistent with the description of memory 12 and processor 14 provided above. While server 24 is illustrated as a single server in Fig. 1, server 24 may be comprised of several servers, located together or separately over a geographic region.
  • capture system 1 includes a 3D printer 30 configured to fabricate at least one replica of the object from the 3D model of the object.
  • 3D printer 30 may be coupled to any one or more of capture assembly 10, server 24, and network 28, to receive and/or retrieve the 3D model.
  • Capture systems and/or capture assemblies provided herein may be operated in accordance with several different methods.
  • Fig. 4 illustrates an exemplary method 300 for use in providing an electronic 3D model of an object, and printing a replica of the object from the 3D model.
  • the object is described herein as the head of user 22.
  • products offered through the capture system 1 are toy dolls, intended to incorporate the head of the user 22.
  • an object may be one or more parts of a person, an entire person, an animal, or another object for which an electronic 3D model and/or physical replica may be desired.
  • a variety of other products may be offered, which consist of and/or incorporate one or more replicas produced through use of a capture system, such as capture system 1.
  • processor 14 displays 302, at an output device, such as output device 18g of Fig. 3, advertisement content related to products available through capture system 1 (e.g., a doll having a replica head).
  • Advertising content may include, without limitation, product demonstrations, examples, descriptions, images, and/or pricing, as well as sales or special offers inviting the user to purchase a product.
  • the advertising content is generally provided to increase interest and/or knowledge about products available through capture system 1.
  • user 22 determines to purchase a product
  • user 22 is situated for image capture by one or more capture devices, such as image capture devices 20.
  • the capture assembly such as capture assembly 200, detects the presence of user 22, automatically through image capture devices 20 (or a sensor associated with fixture 202), or manually through an input from user 22 to an input device, such as a keyboard or touchscreen.
  • processor 14 causes the image capture devices to focus on the user 22 and capture 304 a plurality of 2D images of the head of user 22, i.e., the object.
  • processor 14 causes image capture devices 20a-d to capture 304 four images: a right-upper front image, a right-lower front image, a left-upper front image, and a left-lower front image.
  • capturing 304 the images includes controlling 306 one or more flash output devices, associated with image capture devices, to provide appropriate lighting to the object when the images are captured.
  • processor 14 controls flash output devices 18a-d of capture assembly 100 to provide appropriate lighting conditions to capture images of user 22, by image capture devices 20a-d.
  • processor 14 stores the 2D images in memory 12.
  • capturing 304 images may include providing 308 one or more location commands.
  • Processor 14 may provide the location commands to user 22 through an output device (e.g., output device 18f of Fig. 3).
  • Location commands may be displayed as a bull's eye, over an image of user 22, to direct user 22 to a desired position. Further, location commands may include, for example, rotating the fixture to rotate the object about 20°, about 45°, about 90°, about 180°, or another increment of rotation to properly position the object relative to one or more image capture devices.
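The rotation increments above imply a simple capture schedule for a fixture turning a full revolution. The sketch below is a hypothetical illustration in Python (the function and its interface are not from the patent): it enumerates the fixture headings at which a stationary camera would capture an image for a chosen increment.

```python
def rotation_schedule(increment_deg):
    """Return the fixture headings (in degrees) at which to pause for image
    capture, covering one full revolution in equal steps. Increments such as
    20, 45, 90, or 180 degrees match those mentioned in the text; the
    scheduling logic itself is an illustrative sketch."""
    if not 0 < increment_deg <= 360:
        raise ValueError("increment must be in (0, 360]")
    headings = []
    angle = 0.0
    while angle < 360.0:
        headings.append(angle)
        angle += increment_deg
    return headings
```

For example, a 90° increment yields four captures (front, right, back, left), while 45° yields eight overlapping perspectives for a denser reconstruction.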
  • processor 14 provides commands to an armature, supporting one or more image capture devices, to move the image capture devices relative to the stationary object. It should be appreciated that the object and/or image capture device(s) may be moved in a variety of directions to achieve one or more images of the object at desired angles and/or perspectives.
  • processor 14 may display, at a display device (e.g., display device 18f of Fig. 3), the images to user 22. In this manner, user 22 is able to select one or more of the 2D images to be compiled into the 3D model.
  • processor 14 automatically captures only images to be compiled into the 3D model, or automatically selects 2D images based on, for example, lighting, clarity, brightness, contrast, noise, distortions, or other factors potentially affecting compilation and/or quality of the 3D model.
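Automatic selection based on lighting, brightness, and contrast can be sketched with a crude scoring heuristic. The weighting below is an illustrative assumption, not a method disclosed in the patent; it simply prefers captures with mid-range brightness and higher contrast, using only the Python standard library on a toy grayscale representation.

```python
from statistics import mean, pstdev

def score_image(pixels):
    """Crude quality score for a grayscale image given as a list of rows of
    0-255 values: reward contrast, penalize distance from mid brightness.
    The specific weighting is an illustrative assumption."""
    flat = [p for row in pixels for p in row]
    brightness = mean(flat)        # ideally near 128 (mid-range exposure)
    contrast = pstdev(flat)        # higher spread suggests usable detail
    return contrast - abs(brightness - 128)

def select_images(candidates, keep):
    """Keep the `keep` best-scoring candidate captures."""
    ranked = sorted(candidates, key=score_image, reverse=True)
    return ranked[:keep]
```

A real system would also measure sharpness and detect distortions, but the selection step, ranking candidates and keeping the best, has the same shape.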
  • processor 14 automatically compiles 310 the plurality of 2D images into a 3D model of the head of user 22, through one or more stitching and/or meshing techniques. Because the 3D model is compiled from 2D images captured at the capture assembly, processor 14 is able to control the resolution, format, and other characteristics of the 2D images in native format, thereby reducing the potential for degradation of the compiled 3D model. In one or more embodiments, processor 14 uses one or more template portions, such as hair, back of the head, or other parts, to automatically fill in portions of the 3D model, where the 2D image data is unavailable. In various embodiments, processor 14 is configured to compile the electronic 3D model in about 1 minute or less, and in less than about 45 seconds in other embodiments.
  • While processor 14 is compiling the 3D model of the head of the user, processor 14 directs user 22 to one or more modules, such as module 204 of Fig. 3, to view the 3D model and/or select a product for purchase.
  • processor 14 displays 312, at output device 18 (e.g., output device 18 of module 102 of Fig. 2), at least one product including the 3D model of the head of user 22.
  • One or more toy dolls, for example, may be displayed including the head of user 22, to provide an accurate depiction of the product to be purchased.
  • user 22 may be able to scroll, search or browse through the products, with the 3D model applied in the browser or upon selection of a particular product.
  • processor 14 scales 314 the 3D model of the object depending on the size of the product selected by user 22. More specifically, processor 14 scales 314 the 3D model, for example, differently for an 11-inch fashion doll than for a 15-inch robot.
  • the scaling of the 3D model may be indicated by a scaling factor stored as part of a product template in memory 12. In some embodiments, scaling 314 by processor 14 may be omitted.
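The scaling step driven by a product-template scaling factor can be sketched directly. The template values and field names below are illustrative assumptions (the patent does not give sizes); the sketch derives a uniform scale factor from the target head height of the selected product and applies it to every vertex.

```python
def scale_model(vertices, model_height_mm, product_template):
    """Uniformly scale 3D-model vertices so the replica head matches the head
    size specified by the selected product's template. Template contents are
    illustrative assumptions, not values from the patent."""
    factor = product_template["head_height_mm"] / model_height_mm
    return [(x * factor, y * factor, z * factor) for (x, y, z) in vertices]

# Hypothetical product templates, each storing the scaling target for its product.
TEMPLATES = {
    "11in_fashion_doll": {"head_height_mm": 35.0},
    "15in_robot": {"head_height_mm": 50.0},
}
```

Storing the factor (or the target dimension it is derived from) in the product template, as the text describes, keeps the compiled model itself size-agnostic.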
  • the 3D models are originally compiled by processor 14, according to a size consistent with some, all or a majority of the products offered through the kiosk.
  • a product template may include color and/or shading instructions to adapt coloring and/or shading of the 3D model (or 2D images) to more closely match other portions of the product (e.g., skin color).
  • user 22 may be able to select one or more options for the product, such as clothing, shoes, size, or other features.
  • user 22 is able to select a portion of the 3D model provided from the 2D images.
  • processor 14 compiles the face of a head from the captured 2D images, while relying on user selections for hair color, style, or length.
  • processor 14 permits user 22 to select aspects of the object not included in the 2D images, such as the back of a head. Accordingly, in displaying the product, processor 14 receives 316 one or more selections of product options, which are appended to the product and displayed 312 to user 22.
  • User 22 selects the product through an input device of a module (e.g., input device 20 of module 102 of Fig. 2).
  • User 22 may further provide, through the input device, other information, such as payment, contact, and/or shipping information.
  • processor 14 receives 318 the selection of the product and generates 320 a purchase request, which is transmitted 322, by processor 14, to a central server, such as server 24.
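The purchase request generated and transmitted to the central server can be sketched as a structured payload. The patent does not specify a wire format; the JSON fields below are illustrative assumptions showing the kind of information (product, model reference, options, shipping) such a request would carry.

```python
import json

def build_purchase_request(user_id, product_id, model_id, options, shipping):
    """Assemble a purchase request as a JSON document for transmission to a
    central server. All field names are illustrative assumptions."""
    request = {
        "user_id": user_id,
        "product_id": product_id,
        "model_id": model_id,   # reference to the 3D model stored server-side
        "options": options,     # e.g., clothing, hair color selections
        "shipping": shipping,
    }
    return json.dumps(request)
```

Referencing the stored 3D model by identifier, rather than embedding it, matches the text's arrangement in which the model and/or 2D images are transmitted to the server separately.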
  • processor 14 transmits the 3D model and/or 2D images to a central server, such as server 24.
  • the 3D model is compiled at server 24, and not at capture assembly 10.
  • the 3D models are stored in database 26.
  • database 26 provides a 3D model library.
  • the 3D model library may be made accessible to users 22 or third-parties according to access criteria determined by an operator of server 24 and/or capture assembly 10.
  • the 3D model library may be accessed, by user 22, through one or more other computing devices (e.g., a smartphone, laptop, computer, workstation) to manage and/or append the 3D model to a wide variety of products across existing brands and/or form factors, to be used, for example, in dolls, toys, social networking video games, jewelry, movies, books, and other items/media in which a personal 3D model may provide enhanced entertainment.
  • user 22 may be able to download the 3D model from server 24, and maintain the 3D model for various purposes.
  • user 22 may be able to individually grant access to his/her 3D model to one or more third-parties (e.g., friends or a social networking website).
  • access to the 3D model library permits user 22 to incorporate the 3D model into a variety of products, whether offered through capture system 1, or separately by a third-party. More specifically, in such embodiments, user 22 is provided access to the 3D model of an object, such as the user's head or face, which can be used to personalize any number of products or services.
  • When the purchase request is received 324 by the central server, such as server 24, processor 34 stores the purchase request in memory 32. Based on the purchase request and the 3D model, server 24 assembles 326 a print package for a replica of the 3D model to be transmitted to a 3D printer, such as 3D printer 30 shown in Fig. 1. Assembling 326 the print package may include forming 328 a print plan for the 3D model.
  • the 3D model is printed as a replica 400 having a substantially solid mass, as shown in Fig. 5. In another example, the 3D model is printed as a replica formed from a substantially hollow shell.
  • the replica is printed as a portion of the 3D model, such as the replica "mask" 402 shown in Fig. 6.
  • the type of print for the 3D model may depend on the type, size, and/or configuration of the product, and other various factors related to, for example, print efficiencies.
  • processing by server 24 may include handling exceptions in the 3D model. Exceptions include, for example, nose rings, glasses, earrings, or other metallic/reflective appliances on an object, such as a person's head, which may interfere with and/or affect the perception of the replicated head.
  • assembling 326 the print package includes compiling 330, by processor 34, multiple different 3D models into a single print plan, i.e., a batch process.
  • when a print plan is for a replica mask (such as replica mask 402 of Fig. 6), processor 34 is able to nest multiple masks adjacent to one another and/or substantially within one another. In this manner, multiple different (or same) replica masks can be printed during a single print operation to potentially provide efficiencies over printing each replica individually.
  • compiling 330 multiple different 3D models into a single print plan is not limited to replica masks, and may be used when replicas are to be provided in solid, hollow, or other forms. In one or more embodiments, such batch processing may provide sufficient efficiencies to enable the mass production of products including replica parts.
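The batch-compilation step, grouping several replica jobs into shared print plans, can be sketched with a simple greedy grouping. This is an illustrative assumption: real nesting would pack 3D geometry within a build volume, whereas the sketch below only groups jobs by a per-tray count.

```python
def batch_print_plans(replica_jobs, tray_capacity):
    """Greedily group replica jobs (e.g., mask IDs) into shared print plans so
    several replicas print in one operation. `tray_capacity` (jobs per build
    tray) and count-based grouping are illustrative simplifications of
    geometric nesting."""
    plans, current = [], []
    for job in replica_jobs:
        current.append(job)
        if len(current) == tray_capacity:
            plans.append(current)
            current = []
    if current:
        plans.append(current)   # final, possibly partial, tray
    return plans
```

Each resulting plan corresponds to one print operation, which is where the batching efficiencies described above come from.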
  • the print plan is provided to a 3D printer, such as 3D printer 30, for printing 332 of the replica of the object (e.g., head) based on the 3D model.
  • the 3D printer 30 may be located proximate to server 24, or remote from server 24 and communicating with server 24 through network 28.
  • the replica is further processed.
  • Such post-processing generally includes coating (e.g., with an infiltrant) and drying of the replica.
  • the replica is assembled into the product of the purchase request.
  • replica mask 402 is assembled onto a stock body 404 to provide the toy doll product 406.
  • a quality assurance inspection is completed during the post-processing of the replicas. Subsequently, the replicas are shipped according to the purchase request.
  • processor 14 and/or processor 34 provide communication to user 22, indicative of a current status of the purchase request.
  • processor 14 of a capture assembly (such as capture assembly 100) transmits an electronic mail, via network 28, to user 22 when the purchase request is completed.
  • the electronic message (e.g., an email, an SMS message) may include, without limitation, the details of the product order, payment information, contact information, estimated delivery information, and/or other information related to the product or the processing of the purchase request.
  • additional electronic messages may be generated and sent to user 22 at various stages of fulfilling the purchase request.
  • one or more electronic messages may be transmitted to user 22 to solicit feedback and/or provide offers for further products.
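The status messages described above can be composed with the Python standard library's email support. The sender address, subject line, and body wording below are illustrative assumptions; only the kinds of fields (order details, status, estimated delivery) come from the text.

```python
from email.message import EmailMessage

def order_status_message(to_addr, order_id, status, delivery_estimate):
    """Compose an order-status e-mail such as the completion notice described
    in the text. Header and body contents are illustrative assumptions."""
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["From"] = "orders@example.com"   # hypothetical sender address
    msg["Subject"] = f"Order {order_id}: {status}"
    msg.set_content(
        f"Your order {order_id} is now {status}.\n"
        f"Estimated delivery: {delivery_estimate}.\n"
    )
    return msg
```

The same helper could be called at each fulfillment stage (captured, compiled, printed, shipped) to produce the additional messages the text contemplates.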
  • user 22 is able to access a website, hosted by the operator of capture system 1, to view purchase request status, products, kiosk locations, and/or current offers/pricing.
  • One or more embodiments transform a general-purpose computing device into a special-purpose computing device when configured to execute the instructions to perform methods and/or processes described herein.

Abstract

Capture systems and methods for use in providing electronic 3D models are provided. One example method includes capturing multiple 2D images of an object. A first image of the multiple 2D images is captured from a first perspective, and a second image of the multiple 2D images is captured from a second perspective. The first and second perspectives are different. The method further includes compiling, by a processor, the multiple 2D images into a 3D model of the object, the 3D model representative of the full face of the object and more than about 90% likeness of the entire targeted object, and printing a replica of the object based on the 3D model.

Description

CAPTURE SYSTEMS AND METHODS FOR USE IN
PROVIDING 3D MODELS OF OBJECTS
BACKGROUND
[0001] The field of the disclosure relates generally to capture systems and methods and, more particularly, to providing an electronic three-dimensional (3D) model of an object, compiled from multiple two-dimensional (2D) images of the object.
[0002] Various types of toys, figurines, and/or other consumer items are known to take on a variety of different shapes, sizes, and features to represent one or more objects. Toy dolls, for example, often emulate the human form to provide a child with a familiar form with which to play. Toy dolls are further offered in a variety of different sizes, clothing, races, and/or genders to aid the child in identifying with the toy doll and to capture the child's attention for extended periods of time.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Fig. 1 is a block diagram of an exemplary capture system, including a capture assembly.
[0004] Fig. 2 is a perspective view of an example capture assembly that may be used with the capture system of Fig. 1.
[0005] Fig. 3 is a plan view of another example capture assembly that may be used with the capture system of Fig. 1.
[0006] Fig. 4 is a block diagram of an exemplary method for use in providing an electronic 3D model of an object, which may be used to print a 3D replica of the object.
[0007] Figs. 5A-B illustrate a person and a replica of the head and/or face of the person.
[0008] Figs. 6A-C illustrate a replica mask of the face of a person, separate from and assembled onto a toy doll.
DETAILED DESCRIPTION
[0009] Exemplary embodiments of systems and methods for use in providing electronic 3D (3-dimensional) models of objects are described herein. The 3D models can form the basis for generating a 3D replica of the object, which may be used to personalize a variety of different products, such as, for example, toys, figurines, jewelry, ornaments, charms, coins, statues, video games, social network profiles, and movies.
[0010] Exemplary technical effects of systems and methods described herein include at least one of: (a) sequentially and/or concurrently capturing multiple 2D images of an object, a first image of the multiple 2D images captured from a first angle or perspective, a second image of the multiple 2D images captured from a second angle or perspective, the first and second perspectives being different, (b) compiling the multiple 2D images into a 3D model of the object, the 3D model representative of the full face of the object and more than about 90% likeness of the entire targeted object; and (c) printing a replica of the object based on the 3D model.
[0011] When introducing elements of aspects of the invention or embodiments thereof, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
[0012] Fig. 1 illustrates an exemplary capture system 1. In the exemplary embodiment, capture system 1 includes a capture assembly 10 configured to capture multiple 2D images of an object and compile a 3D model of the object. Capture assembly 10 includes a memory 12 and a processor 14 coupled to memory 12. In some embodiments, executable instructions are stored in memory 12 and executed by processor 14. Capture assembly 10 is configurable to perform one or more operations described herein by programming and/or configuring processor 14. For example, processor 14 may be programmed or configured to perform one or more operations by encoding such operations as one or more executable instructions and providing the executable instructions in memory 12.
[0013] Memory 12 is one or more devices operable to enable information such as executable instructions and/or other data to be stored and/or retrieved. Memory 12 may include one or more computer readable media, such as, without limitation, hard disk storage, optical drive/disk storage, removable disk storage, flash memory, non-volatile memory, ROM, EEPROM, random access memory (RAM), etc. Memory 12 may be configured to store, without limitation, computer-executable instructions, 3D models, 2D images, scaling factors, product models, product information, purchase requests, and/or any other types of data referred to herein, expressly or inherently. Memory 12 may be incorporated into and/or separate from processor 14, and/or accessible through one or more networks (e.g., Cloud storage).
[0014] Processor 14 may include one or more processing units (e.g., in a multi-core configuration). The term processor, as used herein, refers to central processing units, microprocessors, microcontrollers, reduced instruction set circuits (RISC), application specific integrated circuits (ASIC), logic circuits, and any other circuit or processor capable of executing instructions to perform functions described herein. Further, processor 14 may include several processors included in different subassemblies, for example, of capture assembly 10.
[0015] Capture assembly 10 includes a communication interface 16 coupled to processor 14. Communication interface 16 is configured to be coupled in communication with a network (e.g., a network 28) and/or one or more other devices, such as another capture assembly or a server. Communication interface 16 may include, without limitation, a serial communication adapter, a wired network adapter, a wireless network adapter, a mobile adapter, a Bluetooth adapter, a Wi-Fi adapter, a ZigBee adapter, and/or any other device capable of communicating with one or more other devices, networks, etc. In the exemplary embodiment, communication interface 16 is configured to communicate with network 28, which may include, without limitation, the Internet, an Intranet, a local area network (LAN), a cellular network, a wide area network (WAN), or other suitable network.

[0016] Further, capture assembly 10 includes an output device 18, such as a display device, for presenting data to user 22. Such data may include, for example, a 3D model of an object, 2D images of the object, or other data related to one or more of the processes described herein. Output device 18 may include, without limitation, a cathode ray tube (CRT), a liquid crystal display (LCD), a light-emitting diode (LED) display, a flash device, an organic LED (OLED) display, an "electronic ink" display, a 3D printer, and/or other device suitable to display information. Additionally, or alternatively, output device 18 may include an audio output device (e.g., an audio adapter and/or a speaker, etc.).
[0017] In the exemplary embodiment, capture assembly 10 includes multiple input devices 20. Input devices 20 may include, without limitation, buttons, knobs, keypads, pointing devices, barcode scanners, mice, image capture devices (e.g., a camera, cameras, or a scanner), card readers, touch-sensitive panels (e.g., a touch pad or a touchscreen), gyroscopes, position detectors, and/or audio inputs (e.g., a microphone). In the exemplary embodiment, multiple input devices 20a-b are illustrated as image capture devices, which are located and configured to capture multiple 2D images of an object from multiple different perspectives. Further, in at least one embodiment, output device 18 further includes multiple lights (e.g., a diffused strobe light), one associated with each of image capture devices 20, to provide efficient and/or favorable lighting conditions for image capture by image capture devices 20. It should be appreciated that the number of output devices 18 and input devices 20 illustrated in Fig. 1 is for purposes of illustration only. Any number of output devices 18 and input devices 20, for various purposes, may be included in other capture assembly embodiments.
[0018] Fig. 2 illustrates an exemplary capture assembly 100, which includes four image capture devices 20a-d positioned to capture images of an object from four different perspectives and/or angles. In the exemplary embodiment, the object is the head and/or face region, including the sides of the head, of a person, such as user 22. It should be appreciated that various different portions of a person, and/or different persons or objects, may be the object. In one or more examples, the object may be one or more portions of a person or animal other than user 22.

[0019] Capture assembly 100 further includes a camera flash output device 18a-d associated with each image capture device 20a-d. The multiple input devices (i.e., four image capture devices 20a-d) are positioned to capture a right-upper front image, a right-lower front image, a left-upper front image, and a left-lower front image of user 22. It should be appreciated that the number and/or position of image capture devices 20 may be different in other capture system embodiments. In one example, one or more image capture devices 20 may be attached to a mobile fixture (e.g., a rotating camera head or scanner), which is configured to rotate relative to an object to capture images of the object from multiple different perspectives. In another example, one or more image capture devices 20 may be attached to a stationary fixture (e.g., a pedestal), which remains stationary while an object is moved to provide different perspectives of the object to the stationary image capture devices. In such an embodiment, the object is supported by, for example, a rotating fixture, such as a seat or platform. The object may be moved by the fixture, for example, to provide more consistent location of the object relative to one or more image capture devices, as compared to the object (e.g., a person) moving itself.
[0020] Capture assembly 100 further includes output device 18e, which is a display device to display information, selections, directions, choices, and other information to user 22 or another person associated with user 22, such as a parent. In various embodiments, output device 18e is provided to direct the location of user 22, to ensure proper capture of the multiple 2D images. For example, output device 18e may display a "bull's eye" indicating the desired position of the user's head. Additionally, in the exemplary embodiment, capture assembly 100 includes a module 102, which is provided to interface with user 22 and/or an operator to provide product information to user 22 and/or solicit selections from user 22, while user 22 or an agent for user 22 (e.g., a buyer and/or a parent) remains within the capture assembly or moves to a nearby location to interface with module 102 (or views the 3D model or product(s) through an electronic device via network 28). Module 102 is further provided to receive information from user 22, such as payment, contact, and/or shipping information. As shown, module 102 includes processor 14 and memory 12, as well as input and output devices 20 and 18 for interacting with user 22. In the exemplary embodiment, module 102 is a workstation, tablet, touchscreen computer, or laptop computer coupled to image capture devices 20a-d and output devices 18a-e.
[0021] In another example, such as the exemplary capture assembly 200 of Fig. 3, image capture devices 20 are positioned about a user 22 at substantially even intervals to capture images from multiple angles and/or perspectives of user 22. Capture assembly 200 may, for example, be a kiosk positioned at a shopping mall, movie theatre, toy store, pet store, or other location generally accessible to the public and/or persons interested in utilizing capture assembly 200. As indicated, if image capture devices 20a and 20b are omitted from capture assembly 200, user 22 is situated on a rotating fixture 202, configured to rotate user 22 to present different perspectives of user 22 to the remaining image capture devices, or software may be engaged to interpolate image capture of more than about 90% of the object to complete the object's full appearance (e.g., capture about half of the scalp area and project the full hair appearance based on the scalp area captured). As should be apparent, various different capture assemblies may be used to capture images of an object at various perspectives and/or angles. The capture assemblies and methods contemplated herein therefore are not limited to any particular number and/or location of image capture devices. Furthermore, image capture devices (e.g., smartphones and/or digital cameras) may be associated with and/or owned by a user 22, rather than one or more dedicated kiosks.
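The interpolation mentioned above (projecting a full hair appearance from roughly half of a captured scalp) can be sketched, purely for illustration, under an assumed bilateral-symmetry heuristic: captured 3D points on one side of a symmetry plane are mirrored to approximate the uncaptured side. The function name, data format, and symmetry assumption are hypothetical and not taken from the disclosure.

```python
# Illustrative sketch only: complete a partial capture by mirroring points
# across an assumed symmetry plane. A real system would instead fit a mesh
# and blend mirrored geometry with template parts.

def complete_by_symmetry(points, plane="x"):
    """Mirror captured 3D points across a symmetry plane.

    points: iterable of (x, y, z) tuples from the captured half.
    Returns the captured points plus their mirrored counterparts.
    """
    axis = {"x": 0, "y": 1, "z": 2}[plane]
    mirrored = []
    for p in points:
        q = list(p)
        q[axis] = -q[axis]  # reflect the chosen coordinate
        mirrored.append(tuple(q))
    # Points lying exactly on the plane duplicate harmlessly here; a fuller
    # implementation would deduplicate them.
    return list(points) + mirrored

half_scalp = [(1.0, 0.0, 2.0), (0.5, 1.0, 1.5)]   # captured half only
full_scalp = complete_by_symmetry(half_scalp)
```

Mirroring is only a stand-in for whatever interpolation the system actually uses; it illustrates how less-than-complete coverage could still yield a full appearance.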
[0022] Further, capture assembly 200 includes multiple output devices 18f and 18g, which are display devices in this exemplary embodiment. Similar to output device 18e, output devices 18f and 18g are provided to display information, selections, choices, directions, and other information to user 22 or another person associated with user 22. In one example, output device 18f is an LCD screen, which provides location commands to user 22 (e.g., a bull's eye, other position graphic, or instruction), to position user 22 properly for image capture. In another example, output device 18g is an LCD screen mounted on an outside of the kiosk formed by capture assembly 200 and configured to continuously or intermittently display advertising content, such as images and product applications that entice and/or permit users 22 to buy at this location. Furthermore, capture assembly 200 includes multiple user modules 204, which are provided to interface with user 22 to provide product information to user 22, solicit selections from user 22, and/or provide proofing of the 3D model and/or selected product, as explained in further detail below.
[0023] In the exemplary embodiments, 2D images captured from image capture devices 20 are stored in memory 12, which may be local to the image capture devices 20 and/or disposed remotely therefrom, via one or more networks.
[0024] Referring again to Fig. 1, capture system 1 further includes a server 24 coupled to capture assembly 10, via network 28. Capture assembly 10 is configured to provide multiple 2D images of an object and/or 3D models of the object compiled from multiple 2D images captured at capture assembly 10. In the exemplary embodiment, server 24 includes a processor 34 and a memory 32. A database 26 is maintained within memory 32 and configured to store 2D images and/or 3D models received from capture assembly 10, such that the 2D images and/or 3D models may be retrieved for processing as described herein. Memory 32 and processor 34 are consistent with the description of memory 12 and processor 14 provided above. While server 24 is illustrated as a single server in Fig. 1, server 24 may comprise several servers, located together or separately over a geographic region.
[0025] Further, capture system 1 includes a 3D printer 30 configured to fabricate at least one replica of the object from the 3D model of the object. 3D printer 30 may be coupled to any one or more of capture assembly 10, server 24, and network 28, to receive and/or retrieve the 3D model.
[0026] Capture systems and/or capture assemblies provided herein may be operated in accordance with several different methods. Fig. 4 illustrates an exemplary method 300 for use in providing an electronic 3D model of an object, and printing a replica of the object from the 3D model. For ease of description, the object is described herein as the head of user 22. Further, products offered through the capture system 1 are toy dolls, intended to incorporate the head of the user 22. It should be appreciated that a variety of different objects may be the subject of the methods and systems described herein. For example, an object may be one or more parts of a person, an entire person, an animal, or other object for which an electronic 3D model and/or physical replica may be desired. Additionally, a variety of other products may be offered, which consist of and/or incorporate one or more replicas produced through use of a capture system, such as capture system 1.
[0027] In the exemplary embodiment, processor 14 displays 302, at an output device, such as output device 18g of Fig. 3, advertisement content related to products available through capture system 1 (e.g., a doll having a replica head). Advertising content may include, without limitation, product demonstrations, examples, descriptions, images, and/or pricing, as well as sales or special offers inviting the user to purchase a product. The advertising content is generally provided to increase interest in and/or knowledge about products available through capture system 1.
[0028] When the user 22 determines to purchase a product, user 22 is situated for image capture by one or more capture devices, such as image capture devices 20. In the example of Fig. 3, user 22 is seated at fixture 202, and thereby positioned for image capture devices 20a-d. The capture assembly, such as capture assembly 200, detects the presence of user 22, automatically through image capture devices 20 (or a sensor associated with fixture 202), or manually through an input from user 22 to an input device, such as a keyboard or touchscreen. Once the user 22 is detected, processor 14 causes the image capture devices to focus on the user 22 and capture 304 a plurality of 2D images of the head of user 22, i.e., the object. In the embodiment of Fig. 2, for example, processor 14 causes image capture devices 20a-d to capture 304 four images: a right-upper front image, a right-lower front image, a left-upper front image, and a left-lower front image.
[0029] In the exemplary embodiment, capturing 304 the images includes controlling 306 one or more flash output devices, associated with the image capture devices, to provide appropriate lighting to the object when the images are captured. For example, processor 14 controls flash output devices 18a-d of capture assembly 100 to provide appropriate lighting conditions for capture of images of user 22 by image capture devices 20a-d. Upon capture of the plurality of 2D images, processor 14 stores the 2D images in memory 12.

[0030] Additionally, when a fixture (e.g., fixture 202 of Fig. 3) is included in the capture assembly, capturing 304 images may include providing 308 one or more location commands. Processor 14 may provide the location commands to user 22 through an output device (e.g., output device 18f of Fig. 3), or to the fixture, to position the object as needed to capture appropriate images of the object. Location commands may be displayed as a bull's eye, over an image of user 22, to direct user 22 to a desired position. Further, location commands may include, for example, rotating the fixture to rotate the object about 20°, about 45°, about 90°, about 180°, or another increment of rotation to properly position the object relative to one or more image capture devices. Alternatively, in one example, processor 14 provides commands to an armature, supporting one or more image capture devices, to move the image capture devices relative to the stationary object. It should be appreciated that the object and/or image capture device(s) may be moved in a variety of directions to achieve one or more images of the object at desired angles and/or perspectives.
[0031] After the plurality of 2D images are captured, processor 14 may display, at a display device (e.g., display device 18f of Fig. 3), the images to user 22. In this manner, user 22 is able to select one or more of the 2D images to be compiled into the 3D model. Alternatively, in other examples, processor 14 automatically captures only images to be compiled into the 3D model, or automatically selects 2D images based on, for example, lighting, clarity, brightness, contrast, noise, distortions, or other factors potentially affecting compilation and/or quality of the 3D model.
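The automatic selection described above might, for example, score candidate images on simple quality metrics. The metrics, thresholds, and names below are assumptions made for illustration only; a production system would likely use more sophisticated measures (e.g., focus or noise estimates).

```python
# Illustrative sketch only: keep images whose grayscale pixel values pass
# simple brightness and contrast thresholds, discarding ones likely to
# degrade the compiled 3D model.

def mean_brightness(pixels):
    """Average grayscale value of the image's pixels."""
    return sum(pixels) / len(pixels)

def contrast(pixels):
    """Crude contrast measure: grayscale dynamic range."""
    return max(pixels) - min(pixels)

def select_images(images, min_brightness=40, min_contrast=30):
    """images: list of (name, grayscale_pixels). Returns names that pass."""
    selected = []
    for name, pixels in images:
        if mean_brightness(pixels) >= min_brightness and \
           contrast(pixels) >= min_contrast:
            selected.append(name)
    return selected

images = [
    ("right_upper", [50, 120, 200, 90]),  # well lit, wide range
    ("left_lower", [5, 8, 10, 7]),        # too dark, nearly flat
]
keep = select_images(images)
```

Here only the well-lit view survives; the dark, low-contrast view is dropped before compilation.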
[0032] In the exemplary embodiment, processor 14 automatically compiles 310 the plurality of 2D images into a 3D model of the head of user 22, through one or more stitching and/or meshing techniques. Because the 3D model is compiled from 2D images captured at the capture assembly, processor 14 is able to control the resolution, format, and other characteristics of the 2D images in native format, thereby reducing the potential for degradation of the compiled 3D model. In one or more embodiments, processor 14 uses one or more template portions, such as hair, back of the head, or other parts, to automatically fill in portions of the 3D model where the 2D image data is unavailable. In various embodiments, processor 14 is configured to compile the electronic 3D model in about 1 minute or less, and in less than about 45 seconds in other embodiments.
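The template fill-in described above can be outlined, as a non-authoritative sketch, by letting captured regions override generic template parts for any region the 2D images did not cover. All names below are hypothetical placeholders; real stitching and meshing would rely on a photogrammetry pipeline, not dictionaries of strings.

```python
# Illustrative sketch only: assemble a model description from captured
# regions, falling back to template parts (e.g., hair, back of head)
# wherever no captured data exists.

def compile_3d_model(views, templates):
    """views: region -> captured data; templates: region -> fallback data.

    Returns region -> (source, data), where source records whether the
    region came from capture or from a template part.
    """
    model = {}
    for region, data in templates.items():
        model[region] = ("template", data)   # start from template parts
    for region, data in views.items():
        model[region] = ("captured", data)   # captured data overrides
    return model

views = {"face": "mesh_from_2d_images"}
templates = {"face": "generic_face", "back_of_head": "generic_back"}
model = compile_3d_model(views, templates)
```

The face region comes from the captured images, while the uncaptured back of the head is filled from the template, mirroring the fallback behavior the paragraph describes.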
[0033] While processor 14 is compiling the 3D model of the head of the user, processor 14 directs the user 22 to one or more modules, such as module 204 of Fig. 3, to view the 3D model and/or select a product for purchase. In the exemplary embodiment, processor 14 displays 312, at output device 18 (e.g., output device 18 of module 102 of Fig. 2), at least one product including the 3D model of the head of user 22. In this manner, one or more toy dolls, for example, may be displayed including the head of user 22, to provide an accurate depiction of the product to be purchased. Through one or more inputs to input device 20 (e.g., input device 20 of module 102 of Fig. 2), user 22 may be able to scroll, search, or browse through the products, with the 3D model applied in the browser or upon selection of a particular product.
[0034] In at least one embodiment, when displaying the product, processor 14 scales 314 the 3D model of the object depending on the size of the product selected by the user 22. More specifically, processor 14 scales 314 the 3D model, for example, differently for an 11-inch fashion doll than for a 15-inch robot. The scaling of the 3D model may be indicated by a scaling factor stored as part of a product template in memory 12. In some embodiments, scaling 314 by processor 14 may be omitted. In one or more examples, the 3D models are originally compiled by processor 14 according to a size consistent with some, all, or a majority of the products offered through the kiosk. Further, in various embodiments, a product template may include color and/or shading instructions to adapt coloring and/or shading of the 3D model (or 2D images) to more closely match other portions of the product (e.g., skin color).
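The per-product scaling described above can be sketched, purely for illustration, as uniform vertex scaling by a factor looked up from a product template. The template names and factor values below are hypothetical, not taken from the disclosure.

```python
# Illustrative sketch only: scale model vertices uniformly about the origin
# using a per-product scaling factor from a (hypothetical) product template.

def scale_model(vertices, factor):
    """Uniformly scale (x, y, z) vertices by the given factor."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# Hypothetical product templates: an 11-inch fashion doll needs a smaller
# replica head than a 15-inch robot, so each carries its own factor.
PRODUCT_TEMPLATES = {"fashion_doll_11in": 0.05, "robot_15in": 0.07}

head = [(0.0, 0.0, 0.0), (10.0, 20.0, 10.0)]
doll_head = scale_model(head, PRODUCT_TEMPLATES["fashion_doll_11in"])
```

Selecting a different product simply swaps the factor, which matches the paragraph's point that the same compiled model serves differently sized products.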
[0035] Furthermore, as part of the selection process, user 22 may be able to select one or more options for the product, such as clothing, shoes, size, or other features. In various embodiments, user 22 is able to select a portion of the 3D model provided from the 2D images. In one example, processor 14 compiles the face of a head from the captured 2D images, while relying on user selections for hair color, style, or length. In at least one embodiment, processor 14 permits user 22 to select aspects of the object not included in the 2D images, such as the back of a head. Accordingly, in displaying the product, processor 14 receives 316 one or more selections of product options, which are appended to the product and displayed 312 to user 22.
[0036] When the user 22 identifies one of the products for purchase, user 22 provides one or more inputs to a module (e.g., input device 20 of module 102 of Fig. 2), indicating selection of the product. User 22 may further provide, through the input device, other information, such as payment, contact, and/or shipping information. In response to such information, processor 14 receives 318 the selection of the product and generates 320 a purchase request, which is transmitted 322, by processor 14, to a central server, such as server 24.
[0037] In addition to the purchase request, processor 14 transmits the 3D model and/or 2D images to a central server, such as server 24. In at least one embodiment, the 3D model is compiled at server 24, and not at capture assembly 10. The 3D models are stored in database 26. In this manner, database 26 provides a 3D model library. In various embodiments, the 3D model library may be made accessible to users 22 or third-parties according to access criteria determined by an operator of server 24 and/or capture assembly 10. The 3D model library may be accessed, by user 22, through one or more other computing devices (e.g., a smartphone, laptop, computer, workstation) to manage and/or append the 3D model to a wide variety of products across existing brands and/or form factors, to be used, for example, in dolls, toys, social networking video games, jewelry, movies, books, and other items/media in which a personal 3D model may provide enhanced entertainment. Further, user 22 may be able to download the 3D model from server 24, and maintain the 3D model for various purposes. Further still, user 22 may be able to individually grant access to his/her 3D model to one or more third-parties (e.g., friends or a social networking website).
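The per-model access grants described above can be sketched as follows; the class, identifiers, and data layout are illustrative assumptions, not part of the disclosure, which leaves the access mechanism unspecified.

```python
# Illustrative sketch only: a 3D model library in which the owner of a model
# can individually grant third parties (friends, websites) access to it.

class ModelLibrary:
    def __init__(self):
        # model_id -> set of party ids permitted to access the model
        self._grants = {}

    def grant(self, model_id, owner, party):
        """Record that the owner grants `party` access to `model_id`.

        The owner always retains access to his/her own model.
        """
        self._grants.setdefault(model_id, {owner}).add(party)

    def can_access(self, model_id, party):
        return party in self._grants.get(model_id, set())

library = ModelLibrary()
library.grant("head_001", owner="user22", party="friend_a")
```

A real library would also authenticate callers and verify that only the model's owner may issue grants; that check is omitted here for brevity.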
[0038] Further, access to the 3D model library permits user 22 to incorporate the 3D model into a variety of products, whether offered through capture system 1, or separately by a third-party. More specifically, in such embodiments, user 22 is provided access to the 3D model of an object, such as the user's head or face, which can be used to personalize any number of products or services.
[0039] When the purchase request is received 324 by the central server, such as server 24, processor 34 stores the purchase request in memory 32. Based on the purchase request and the 3D model, server 24 assembles 326 a print package for a replica of the 3D model to be transmitted to a 3D printer, such as 3D printer 30 shown in Fig. 1. Assembling 326 the print package may include forming 328 a print plan for the 3D model. In one example, the 3D model is printed as a replica 400 having a substantially solid mass, as shown in Fig. 5. In another example, the 3D model is printed as a replica formed from a substantially hollow shell. In still other examples, the replica is printed as a portion of the 3D model, such as the replica "mask" 402 shown in Fig. 6. The type of print for the 3D model may depend on the type, size, and/or configuration of the product, and other various factors related to, for example, print efficiencies.
[0040] Moreover, additional processing of the 3D model may be performed by processor 34. In one or more examples, processing by server 24 may include handling exceptions in the 3D model. Exceptions include, for example, nose rings, glasses, earrings, or other metallic/reflective appliances on an object, such as a person's head, which may interfere with and/or affect the perception of the replicated head.
[0041] In various embodiments, assembling 326 the print package includes compiling 330, by processor 34, multiple different 3D models into a single print plan, i.e., a batch process. Specifically, when a print plan is for a replica mask (such as replica mask 402 of Fig. 6), processor 34 is able to nest multiple masks adjacent to one another and/or substantially within one another. In this manner, the multiple different (or same) replica masks can be printed during a single print operation to potentially provide efficiencies over printing each replica individually. It should be appreciated that compiling 330 multiple different 3D models into a single print plan is not limited to replica masks, and may be used when replicas are to be provided in solid, hollow, or other forms. In one or more embodiments, such batch processing may provide sufficient efficiencies to enable the mass production of products including replica parts.
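The batch compilation of multiple 3D models into a single print plan might be sketched as a simple volume-budget grouping, as below. Real nesting of masks within one another is a 3D packing problem; the greedy grouping, names, and volumes here are an illustrative stand-in, not the disclosed method.

```python
# Illustrative sketch only: greedily group replicas into print batches so
# several parts are produced in one print operation, instead of printing
# each replica individually.

def assemble_batches(replicas, build_volume):
    """replicas: list of (name, volume). Returns lists of names per batch."""
    batches, current, used = [], [], 0.0
    for name, volume in replicas:
        # Start a new batch when this replica would overflow the printer's
        # build volume (and the current batch is non-empty).
        if used + volume > build_volume and current:
            batches.append(current)
            current, used = [], 0.0
        current.append(name)
        used += volume
    if current:
        batches.append(current)
    return batches

masks = [("mask_a", 40.0), ("mask_b", 35.0), ("mask_c", 50.0)]
plans = assemble_batches(masks, build_volume=80.0)
```

With an 80-unit build volume, the first two masks share a print operation and the third goes into a second batch, illustrating the efficiency gain the paragraph describes.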
[0042] The print plan is provided to a 3D printer, such as 3D printer 30, for printing 332 of the replica of the object (e.g., head) based on the 3D model. The 3D printer 30 may be located proximate to server 24, or remote from server 24 and communicating with server 24 through network 28. After execution of the print operation, the replica is further processed. Such post-processing generally includes coating (e.g., by infiltrant) and drying of the replica. Further, the replica is assembled into the product of the purchase request. For example, with reference to Fig. 6, replica mask 402 is assembled onto a stock body 404 to provide the toy doll product 406. In one or more embodiments, a quality assurance inspection is completed during the post-processing of the replicas. Subsequently, the replicas are shipped according to the purchase request.
[0043] At one or more steps during method 300, processor 14 and/or processor 34 provide communication to user 22, indicative of a current status of the purchase request. In one example, processor 14 of a capture assembly (such as capture assembly 100) transmits an electronic message, via network 28, to user 22 when the purchase request is completed. The electronic message (e.g., an email, an SMS message) may include, without limitation, the details of the product order, payment information, contact information, estimated delivery information, and/or other information related to the product or the processing of the purchase request. Similarly, additional electronic messages may be generated and sent to user 22 at various stages of fulfilling the purchase request. Further, even after the product has been shipped and received, one or more electronic messages may be transmitted to user 22 to solicit feedback and/or provide offers for further products. In at least one embodiment, user 22 is able to access a website, hosted by the operator of the capture system, to view purchase request status, products, kiosk locations, and/or current offers/pricing.
[0044] One or more embodiments transform a general-purpose computing device into a special-purpose computing device when configured to execute the instructions to perform methods and/or processes described herein.
[0045] This written description uses examples to disclose various embodiments, which include the best mode, to enable any person skilled in the art to practice those embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.

Claims

WHAT IS CLAIMED IS:
1. A capture assembly for use in providing an electronic 3D model of an object, said capture assembly comprising: a plurality of image capture devices configured to capture multiple 2D images of an object; a display device; and a processor coupled to said plurality of image capture devices and said display device, said processor configured to cause said plurality of image capture devices to capture the multiple 2D images of the object, to compile the multiple 2D images into an electronic 3D model of the object, and to display a product, at said display device, including the 3D model of the object as at least a portion of the product.
2. The capture assembly of Claim 1, wherein the product is a toy doll, and wherein the object includes a head or face of a person.
3. A capture system comprising: a capture assembly including a display device, said capture assembly configured to capture multiple 2D images of an object, to compile the multiple 2D images into an electronic 3D model of the object, and to display a product, at said display device, including the 3D model of the object as at least a portion of the product; and a server coupled to said capture assembly and configured to receive the 3D model from said capture assembly and to store the 3D model of the object in a library of 3D models.
4. The capture system of Claim 3, wherein said server is configured to provide access to the 3D model of the object, based on a permission provided by a person associated with the object.
5. A computer-implemented method for use in providing an electronic 3D model of an object, said method comprising: capturing multiple 2D images of an object, a first image of the multiple 2D images captured from a first perspective, a second image of the multiple 2D images captured from a second perspective, the first and second perspectives being different; compiling, by a processor, the multiple 2D images into a 3D model of the object, the 3D model representative of the full face of the object and more than about 90% likeness of the entire targeted object; and printing a replica of the object based on the 3D model.
6. The method of Claim 5, further comprising assembling the replica with a toy doll and shipping the toy doll to a customer.
7. The method of Claim 5, further comprising assembling a print plan including multiple replicas, a first replica of the multiple replicas being different than a second replica of the multiple replicas.
PCT/US2012/069959 2012-12-15 2012-12-15 Capture systems and methods for use in providing 3d models of objects WO2014092740A1 (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9504925B2 (en) 2014-02-14 2016-11-29 Right Foot Llc Doll or action figure with facial features customized to a particular individual
WO2017115149A1 (en) 2015-12-31 2017-07-06 Dacuda Ag A method and system for real-time 3d capture and live feedback with monocular cameras

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010011341A1 (en) * 1998-05-05 2001-08-02 Kent Fillmore Hayes Jr. Client-server system for maintaining a user desktop consistent with server application user access permissions
US20040041804A1 (en) * 2000-03-08 2004-03-04 Ives John D. Apparatus and method for generating a three-dimensional representation from a two-dimensional image
US20060003111A1 (en) * 2004-07-01 2006-01-05 Tan Tseng System and method for creating a 3D figurine using 2D and 3D image capture
US20120162379A1 (en) * 2010-12-27 2012-06-28 3Dmedia Corporation Primary and auxiliary image capture devcies for image processing and related methods

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9504925B2 (en) 2014-02-14 2016-11-29 Right Foot Llc Doll or action figure with facial features customized to a particular individual
WO2017115149A1 (en) 2015-12-31 2017-07-06 Dacuda Ag A method and system for real-time 3d capture and live feedback with monocular cameras
EP4053795A1 (en) 2015-12-31 2022-09-07 ML Netherlands C.V. A method and system for real-time 3d capture and live feedback with monocular cameras
US11631213B2 (en) 2015-12-31 2023-04-18 Magic Leap, Inc. Method and system for real-time 3D capture and live feedback with monocular cameras

Similar Documents

Publication Publication Date Title
US10235810B2 (en) Augmented reality e-commerce for in-store retail
US10049500B2 (en) Augmented reality e-commerce for home improvement
US20210224765A1 (en) System and method for collaborative shopping, business and entertainment
US10497053B2 (en) Augmented reality E-commerce
US10417825B2 (en) Interactive cubicle and method for determining a body shape
US10565616B2 (en) Multi-view advertising system and method
US10002337B2 (en) Method for collaborative shopping
US11717070B2 (en) Systems and methods of adaptive nail printing and collaborative beauty platform hosting
US20140063056A1 (en) Apparatus, system and method for virtually fitting wearable items
US20130215116A1 (en) System and Method for Collaborative Shopping, Business and Entertainment
US20160198146A1 (en) Image processing apparatus and method
US9818224B1 (en) Augmented reality images based on color and depth information
WO2017070286A1 (en) Apparatus and method for providing a virtual shopping space
US20160212406A1 (en) Image processing apparatus and method
WO2013120851A1 (en) Method for sharing emotions through the creation of three-dimensional avatars and their interaction through a cloud-based platform
KR20220128620A (en) A system for identifying products within audiovisual content
US20170061490A1 (en) Beacon-faciliated content management and delivery
US10499025B2 (en) Projecting interactive information from internally within a mannequin
KR20160067373A (en) System of giving clothes wearing information with AVATA and operating method thereof
WO2016183629A1 (en) Augmented reality system and method
US20130080287A1 (en) Virtual doll builder
US20150026016A1 (en) System and method of producing model figure of person with associated clothing
CN116324850A (en) Providing AR-based garments in messaging systems
CN106779774A (en) Virtual fitting system and virtual fit method
WO2014092740A1 (en) Capture systems and methods for use in providing 3d models of objects

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12890097; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 12890097; Country of ref document: EP; Kind code of ref document: A1)