WO2002041127A2 - Inward-looking imaging system - Google Patents

Inward-looking imaging system

Info

Publication number
WO2002041127A2
WO2002041127A2 (PCT/CA2001/001604)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
cameras
imaging apparatus
computer
Prior art date
Application number
PCT/CA2001/001604
Other languages
French (fr)
Other versions
WO2002041127A3 (en)
Inventor
Craig Summers
Original Assignee
Vr Interactive Corporation
Priority date
Filing date
Publication date
Application filed by Vr Interactive Corporation filed Critical Vr Interactive Corporation
Priority to AU2002223329A priority Critical patent/AU2002223329A1/en
Priority to CA002429236A priority patent/CA2429236A1/en
Publication of WO2002041127A2 publication Critical patent/WO2002041127A2/en
Publication of WO2002041127A3 publication Critical patent/WO2002041127A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay

Definitions

  • the present invention relates to an imaging system and in particular to an imaging system for obtaining multiple inward-looking images of a three-dimensional object for use in a 360-degree virtual reality display system.
  • 360-degree images of three-dimensional objects can be useful for a number of purposes, including remote display of artistic works such as sculptures, display of historical or archeological objects, display of retail merchandise on a web site, frozen-in-time images of moving objects in sports, remote diagnostics in the health field, or education or training in all fields.
  • AppleTM Computer released a virtual reality computer program called QuicktimeTM VR that permits the assembly of a series of photographs on a computer screen which can be navigated by moving the computer mouse.
  • One of the advantages of this general approach is that it creates a photorealistic image, while at the same time having small file sizes and streaming downloads that are easy to handle and are well suited for transfer from one computer to another.
  • a series of inward-looking photographs taken of a three-dimensional object from various angles can be sequentially assembled and input to QuicktimeTM VR for viewing as a navigable 360-degree three-dimensional image.
  • the navigation capabilities provide one with the sense of moving around the object in a 360-degree, three-dimensional space, as if one was actually present in the scene.
  • Since the development of Quicktime™ VR, other similar 360-degree virtual reality viewing systems have been produced that can be used to view and navigate a sequential series of photographs. Virtual reality display programs such as Quicktime™ VR require systems for quickly and inexpensively capturing and assembling multiple, inward-looking images of an object taken from various angles.
  • One approach for obtaining the required images is to take a series of photographs from various angles completely surrounding the object.
  • this technique involves mounting a camera on a tripod, and taking a series of photographs of the object while moving the tripod and camera around the object, or alternatively keeping the camera fixed and rotating the object.
  • One available option for assembling the resulting series of photographs for viewing in a virtual reality display program is to use a software program created by AppleTM Computer called QuicktimeTM VR Authoring Studio. To obtain a good quality 360-degree image, this approach often requires a day or more to capture and assemble as many as 300 photographs.
  • the technique produces a very large number of computer files that occupy large amounts of disk space.
  • a better technique for obtaining the required images would be to set up an array of inward-looking digital cameras on tripods surrounding an object and take one photograph from each camera. This approach takes less time.
  • this alone does not solve the problem of assembling the images in a format capable of viewing in a 360-degree virtual reality display program.
  • An object of the present invention is to overcome the above shortcomings by providing a new and improved system for rapidly obtaining high quality, inexpensive 360-degree inward-looking images of three-dimensional objects.
  • a further object of the present invention is to provide a system for obtaining and assembling inward-looking images of a three-dimensional object which, when converted to a format for display using 360-degree virtual reality display software, are small in size and easy to manipulate.
  • Another object of one preferred embodiment of the present invention is to provide a system for almost simultaneously obtaining multiple inward-looking images of a three-dimensional object which can be quickly and easily assembled and converted to a format for display using 360-degree virtual reality display software.
  • the present invention provides an array of multiple spaced apart digital cameras arranged in a pattern surrounding a centrally located three-dimensional object, each camera being connected to a central computer through its universal serial bus (USB) or some similar communication protocol, and including a modified camera driver program installed on the computer to provide individual, virtually simultaneous communication between the computer and the cameras in the array, so as to permit the virtually simultaneous acquisition of images from each of the cameras.
  • the invention further includes image processing software and image assembly software to remove unwanted segments of the captured images and to sort and assemble the images in a format that can be exported for viewing by a 360-degree virtual reality display program.
  • an inward-looking imaging apparatus for acquiring multiple images of a three- dimensional object located in a scene, comprising: an array of multiple cameras located to view the object from various angles; a computer connected to the cameras for remotely controlling the cameras; and camera drivers connected to the computer for enabling the computer to individually communicate with each of the cameras for the purpose of requesting each of the cameras to capture an image of the object and to transmit the image to the computer, the camera drivers being capable of distinguishing the images and matching the images to the cameras used to capture the images upon receipt by the computer.
  • a method for acquiring multiple inward-looking images of a three-dimensional object comprising the steps of: locating the object within an array of multiple cameras, each camera viewing the object from a different angle; connecting the cameras to a computer; providing camera drivers to individually communicate with each of the cameras to request that each of the cameras capture and send an image of the object to the computer; and distinguishing and matching up each of the captured images with each of the cameras used to capture the images.
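The capture-and-match sequence described in this method can be sketched as follows; the `Camera` class, its `capture` method, and the identifier scheme are illustrative assumptions, not part of the disclosure, which specifies only that the drivers can distinguish the images and match them to their cameras.

```python
# Illustrative sketch of the capture loop: communicate with each camera
# individually and record which camera produced which image.

class Camera:
    """Stand-in for one camera in the array, reachable through its driver."""
    def __init__(self, camera_id):
        self.camera_id = camera_id

    def capture(self):
        # A real driver would return pixel data; a placeholder string
        # is enough to show the matching step.
        return f"image-from-{self.camera_id}"

def acquire_all(cameras):
    """Request an image from every camera, matching each image to its source."""
    images = {}
    for cam in cameras:  # individual communication, in the designated order
        images[cam.camera_id] = cam.capture()
    return images
```

In practice the loop order would follow the capture sequence chosen by the camera selection software.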
  • a method for removing background image segments from an image of a scene containing an object comprising the steps of: capturing a reference image of the scene without the object; capturing a final image of the scene containing the object; comparing the final image to the reference image; and removing the background image segments from the final image that are common to both the final image and the reference image, thereby leaving only an image of the object in the scene.
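A minimal sketch of this reference-image comparison, using nested lists of pixel tuples as a stand-in for real image buffers (the patent does not specify a data format, and the fill colour here is an assumption):

```python
# Remove background segments common to a reference image (scene without
# the object) and a final image (scene with the object).

def remove_background(reference, final, fill=(255, 255, 255)):
    """Replace pixels identical in both images with a fill colour,
    leaving only the pixels that differ, i.e. the object."""
    result = []
    for ref_row, fin_row in zip(reference, final):
        row = []
        for ref_px, fin_px in zip(ref_row, fin_row):
            row.append(fill if ref_px == fin_px else fin_px)
        result.append(row)
    return result
```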
  • a method for activating the use of an imaging apparatus connected to a computer for the purpose of collecting a processing fee for each use of the imaging apparatus comprising the steps of: providing the computer with authentication software to control the use of the imaging apparatus; requiring payment of the processing fee in exchange for providing an authorization number; and requiring that the authorization number be entered into the authentication software before the authentication software will permit the imaging apparatus to capture images.
  • a method for activating the use of an imaging apparatus connected to a computer for the purpose of collecting a processing fee for each use of the imaging apparatus, comprising the steps of: providing the computer with authentication software to control the use of the imaging apparatus; providing the authentication software with a counter for indicating an available number of uses of the imaging apparatus; requiring payment of a processing fee for each of the available number of uses indicated by the counter; causing the counter to be reduced by one each time the imaging apparatus is used to capture an image; and causing the authentication software to deactivate the imaging apparatus when the counter reaches zero.
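The counter-based activation scheme reduces to a few lines; the class and method names below are hypothetical, as the patent describes behaviour rather than an interface:

```python
# Per-use counter: each capture decrements the remaining uses, and the
# apparatus is deactivated when none remain.

class UseCounter:
    def __init__(self, uses_purchased):
        self.remaining = uses_purchased

    def capture_allowed(self):
        return self.remaining > 0

    def record_capture(self):
        if self.remaining <= 0:
            raise PermissionError("imaging apparatus deactivated: no uses left")
        self.remaining -= 1
```

Purchasing a processing code number would correspond to resetting `remaining` to the newly purchased total.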
  • a method for adjusting an image of a scene captured by a camera comprising the steps of: pre-viewing the image of the scene to be captured by the camera and recording a desired central co-ordinate for the scene; recording the difference between the desired central co-ordinate for the scene and the centre of the previewed image from the camera; instructing the camera to capture an image of the scene; and using the difference to adjust the captured image of the scene by placing the centre of the captured image at the desired central co-ordinate.
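The adjustment step amounts to shifting every pixel by the recorded offset between the desired centre and the previewed frame's centre, cropping anything pushed outside the frame. A sketch, with function and parameter names as assumptions:

```python
# Shift an image by (dx, dy); pixels moved outside the frame are cropped
# and vacated areas are filled with a fill value.

def recenter(image, dx, dy, fill=(0, 0, 0)):
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy  # source pixel for this output pixel
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = image[sy][sx]
    return out
```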
  • the present invention advantageously provides for the acquisition of high quality, 360-degree, inward-looking images of three-dimensional objects much more rapidly and at a much lower cost than currently available systems, permitting the detailed inspection of a wide variety of three-dimensional objects and people.
  • a further advantage of the present invention is that a 360-degree, inward-looking image of a moving three-dimensional object, such as a golfer swinging a golf club, can be obtained.
  • Another advantage is that the computer file sizes of the resulting images for use in virtual reality display programs are much smaller than those typical of video. Therefore, the files transfer much quicker and activate much faster.
  • Another advantage of the present invention is that it permits imaging of three- dimensional objects varying in size from small pieces of jewellery, to large vehicles, the only limitation being the physical size of the particular installation.
  • Figure 1 is a perspective view of a preferred embodiment of the present invention.
  • Figure 2 is a schematic diagram of a preferred embodiment of the present invention.
  • Figure 3 is a schematic diagram showing the elements associated with the modified camera driver program of the present invention.
  • Figure 4 is a flow chart showing an embodiment of a method of the present invention for acquiring and processing a series of inward-looking images.
  • Figure 5 is a schematic diagram of an alternative preferred embodiment of the present invention.
  • Figure 6 is a flow chart showing an alternative embodiment of a method of the present invention for acquiring and processing a series of inward-looking images.
  • a preferred arrangement of the present inward-looking imaging system 10 which comprises an array of tripods 12, each supporting multiple cameras 14.
  • the applicant has used an equally spaced apart array of sixteen tripods 12, each supporting four equally spaced apart cameras 14, for a total array of sixty-four cameras.
  • Cameras 14 can be any suitable digital imaging devices, such as digital still cameras or digital video cameras, equipped with a digital image sensor such as a charge-coupled device (CCD) sensor or a complementary metal-oxide-semiconductor (CMOS) sensor.
  • cameras 14 are remotely controllable by a central computer 40 through a communication protocol such as a universal serial bus (USB), a serial or parallel port, Firewire, Small Computer System Interface (SCSI), a wireless protocol such as Bluetooth, or some similar communication protocol that permits rapid communication between computers and peripheral devices such as digital cameras or digital video cameras.
  • the spaced apart array of cameras 14 is located to completely surround a centrally located object 16 and cameras 14 are arranged vertically on tripods 12. Those skilled in the art will appreciate that cameras 14 need not necessarily completely surround object 16, but in some cases need only partially surround object 16.
  • barrier 20 is erected circumferentially around centrally located object 16.
  • barrier 20 is a curtain, preferably suspended on rods 18 connected between tripods 12, but any other suitable manner of similarly suspending curtain 20 would be acceptable.
  • Barrier 20 is preferably made from a uniform-coloured, non-reflective material provided with tiny holes 22 or slits through which cameras 14 can view object 16. Any suitable material can be used for barrier 20, including self-supporting, rigid materials such as metal or wood.
  • USB ports (not shown) of cameras 14 are each connected by cables 24 to a central USB hub or hubs 26 that is/are in turn connected to the USB port of a central computer 40 (see Figure 2).
  • USB hub 26 serves to connect the USB ports of all cameras 14 to computer 40 through a single line permitting rapid, individual communication between cameras 14 and computer 40.
  • Other suitable high-speed communication protocols include serial or parallel ports, Firewire, Bluetooth wireless, and Small Computer System Interface (SCSI).
  • Some of these protocols, Bluetooth wireless for example, do not require the use of a hub.
  • a modified low-level camera driver program 45 is installed on computer 40 and is used to communicate individually with and differentiate between cameras 14.
  • Camera drivers are low-level computer programs that define the rules for communication between digital cameras or digital video cameras and computers. Camera drivers are unique to each make and model of digital camera.
  • One limitation of the standard camera driver is that if more than one identical camera 14 is connected to computer 40 the driver will not be able to identify and communicate individually with each of cameras 14.
  • camera drivers 45 are modified to permit direct, individual communication between computer 40 and each of cameras 14 so as to permit almost simultaneous capture of images from each of cameras 14 in the array.
  • the images received from each of cameras 14 are automatically identified and assembled by computer 40 using a central control program 60 that communicates with cameras 14 through drivers 45.
  • Central control program 60 includes image assembler software 50 that assembles, organizes and exports the images in accordance with defined protocols such as QuicktimeTM API (application programmer interface), a low-level software toolkit used for assembling images and exporting them in a format for display using a virtual reality display program, such as QuicktimeTM VR.
  • the assembled images could also be converted and exported in a format for display in other virtual reality viewers.
  • driver programs 45 that permit individual communication between computer 40 and each of cameras 14 in the array, will now be described by referring to Figure 3.
  • As each camera 14 is connected to computer 40, it supplies a camera identifier 210 to a device enumerator 215, which is part of the operating system of computer 40.
  • Device enumerator 215 loads camera identifier 210 into an operating system hardware tree 220 and assigns the appropriate hardware resources such as input/output (I/O) ports, interrupt requests (IRQs), direct memory access (DMA) channels, and memory locations.
  • An operating system hardware manager 225 determines which driver is required for the particular camera and loads that driver into computer 40.
  • drivers 45 are modified by assigning each of cameras 14 a unique camera driver identifier corresponding to camera identifier 210 stored in the operating system hardware tree 220. Once identified and distinguished by drivers 45, computer 40 is able to individually communicate with, and distinguish images received from, each of cameras 14.
  • central control program 60 includes imaging software 62 for manipulating the image obtained from each of cameras 14 prior to capture.
  • imaging software 62 includes manual hole selecting software 64 and cut and paste software 66.
  • Hole selecting software 64 is used to manually specify the number and location of each hole 22 on the image to be obtained from each of cameras 14, and cut and paste software 66 is used to copy the colour of barrier 20 from adjacent the location of each hole 22 and paste that colour over the location of each hole 22 on the image for each camera.
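A simplified sketch of the cut-and-paste operation, treating each hole as a single pixel coordinate; in the system described, a hole would cover a small region and the patched area would then be blurred slightly, which this sketch omits:

```python
# Copy the barrier colour from a pixel adjacent to each hole and paste it
# over the hole's location. Hole positions are (x, y) coordinates, as
# selected manually with the hole selecting software.

def cover_holes(image, holes):
    for x, y in holes:
        # Sample the barrier colour just left of the hole
        # (or just right of it, if the hole sits at the left edge).
        sample = image[y][x - 1] if x > 0 else image[y][x + 1]
        image[y][x] = sample
    return image
```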
  • the covered-up image areas corresponding to holes 22 are blurred slightly to remove any visible evidence of the cut and paste process.
  • a technique called chromakeying can be used to create special effects in the final virtual reality image.
  • a dark or black barrier 20 is used for light coloured objects while a light coloured or white barrier 20 is used with dark objects.
  • a bright green or bright blue barrier 20 is used. The bright green or blue colour is specifically detected by imaging software 62 and is replaced on all captured frames by a uniform background of any colour or texture.
  • object 16 will appear to float as if suspended in midair in front of the inserted background.
  • the green or blue background can be replaced by a photograph depicting a scene to coordinate with the object being photographed. For example, an alpine image can be used when photographing a pair of skis.
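A minimal chromakey sketch of the replacement described above: pixels near the key colour are swapped for the corresponding pixels of a substitute background. The tolerance value is an assumption, as the patent does not specify how the bright green or blue is detected:

```python
# Replace every pixel close to the key colour with the matching pixel
# from a substitute background image.

def chromakey(image, background, key=(0, 255, 0), tol=60):
    def is_key(px):
        return all(abs(c - k) <= tol for c, k in zip(px, key))
    return [
        [bg_px if is_key(px) else px
         for px, bg_px in zip(row, bg_row)]
        for row, bg_row in zip(image, background)
    ]
```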
  • an alternative method of background removal is used.
  • a reference image of the scene, without object 16 is captured by each camera 14.
  • a second final image of the scene, including object 16 is captured by each camera 14.
  • Background removal software 63 is used to compare the final captured images including object 16 to the corresponding reference images without object 16, and eliminate any unwanted background from the final captured images including object 16 that is common to both the reference images and the final captured images, thereby leaving only object 16 in the final captured images.
  • the backgrounds are then filled in with any desired colour or a photograph. If necessary, edge detection software 65 may be used to recognize the exterior edges of object 16 in the final captured images, before the backgrounds are filled in.
  • This method of background removal is faster than the above-described method using hole selecting software 64 and cut and paste software 66, and is more efficient than chromakeying for background replacement.
  • One further advantage of this method is that it is unnecessary to use a uniform barrier or curtain 20 behind which cameras 14 are secluded. Any background can be used, so long as the background does not change between the taking of the reference images without object 16 and the captured images containing object 16.
  • background removal software 63 compares the reference images without object 16 to the final captured images including object 16 on a pixel-by-pixel basis. The comparison is based on pixel location and colour: pixels in the same location in both images and having the same colour are removed from the final captured images.
  • Since it is possible for there to be minor variations in colour between subsequent images taken from the same camera, background removal software 63 also includes a colour adjustment feature, which permits the user to adjust the colour of the final captured images to match the colour of the reference images so as to ensure that pixels common to both images are removed from the final captured images.
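One plausible form of such a colour adjustment, shown here as a global gain that matches the mean intensity of the final image to the reference image. The patent describes a user-controlled adjustment, so this automatic, greyscale version is only illustrative:

```python
# Scale the final image so its mean intensity matches the reference,
# reducing exposure drift before the pixel-by-pixel comparison.

def match_mean(final, reference):
    f_mean = sum(sum(r) for r in final) / (len(final) * len(final[0]))
    r_mean = sum(sum(r) for r in reference) / (len(reference) * len(reference[0]))
    gain = r_mean / f_mean if f_mean else 1.0
    return [[min(255, round(px * gain)) for px in row] for row in final]
```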
  • Each of cameras 14 can be aimed manually to a common central location, but this takes time and has limited accuracy. Accordingly, the present invention uses a dejittering process to ensure that all captured images have the same central co- ordinates. This is done in two steps. First, a ping pong ball, or some other round object that looks the same from all sides, is placed in the centre of imaging system 10 at the intended location of object 16. Each camera 14 is manually aimed so that the ping pong ball is located approximately in the centre of each camera's image frame. Second, an image from each camera 14 is pre-viewed using dejittering software 67. The pre-viewed image is adjusted by moving cross hairs appearing on the pre-viewed image screen to the exact centre of the ping pong ball.
  • The difference between the centre of the manually aimed image frame and the cross hairs on the pre-viewed image from each camera is recorded by dejittering software 67 and used to adjust each captured image of object 16, so as to locate the centre of the camera's image frame at the same selected central co-ordinates in each captured image. Image elements moved outside the image frame of the camera as the result of this adjustment are cropped. This "dejittering" process ensures that even though the manually aimed cameras may not all be pointing at exactly the same co-ordinates, the image of object 16 in each captured image is always centred on the same co-ordinates.
  • central control program 60 includes camera selection software 68 for specifying which of cameras 14 are to be activated for image capture and in what order.
  • the selection of images can be made sequentially from cameras one through sixty-four, or alternatively, the selection can be made in a predetermined pattern and the images later sorted for assembly in sequential order.
  • the ability to capture images in a predetermined order can be useful if object 16 moves during the image capture process, as for example a golfer swinging a golf club. In this case, capturing images in non-sequential order can assist in giving the resulting three-dimensional virtual reality image a more defined, solid appearance.
  • Camera selection software 68 also permits a user to specify whether cameras 14 are oriented horizontally or vertically.
  • central control program 60 includes morphing software 75 which creates intermediate composite images using images taken from cameras 14 on adjacent tripods 12.
  • The resulting intermediate composite images represent simulated views of object 16 taken from a location between the adjacent tripods 12. The result is that fewer cameras are needed in the array to create a smooth final image for use in a virtual reality display program.
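A true morph warps image geometry, but a simple cross-dissolve between two adjacent views conveys the idea of synthesizing an intermediate frame. This sketch uses greyscale values and is not the patent's algorithm:

```python
# Blend two equally sized greyscale images taken from adjacent viewpoints
# to approximate a view from between them.

def cross_dissolve(left, right, t=0.5):
    """t=0 returns the left view, t=1 the right view."""
    return [
        [round((1 - t) * l + t * r) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]
```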
  • the central control program 60 will include authentication software 70.
  • To initiate the image capture process, a user must contact a central location to activate authentication software 70 by obtaining an authorization number in exchange for payment of an image processing fee. This approach allows the applicant to make the invention available to remote users while at the same time maintaining the ability to collect user fees for image processing.
  • the user can be allowed to save an assembled demo image, including a watermark, before authentication software 70 is engaged and the processing fee is paid. This will permit a user to preview the assembled image or show it to clients before incurring a processing fee. Once the central location is contacted and the processing fee paid, the watermark is removed.
  • authentication software 70 includes a use counter indicating an available number of uses. Each time a final assembled image is processed for export to a virtual reality viewer, the use counter subtracts one from the total available uses, until none remain.
  • a user can obtain additional uses by purchasing a processing code number, which is used by authentication software 70 to reset the use counter with a desired number of uses.
  • the processing code number is a unique number which designates the number of processing units purchased and specifies the individual computer on which it can be used.
  • the processing code number is obtained by contacting the central location, requesting the desired number of uses, and providing a unique computer identifier number generated by authentication software 70.
  • the computer identifier number is generated by authentication software 70 from a serial number or numbers read from devices connected to computer 40, such as the computer's mother board, central processing unit, or hard drive to ensure that the requested processing uses are made available to only one computer.
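A hedged sketch of deriving such an identifier by hashing hardware serial numbers. The serial strings and the use of SHA-256 are assumptions; the patent only states that serial numbers from devices such as the motherboard, CPU, or hard drive are combined into a unique computer identifier:

```python
import hashlib

def computer_identifier(serials):
    """Combine hardware serial numbers into a stable, order-independent
    identifier that ties processing uses to one computer."""
    digest = hashlib.sha256("|".join(sorted(serials)).encode()).hexdigest()
    return digest[:16]  # shortened so a user could transcribe it by phone
```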
  • FIG. 4 there is shown a flow chart illustrating a method for acquiring and processing 360-degree inward-looking images in accordance with one embodiment of the present invention.
  • the steps include, a step 110, of installing an array of spaced apart cameras 14 surrounding a centrally located object 16, and a step 115, of covering cameras 14 with a non-reflective barrier or curtain 20, leaving only small holes or slits 22 through which each camera 14 can view object 16.
  • the method further includes, a step 123, of assigning each camera 14 a sequential location number and selecting the image capture order, and a step 125, of manually aiming cameras 14 toward object 16 and using dejittering software 67 to ensure object 16 is centrally located in each image captured by each camera 14. Additional steps include, a step
  • a step 130 of previewing the images to be obtained from each camera 14 using imaging software 62 and identifying the location of each opposing hole or slit 22, and a step 135, of individually communicating with all of the cameras 14, in the designated order, requesting each to activate and send an image to computer 40.
  • a step 140 of removing all previously identified camera viewing holes or slits 22 from each image
  • a step 145 of identifying and sorting the images as to order and location
  • a step 150 of temporarily storing the images received on computer 40.
  • the images are assembled and converted to a single data file in a format that can be exported and viewed in a virtual reality display program.
  • an additional step 147 may be added which uses morphing software 75 to create additional composite images representing simulated views of the object from between adjacent cameras 14.
  • the method of the embodiment of the present invention shown in Figure 4 may also include a further step 170, of obtaining an authentication number, in exchange for payment of an image processing fee, either prior to initiating the image capture process or before a watermark is removed from the final assembled image.
  • steps 115, 130, and 140, as shown in Figure 4 are replaced by alternative steps which include, a step 132, of obtaining a reference image from each of cameras 14 without object 16 in the scene, a step 137, of using background removal software 63 to compare captured images of the scene including object 16 to the reference images without object 16 and to eliminate any unwanted background from the captured images that is common to both the reference images and the captured images.
  • edge detection software 65 may be used to recognize the exterior edges of object 16, and the backgrounds are filled in with any desired colour or a selected photograph.
  • the inward-looking imaging apparatus and method of the present invention have applications in a wide number of areas requiring the acquisition of multiple inward-looking photographs for use in interactive 360-degree virtual reality displays, of which the following is a brief, but not exhaustive, list:

Abstract

An inward-looking imaging apparatus and method for capturing 360-degree inward-looking images of a three-dimensional object for use in a virtual reality display system comprises an array of multiple digital cameras for simultaneously recording multiple images of the object from multiple angles, a computer, including a modified camera driver program for simultaneously communicating with the cameras through a communication protocol, and imaging software for processing the images to remove unwanted background image segments. There is further included image assembling software for sorting and assembling the images and converting them to a single computer file in a format that can be displayed as a navigable 360-degree image on a virtual reality display program. Image processing can be controlled from a central location through the provision of an authorization number in exchange for payment of a fee prior to initiation of the image capture process.

Description

INWARD-LOOKING IMAGING SYSTEM
The present invention relates to an imaging system and in particular to an imaging system for obtaining multiple inward-looking images of a three-dimensional object for use in a 360-degree virtual reality display system.
BACKGROUND OF THE INVENTION
Inward-looking, 360-degree images of three-dimensional objects can be useful for a number of purposes, including remote display of artistic works such as sculptures, display of historical or archeological objects, display of retail merchandise on a web site, frozen-in-time images of moving objects in sports, remote diagnostics in the health field, or education or training in all fields.
It is possible to model a three-dimensional object by painstakingly applying colours and textures to a three-dimensional computer-generated wireframe model of the object. This process is slow and costly, as it requires skilled personnel and expensive, high-end computers to complete. Using wireframe modeling, it can take more than a day to create a realistic 360-degree, three-dimensional model of a single object. Furthermore, this approach does not produce a photorealistic image, as it is not capable of accurately reproducing all of the object's details.
In approximately 1995, Apple™ Computer released a virtual reality computer program called Quicktime™ VR that permits the assembly of a series of photographs on a computer screen which can be navigated by moving the computer mouse. One of the advantages of this general approach is that it creates a photorealistic image, while at the same time having small file sizes and streaming downloads that are easy to handle and are well suited for transfer from one computer to another. A series of inward-looking photographs taken of a three-dimensional object from various angles can be sequentially assembled and input to Quicktime™ VR for viewing as a navigable 360-degree three-dimensional image. The navigation capabilities provide one with the sense of moving around the object in a 360-degree, three-dimensional space, as if one was actually present in the scene. Since the development of Quicktime™ VR, other similar 360-degree virtual reality viewing systems have been produced that can be used to view and navigate a sequential series of photographs. Virtual reality display programs such as Quicktime™ VR require systems for quickly and inexpensively capturing and assembling multiple, inward-looking images of an object taken from various angles.
One approach for obtaining the required images is to take a series of photographs from various angles completely surrounding the object. Generally, this technique involves mounting a camera on a tripod, and taking a series of photographs of the object while moving the tripod and camera around the object, or alternatively keeping the camera fixed and rotating the object. One available option for assembling the resulting series of photographs for viewing in a virtual reality display program is to use a software program created by Apple™ Computer called Quicktime™ VR Authoring Studio. To obtain a good quality 360-degree image, this approach often requires a day or more to capture and assemble as many as 300 photographs. Furthermore, the technique produces a very large number of computer files that occupy large amounts of disk space.
A better technique for obtaining the required images would be to set up an array of inward-looking digital cameras on tripods surrounding an object and take one photograph from each camera. This approach takes less time. However, there is no available, inexpensive method for simultaneously communicating with, capturing, and assembling multiple images from multiple cameras. Although not in common use, it would be possible to connect all of the cameras to a large switchboard, which scans the camera outputs and individually selects images from each camera. Switchboards can select images at up to 60 frames per second, but this is not sufficient to freeze the action of a moving object and requires the added expense of the switchboard. Moreover, this alone does not solve the problem of assembling the images in a format capable of viewing in a 360-degree virtual reality display program.
It is clear from the above that the techniques, skills and costs associated with obtaining and assembling inward-looking images of three-dimensional objects, suitable for use in 360-degree virtual reality display applications, could be significantly improved with the availability of improved inward-looking imaging systems.
BRIEF SUMMARY OF THE INVENTION
An object of the present invention is to overcome the above shortcomings by providing a new and improved system for rapidly obtaining high quality, inexpensive 360-degree inward-looking images of three-dimensional objects.
A further object of the present invention is to provide a system for obtaining and assembling inward-looking images of a three-dimensional object which, when converted to a format for display using 360-degree virtual reality display software, are small in size and easy to manipulate.
Another object of one preferred embodiment of the present invention is to provide a system for almost simultaneously obtaining multiple inward-looking images of a three-dimensional object which can be quickly and easily assembled and converted to a format for display using 360-degree virtual reality display software.
Briefly, these objectives are achieved by the present invention, which provides an array of multiple spaced apart digital cameras arranged in a pattern surrounding a centrally located three-dimensional object, each camera being connected to a central computer through its universal serial bus (USB) or some similar communication protocol, and including a modified camera driver program installed on the computer to provide individual, virtually simultaneous communication between the computer and the cameras in the array, so as to permit the virtually simultaneous acquisition of images from each of the cameras. The invention further includes image processing software and image assembly software to remove unwanted segments of the captured images and to sort and assemble the images in a format that can be exported for viewing by a 360-degree virtual reality display program.
In accordance with one aspect of the present invention there is provided an inward-looking imaging apparatus for acquiring multiple images of a three-dimensional object located in a scene, comprising: an array of multiple cameras located to view the object from various angles; a computer connected to the cameras for remotely controlling the cameras; and camera drivers connected to the computer for enabling the computer to individually communicate with each of the cameras for the purpose of requesting each of the cameras to capture an image of the object and to transmit the image to the computer, the camera drivers being capable of distinguishing the images and matching the images to the cameras used to capture the images upon receipt by the computer.
In accordance with another aspect of the present invention there is provided a method for acquiring multiple inward-looking images of a three-dimensional object comprising the steps of: locating the object within an array of multiple cameras, each camera viewing the object from a different angle; connecting the cameras to a computer; providing camera drivers to individually communicate with each of the cameras to request that each of the cameras capture and send an image of the object to the computer; and distinguishing and matching up each of the captured images with each of the cameras used to capture the images.
In accordance with a further aspect of the present invention there is provided a method for removing background image segments from an image of a scene containing an object, comprising the steps of: capturing a reference image of the scene without the object; capturing a final image of the scene containing the object; comparing the final image to the reference image; and removing the background image segments from the final image that are common to both the final image and the reference image, thereby leaving only an image of the object in the scene.
In accordance with still another aspect of the present invention there is provided a method for activating the use of an imaging apparatus connected to a computer for the purpose of collecting a processing fee for each use of the imaging apparatus, comprising the steps of: providing the computer with authentication software to control the use of the imaging apparatus; requiring payment of the processing fee in exchange for providing an authorization number; and requiring that the authorization number be entered into the authentication software before the authentication software will permit the imaging apparatus to capture images.
In accordance with another aspect of the present invention there is provided a method for activating the use of an imaging apparatus connected to a computer, for the purpose of collecting a processing fee for each use of the imaging apparatus, comprising the steps of: providing the computer with authentication software to control the use of the imaging apparatus; providing the authentication software with a counter for indicating an available number of uses of the imaging apparatus; requiring payment of a processing fee for each of the available number of uses indicated by the counter; causing the counter to be reduced by one each time the imaging apparatus is used to capture an image; and causing the authentication software to deactivate the imaging apparatus when the counter reaches zero.
In accordance with a further aspect of the present invention there is provided a method for adjusting an image of a scene captured by a camera comprising the steps of: pre-viewing the image of the scene to be captured by the camera and recording a desired central co-ordinate for the scene; recording the difference between the desired central co-ordinate for the scene and the centre of the previewed image from the camera; instructing the camera to capture an image of the scene; and using the difference to adjust the captured image of the scene by placing the centre of the captured image at the desired central co-ordinate.
The present invention advantageously provides for the acquisition of high quality, 360-degree, inward-looking images of three-dimensional objects much more rapidly and at a much lower cost than currently available systems, permitting the detailed inspection of a wide variety of three-dimensional objects and people. A further advantage of the present invention is that a 360-degree, inward-looking image of a moving three-dimensional object, such as a golfer swinging a golf club, can be obtained. Another advantage is that the computer file sizes of the resulting images for use in virtual reality display programs are much smaller than those typical of video. Therefore, the files transfer and display much more quickly.
Another advantage of the present invention is that it permits imaging of three- dimensional objects varying in size from small pieces of jewellery, to large vehicles, the only limitation being the physical size of the particular installation.
Further objects and advantages of the present invention will be apparent from the following description, wherein preferred embodiments of the invention are clearly shown.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be further understood from the following description with reference to the drawings in which:
Figure 1 is a perspective view of a preferred embodiment of the present invention.
Figure 2 is a schematic diagram of a preferred embodiment of the present invention.
Figure 3 is a schematic diagram showing the elements associated with the modified camera driver program of the present invention.
Figure 4 is a flow chart showing an embodiment of a method of the present invention for acquiring and processing a series of inward-looking images.
Figure 5 is a schematic diagram of an alternative preferred embodiment of the present invention.
Figure 6 is a flow chart showing an alternative embodiment of a method of the present invention for acquiring and processing a series of inward-looking images.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE INVENTION
Referring to Figure 1, a preferred arrangement of the present inward-looking imaging system 10 is shown which comprises an array of tripods 12, each supporting multiple cameras 14. In a preferred embodiment of the present invention, the applicant has used an equally spaced apart array of sixteen tripods 12, each supporting four equally spaced apart cameras 14, for a total array of sixty-four cameras. Those skilled in the art will readily appreciate that other arrangements including more or fewer cameras and/or tripods, which need not necessarily be equally spaced apart, can be used in the present arrangement. Cameras 14 can be any suitable digital imaging devices, such as digital still cameras or digital video cameras, equipped with a digital image sensor such as a charge-coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) sensor. One essential requirement for cameras 14 is that their operable functions be remotely controllable by a central computer 40 through a communication protocol such as a universal serial bus (USB), a serial or parallel port, Firewire, Small Computer System Interface (SCSI), a wireless protocol such as Bluetooth, or some similar communication protocol that permits rapid communication between computers and peripheral devices such as digital cameras or digital video cameras.
As shown in Figure 1, in one preferred arrangement of the present invention, the spaced apart array of cameras 14 is located to completely surround a centrally located object 16 and cameras 14 are arranged vertically on tripods 12. Those skilled in the art will appreciate that cameras 14 need not necessarily completely surround object 16, but in some cases need only partially surround object 16.
A barrier 20 is erected circumferentially around centrally located object 16. In the illustrated case shown in Figure 1, barrier 20 is a curtain, preferably suspended on rods 18 connected between tripods 12, but any other suitable manner of similarly suspending curtain 20 would be acceptable. Barrier 20 is preferably made from a uniform-coloured, non-reflective material provided with tiny holes 22 or slits through which cameras 14 can view object 16. Any suitable material can be used for barrier 20, including self-supporting, rigid materials such as metal or wood.
In the preferred arrangement shown in Figure 1, the USB ports (not shown) of cameras 14 are each connected by cables 24 to a central USB hub or hubs 26 that is/are in turn connected to the USB port of a central computer 40 (see Figure 2). USB hub 26 serves to connect the USB ports of all cameras 14 to computer 40 through a single line permitting rapid, individual communication between cameras 14 and computer 40. As mentioned above, those skilled in the art will appreciate that use of the USB for individual communication between computer 40 and cameras 14 is not an essential feature of the present invention and that any high speed communication protocol will be acceptable. Examples of such currently existing high speed communication protocols include serial or parallel ports, Firewire, Bluetooth wireless, and Small Computer System Interface (SCSI). Some of these systems, Bluetooth wireless for example, do not require use of a hub.
Referring to Figure 2, a modified low-level camera driver program 45 is installed on computer 40 and is used to communicate individually with and differentiate between cameras 14. Camera drivers are low-level computer programs that define the rules for communication between digital cameras or digital video cameras and computers. Camera drivers are unique to each make and model of digital camera. One limitation of the standard camera driver, however, is that if more than one identical camera 14 is connected to computer 40 the driver will not be able to identify and communicate individually with each of cameras 14. In the present invention, camera drivers 45 are modified to permit direct, individual communication between computer 40 and each of cameras 14 so as to permit almost simultaneous capture of images from each of cameras 14 in the array.
The images received from each of cameras 14 are automatically identified and assembled by computer 40 using a central control program 60 that communicates with cameras 14 through drivers 45. Central control program 60 includes image assembler software 50 that assembles, organizes and exports the images in accordance with defined protocols such as Quicktime™ API (application programmer interface), a low-level software toolkit used for assembling images and exporting them in a format for display using a virtual reality display program, such as Quicktime™ VR. The assembled images could also be converted and exported in a format for display in other virtual reality viewers.
In practice, image capture from all cameras 14 will take place virtually simultaneously. Depending on the speed of computer 40, and the speed of the particular communication protocol used, images captured from individual cameras will appear to have been taken simultaneously, and in any case far faster than the 60 frames per second possible with existing switchboards. With today's high speed computers, having clock speeds above 800 megahertz, the elapsed time between the first and last images captured is short enough to freeze the action of most fast-moving three-dimensional objects.
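The near-simultaneous dispatch described above occurs at the driver level; purely as an illustration of overlapping capture requests, it can be sketched in Python by triggering each camera from its own thread. The callable-camera model and all names here are assumptions for the sketch, not the patent's actual driver mechanism:

```python
import threading

def capture_all(cameras):
    """Trigger every camera from its own thread so the capture
    requests overlap rather than run strictly one after another."""
    images = [None] * len(cameras)

    def trigger(i, cam):
        images[i] = cam()  # a camera is modelled here as a callable

    threads = [threading.Thread(target=trigger, args=(i, cam))
               for i, cam in enumerate(cameras)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait until every image has been received
    return images

# toy cameras that simply report their own index
cameras = [lambda i=i: f"image-{i}" for i in range(4)]
frames = capture_all(cameras)
```

Because each result is written into a fixed slot, the returned list stays in camera order regardless of which capture finishes first.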
Modifications to driver programs 45, that permit individual communication between computer 40 and each of cameras 14 in the array, will now be described by referring to Figure 3. As each camera 14 is connected to computer 40 it supplies a camera identifier 210 to a device enumerator 215 which is part of the operating system of computer 40. Device enumerator 215 loads camera identifier 210 into an operating system hardware tree 220 and assigns the appropriate hardware resources such as input/output (I/O) ports, interrupt requests (IRQs), direct memory access (DMA) channels, and memory locations. An operating system hardware manager 225 determines which driver is required for the particular camera and loads that driver into computer 40. In a standard, unmodified system, the assigned driver would not be able to distinguish one camera from another identical camera within the array. Consequently individual communication with cameras 14 and differentiation of images sent from each of cameras 14 would not be possible. In the present invention, drivers 45 are modified by assigning each of cameras 14 a unique camera driver identifier corresponding to camera identifier 210 stored in the operating system hardware tree 220. Once identified and distinguished by drivers 45, computer 40 is able to individually communicate with, and distinguish images received from, each of cameras 14.
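The identifier scheme described above can be modelled in a few lines: a registry that assigns otherwise identical cameras distinct handles and matches returned images back to their source. This is a minimal sketch with hypothetical names, not the actual modified driver code:

```python
class DriverRegistry:
    """Toy model of the modified driver layer: each connected camera
    reports a hardware identifier and receives a unique handle."""

    def __init__(self):
        self._drivers = {}      # camera_id -> record with assigned handle
        self._next_handle = 0

    def enumerate(self, camera_id):
        """Register a newly connected camera; re-connecting an already
        known camera returns its existing handle."""
        if camera_id in self._drivers:
            return self._drivers[camera_id]["handle"]
        handle = self._next_handle
        self._next_handle += 1
        self._drivers[camera_id] = {"handle": handle}
        return handle

    def match_image(self, handle, image):
        """Tag a received image with the camera that captured it."""
        for cam_id, rec in self._drivers.items():
            if rec["handle"] == handle:
                return {"camera_id": cam_id, "image": image}
        raise KeyError("unknown camera handle")

registry = DriverRegistry()
h0 = registry.enumerate("CAM-USB-0001")
h1 = registry.enumerate("CAM-USB-0002")   # identical model, distinct id
tagged = registry.match_image(h1, "raw-image-bytes")
```

The key property is that two cameras of the same make and model still receive different handles, which is exactly what the unmodified driver cannot provide.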
Referring again to Figure 2, central control program 60 includes imaging software 62 for manipulating the image obtained from each of cameras 14 prior to capture. In the arrangement described above and shown in Figure 1 , each of the images of object 16 captured by cameras 14 will also contain unwanted images of holes 22 in the opposing barrier. To eliminate these unwanted image segments, barrier 20 is first made of a uniform-coloured, non-reflective material. Second, imaging software 62 includes manual hole selecting software 64 and cut and paste software 66. Hole selecting software 64 is used to manually specify the number and location of each hole 22 on the image to be obtained from each of cameras 14, and cut and paste software 66 is used to copy the colour of barrier 20 from adjacent the location of each hole 22 and paste that colour over the location of each hole 22 on the image for each camera. Finally, the covered-up image areas corresponding to holes 22 are blurred slightly to remove any visible evidence of the cut and paste process.
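The cut-and-paste step might look like the following toy sketch, where an image is a 2D list of colour labels and the hole positions are assumed to have been specified manually beforehand (the final blurring pass is omitted):

```python
def patch_holes(image, hole_locations):
    """Cover each hole pixel with the barrier colour sampled
    immediately beside it, leaving the original image untouched."""
    patched = [row[:] for row in image]
    for (r, c) in hole_locations:
        # sample the colour to the left, or to the right at the edge
        src = c - 1 if c > 0 else c + 1
        patched[r][c] = patched[r][src]
    return patched

barrier = "grey"
frame = [[barrier, barrier, "hole"],
         [barrier, "object", barrier]]
clean = patch_holes(frame, [(0, 2)])
```

In practice the sampled region would be larger than one pixel and the patched area blurred, as the description notes, but the copy-from-neighbour principle is the same.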
A technique called chromakeying can be used to create special effects in the final virtual reality image. To emphasize the object being photographed and make it stand out against the background, a dark or black barrier 20 is used for light coloured objects while a light coloured or white barrier 20 is used with dark objects. If a background of uniform colour is desired, a bright green or bright blue barrier 20 is used. The bright green or blue colour is specifically detected by imaging software 62 and is replaced on all captured frames by a uniform background of any colour or texture. In the resulting final virtual reality image, object 16 will appear to float as if suspended in midair in front of the inserted background. If desired, the green or blue background can be replaced by a photograph depicting a scene to coordinate with the object being photographed. For example, an alpine image can be used when photographing a pair of skis.
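At its simplest, chromakey replacement amounts to swapping key-coloured pixels for the corresponding backdrop pixels. A minimal sketch on a toy image, with illustrative colour labels (real chromakeying matches a colour range rather than an exact value):

```python
def chromakey(image, key_colour, background):
    """Replace every key-coloured pixel with the corresponding
    background pixel, leaving object pixels untouched."""
    return [[background[r][c] if image[r][c] == key_colour else image[r][c]
             for c in range(len(image[r]))]
            for r in range(len(image))]

green = "bright-green"
frame = [[green, "object"],
         [green, green]]
backdrop = [["alpine", "alpine"],
            ["alpine", "alpine"]]
composited = chromakey(frame, green, backdrop)
```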
In an alternative preferred embodiment of the invention, shown in Figure 5, an alternative method of background removal is used. In this method, a reference image of the scene, without object 16, is captured by each camera 14. Once object 16 is located in the scene, a second final image of the scene, including object 16, is captured by each camera 14. Background removal software 63 is used to compare the final captured images including object 16 to the corresponding reference images without object 16, and eliminate any unwanted background from the final captured images including object 16 that is common to both the reference images and the final captured images, thereby leaving only object 16 in the final captured images. The backgrounds are then filled in with any desired colour or a photograph. If necessary, edge detection software 65 may be used to recognize the exterior edges of object 16 in the final captured images, before the backgrounds are filled in. This method of background removal is faster than the above-described method using hole selecting software 64 and cut and paste software 66, and is more efficient than chromakeying for background replacement. One further advantage of this method is that it is unnecessary to use a uniform barrier or curtain 20 behind which cameras 14 are secluded. Any background can be used, so long as the background does not change between the taking of the reference images without object 16 and the captured images containing object 16.
In the above-described method of background removal, background removal software 63 compares the reference images without object 16, to the final captured images including object 16, on a pixel-by-pixel basis. The comparison is based on pixel location and colour. Pixels in the same location in both images and having the same colour, are removed from the final captured images. Since it is possible for there to be minor variations in colour between subsequent images taken from the same camera, background removal software 63 also includes a colour adjustment feature, which permits the user to adjust the colour of the final captured images to match the colour of the reference images so as to ensure that pixels common to both images are removed from the final captured images.
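The pixel-by-pixel comparison might be sketched as follows, using RGB tuples and `None` to mark removed background pixels. The per-channel tolerance stands in for the colour adjustment feature and its value is an assumption for the sketch:

```python
def remove_background(final, reference, tolerance=10):
    """Mark as transparent (None) every pixel whose colour matches
    the reference image at the same location, within a per-channel
    tolerance that absorbs minor frame-to-frame colour variation."""
    result = []
    for frow, rrow in zip(final, reference):
        row = []
        for fpx, rpx in zip(frow, rrow):
            same = all(abs(a - b) <= tolerance for a, b in zip(fpx, rpx))
            row.append(None if same else fpx)
        result.append(row)
    return result

reference = [[(120, 120, 120), (120, 120, 120)]]      # scene without object
final = [[(118, 121, 119), (200, 40, 40)]]            # slight noise, then object
cutout = remove_background(final, reference)
```

Only the pixel that differs beyond the tolerance survives, leaving the object isolated for the subsequent background fill.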
Before any images are captured it is important to accurately aim all cameras 14 toward the same central coordinates of the intended location of object 16. This central location becomes the axis around which all of the captured images are centred. If all of cameras 14 are not pointed to exactly the same coordinates, the resulting image, once assembled, will appear jittery and non-solid.
Each of cameras 14 can be aimed manually to a common central location, but this takes time and has limited accuracy. Accordingly, the present invention uses a dejittering process to ensure that all captured images have the same central co-ordinates. This is done in two steps. First, a ping pong ball, or some other round object that looks the same from all sides, is placed in the centre of imaging system 10 at the intended location of object 16. Each camera 14 is manually aimed so that the ping pong ball is located approximately in the centre of each camera's image frame. Second, an image from each camera 14 is pre-viewed using dejittering software 67. The pre-viewed image is adjusted by moving cross hairs appearing on the pre-viewed image screen to the exact centre of the ping pong ball. The difference between the centre of the manually aimed image frame and the cross hairs on the pre-viewed image from each camera is recorded by dejittering software 67 and used to adjust each captured image of object 16, so as to locate the centre of the camera's image frame at the same selected central co-ordinates in each captured image. Image elements moved outside the image frame of the camera as the result of this adjustment are cropped. This "dejittering" process ensures that even though the manually aimed cameras may not all be pointing at exactly the same coordinates, the image of object 16 in each captured image is always centred on the same co-ordinates.
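The recorded offset correction amounts to translating the captured frame and cropping whatever leaves it. A toy sketch, assuming the offset has already been converted to whole pixels:

```python
def dejitter(image, dx, dy, fill=None):
    """Shift the image by the recorded offset (dx columns, dy rows);
    pixels moved outside the frame are cropped and the vacated
    area is filled with a placeholder value."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            nr, nc = r + dy, c + dx
            if 0 <= nr < h and 0 <= nc < w:
                out[nr][nc] = image[r][c]
    return out

frame = [[1, 2],
         [3, 4]]
shifted = dejitter(frame, dx=1, dy=0)   # shift one pixel right
```

Applying each camera's own (dx, dy) in this way centres every captured image on the same co-ordinates, which is what prevents the assembled rotation from appearing jittery.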
In the preferred embodiments of the present invention, shown in Figures 2 and 5, central control program 60 includes camera selection software 68 for specifying which of cameras 14 are to be activated for image capture and in what order. The selection of images can be made sequentially from cameras one through sixty-four, or alternatively, the selection can be made in a predetermined pattern and the images later sorted for assembly in sequential order. The ability to capture images in a predetermined order can be useful if object 16 moves during the image capture process, as for example a golfer swinging a golf club. In this case, capturing images in non-sequential order can assist in giving the resulting three-dimensional virtual reality image a more defined, solid appearance. Camera selection software 68 also permits a user to specify whether cameras 14 are oriented horizontally or vertically.
In another preferred aspect of the present invention, also shown in Figures 2 and 5, central control program 60 includes morphing software 75 which creates intermediate composite images using images taken from cameras 14 on adjacent tripods 12. The resulting intermediate composite images represent simulated views of object 16 taken from a location between the adjacent tripods 12. The result is that fewer cameras are needed in the array to create a smooth final image for use in a 360-degree virtual reality display system.
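Full morphing warps image features as well as blending them; as a simplified stand-in for the intermediate-view idea, an in-between frame can be approximated by cross-dissolving two adjacent camera images:

```python
def blend_views(left, right, t=0.5):
    """Cross-dissolve two adjacent camera frames (RGB tuples) to
    approximate a view from a position between them; t=0 gives the
    left frame, t=1 the right. A simplified stand-in for morphing."""
    return [[tuple(round((1 - t) * a + t * b) for a, b in zip(lp, rp))
             for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

left = [[(0, 0, 0)]]
right = [[(100, 100, 100)]]
middle = blend_views(left, right, t=0.5)
```

A real morph would also compute correspondences between the two views so features move rather than fade, but the blend illustrates how synthetic in-between frames reduce the number of physical cameras required.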
Referring further to Figures 2 and 5, it is contemplated that in preferred embodiments of the invention, the central control program 60 will include authentication software 70. To initiate the image capture process, a user must contact a central location to activate authentication software 70 by obtaining an authorization number in exchange for payment of an image processing fee. This approach allows the applicant to make the invention available to remote users while at the same time maintaining the ability to collect user fees for image processing.
Alternatively, the user can be allowed to save an assembled demo image, including a watermark, before authentication software 70 is engaged and the processing fee is paid. This will permit a user to preview the assembled image or show it to clients before incurring a processing fee. Once the central location is contacted and the processing fee paid, the watermark is removed.
In a further preferred embodiment, it is unnecessary for the user to make contact with the central location to obtain an authorization number each time a final assembled image is processed. In this embodiment of the invention, authentication software 70 includes a use counter indicating an available number of uses. Each time a final assembled image is processed for export to a virtual reality viewer, the use counter subtracts one from the total available uses, until none remain. A user can obtain additional uses by purchasing a processing code number, which is used by authentication software 70 to reset the use counter with a desired number of uses. The processing code number is a unique number which designates the number of processing units purchased and specifies the individual computer on which it can be used. The processing code number is obtained by contacting the central location, requesting the desired number of uses, and providing a unique computer identifier number generated by authentication software 70. The computer identifier number is generated by authentication software 70 from a serial number or numbers read from devices connected to computer 40, such as the computer's mother board, central processing unit, or hard drive to ensure that the requested processing uses are made available to only one computer.
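The prepaid counter logic described above can be modelled in a few lines; the class and method names here are invented for illustration:

```python
class UseCounter:
    """Toy model of the prepaid use counter: each processed image
    decrements the counter, and processing is refused at zero."""

    def __init__(self, uses=0):
        self.uses = uses

    def redeem(self, purchased_units):
        """Reset the counter with newly purchased processing units."""
        self.uses += purchased_units

    def process_image(self):
        """Export one assembled image, consuming one use."""
        if self.uses <= 0:
            raise PermissionError("no processing uses remaining")
        self.uses -= 1
        return "exported"

counter = UseCounter()
counter.redeem(2)        # e.g. after validating a processing code number
counter.process_image()
counter.process_image()  # counter now exhausted
```

The surrounding validation (tying the processing code number to one computer via hardware serial numbers) is deliberately omitted here.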
Referring to Figure 4, there is shown a flow chart illustrating a method for acquiring and processing 360-degree inward-looking images in accordance with one embodiment of the present invention. The steps include a step 110, of installing an array of spaced apart cameras 14 surrounding a centrally located object 16, a step 115, of covering cameras 14 with non-reflective barrier or curtain 20 leaving only small holes or slits 22 through which each camera 14 can view object 16, and a step 120, of connecting cameras 14 to central computer 40. The method further includes a step 123, of assigning each camera 14 a sequential location number and selecting the image capture order, and a step 125, of manually aiming cameras 14 toward object 16 and using dejittering software 67 to ensure object 16 is centrally located in each image captured by each camera 14. Additional steps include a step 130, of previewing the images to be obtained from each camera 14 using imaging software 62 and identifying the location of each opposing hole or slit 22, and a step 135, of individually communicating with all of the cameras 14, in the designated order, requesting each to activate and send an image to computer 40. Next follows a step 140, of removing all previously identified camera viewing holes or slits 22 from each image, a step 145, of identifying and sorting the images as to order and location, and a step 150, of temporarily storing the images received on computer 40. Finally, in a step 155, the images are assembled and converted to a single data file in a format that can be exported and viewed in a virtual reality display program. In a preferred embodiment of the present invention, to reduce the number of cameras required and to create a smooth final image, an additional step 147 may be added which uses morphing software 75 to create additional composite images representing simulated views of the object from between adjacent cameras 14. The method of the embodiment of the present invention shown in Figure 4 may also include a further step 170, of obtaining an authentication number, in exchange for payment of an image processing fee, either prior to initiating the image capture process or before a watermark is removed from the final assembled image.
In an alternative preferred embodiment of the method of the present invention, as shown in Figure 6, steps 115, 130, and 140, as shown in Figure 4, are replaced by alternative steps which include, a step 132, of obtaining a reference image from each of cameras 14 without object 16 in the scene, a step 137, of using background removal software 63 to compare captured images of the scene including object 16 to the reference images without object 16 and to eliminate any unwanted background from the captured images that is common to both the reference images and the captured images. Finally, in a step 139, if necessary, edge detection software 65 may be used to recognize the exterior edges of object 16, and the backgrounds are filled in with any desired colour or a selected photograph.
The inward-looking imaging apparatus and method of the present invention have applications in a wide number of areas requiring the acquisition of multiple inward-looking photographs for use in interactive 360-degree virtual reality displays, of which the following is a brief, but not exhaustive, list:
remote display of artistic works such as sculptures; display of historical or archeological objects; display of retail merchandise on a web site; frozen-in-time images of moving objects; remote diagnostics in the health field; and educational or training applications in all fields and in particular in the field of movement analysis and training in activities such as golf, karate or dance.
The invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. An inward-looking imaging apparatus for acquiring multiple images of a three-dimensional object located in a scene, comprising: an array of multiple cameras located to view the object from various angles; a computer connected to said cameras for remotely controlling said cameras; and camera drivers connected to said computer for enabling said computer to individually communicate with each of said cameras for the purpose of requesting each of said cameras to capture an image of the object and to transmit said image to said computer, said camera drivers being capable of distinguishing said images and matching said images to said cameras used to capture said images upon receipt by said computer.
2. An imaging apparatus as described in claim 1, including image assembler software for assembling, organising and exporting said images in accordance with predetermined protocols for display in a virtual reality viewer.
3. An imaging apparatus as described in claim 1, wherein each of said camera drivers is modified by assigning each of said camera drivers a camera driver identifier corresponding to a camera identifier provided by each of said corresponding cameras and stored in a system hardware tree of an operating system of said computer when each of said corresponding cameras was first connected to said computer, wherein said camera driver identifiers permit each of said camera drivers to individually communicate with each of said corresponding cameras and to identify and distinguish said captured images received from said cameras.
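The identifier matching of claim 3 can be illustrated with a simple lookup: each driver record carries the camera identifier stored in the hardware tree, so an incoming image tagged with that identifier can be traced back to its camera. The dictionary layout and all names are assumptions made for illustration.

```python
# Hypothetical sketch: map camera identifiers (from the hardware tree) to
# driver names, then attribute each received image to the right camera.

def match_images_to_cameras(received, drivers):
    """Pair each (camera_id, image) with the driver holding that identifier."""
    by_id = {driver["camera_id"]: driver["name"] for driver in drivers}
    return {by_id[cam_id]: image for cam_id, image in received}
```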
4. An imaging apparatus as described in claim 1, including background removal software for removing background image segments from said captured images and for replacing said background image segments with a desired background colour or a selected photograph.
5. An imaging apparatus as described in claim 4, wherein said background removal software includes comparison software for comparing said captured images from each of said cameras to corresponding reference images obtained from each of said cameras before said object is placed into the scene, and removal software for removing said background image segments from said captured images that are common to both of said captured images and said corresponding reference images.
6. An imaging apparatus as described in claim 5, wherein said comparison of said captured images to said corresponding reference images, and said removal of said background image segments is done on a pixel-by-pixel basis.
7. An imaging apparatus as described in claim 6, wherein said comparison and said removal of said background is based on said pixel location and colour, whereby said corresponding pixels having the same location and colour are removed from said captured images.
8. An imaging apparatus as described in claim 7, wherein said background removal software includes colour adjustment software for adjusting the colour of said captured images to match the colour of said corresponding reference images.
9. An imaging apparatus as described in claims 4, 5, 6, 7 or 8, including edge detection software for detecting the edges of the image of the object after said background has been removed and prior to replacing said background with said desired colour or said selected photograph.
10. An imaging apparatus as described in claim 1, including dejittering software to adjust said captured images so that the centre of each of said captured images from each of said cameras is located at a desired central location for the object within the scene.
11. An imaging apparatus as described in claim 10, wherein said dejittering software measures a difference between said desired central location and the centre of each captured image for each camera, and uses said differences to adjust each said captured image so that the centre of each said captured image is placed at said desired central location.
12. An imaging apparatus as described in claim 11, wherein said dejittering software crops out any image segments from said captured images that are moved outside the boundaries of said captured images as the result of moving the centre of said captured images to said desired central location.
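The dejittering of claims 10 to 12 could be sketched as an integer shift of each image toward the desired central location, with anything pushed past the frame boundary cropped away; the row-of-pixels image model and all names are assumptions made for illustration.

```python
# Hypothetical sketch of claims 10-12: shift the image so its recorded centre
# lands on the desired central location; exposed areas become `blank`.

def dejitter(image, centre, desired, blank=0):
    """Shift `image` by (desired - centre), cropping pixels moved off-frame."""
    dy = desired[0] - centre[0]
    dx = desired[1] - centre[1]
    h, w = len(image), len(image[0])
    shifted = [[blank] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:     # claim 12: crop segments that
                shifted[ny][nx] = image[y][x]   # fall outside the boundaries
    return shifted
```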
13. An imaging apparatus as described in claim 1, including camera selection software for specifying which ones of said cameras are to be activated to capture and transmit said images of the object to said computer.
14. An imaging apparatus as described in claim 13, wherein said camera selection software is capable of also specifying the order in which said cameras are to be activated for image capture and transmittal.
15. An imaging apparatus as described in claim 1, including camera selection software for specifying whether said cameras are oriented horizontally or vertically.
16. An imaging apparatus as described in claim 1, including morphing software for creating intermediate composite images from said images captured by adjacent ones of said cameras, said intermediate composite images representing simulated views of said object from locations between adjacent ones of said cameras.
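In the simplest case, an intermediate composite view like that of claim 16 could be approximated by cross-dissolving the images from two adjacent cameras; production morphing software would also warp geometry, so this blend is only an illustrative sketch, and all names are hypothetical.

```python
# Hypothetical cross-dissolve between two same-sized greyscale images from
# adjacent cameras; t=0 reproduces view A, t=1 reproduces view B.

def blend_views(image_a, image_b, t=0.5):
    """Linearly blend two images to simulate a viewpoint between the cameras."""
    return [
        [round((1 - t) * a + t * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(image_a, image_b)
    ]
```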
17. An imaging apparatus as described in claim 1, including authentication software that must be activated with an authorization number before said images can be captured by said cameras.
18. An imaging apparatus as described in claim 17, wherein said authorization number is obtained from a central location on payment of a processing fee.
19. An imaging apparatus as described in claims 17 or 18, wherein said authentication software will permit demonstration images of said object to be captured, and pre-viewed before said authorization number is required.
20. An imaging apparatus as described in claim 19, wherein said demonstration images include a visible watermark that is removed only on activation of said authentication software with said authorization number.
21. An imaging apparatus as described in claim 1, including authentication software having a counter indicating an available number of uses of said imaging apparatus, said available number of uses being reduced by one each time images are captured by said cameras for export to a virtual reality viewer, until said available number of uses equals zero, whereupon said imaging apparatus is inactivated.
22. An imaging apparatus as described in claim 21, wherein said authentication software will permit demonstration images of said object to be captured, and pre-viewed before said available number of uses is reduced by one, said demonstration images including a visible watermark, and wherein said available number of uses is not reduced by one until said watermark is removed from said demonstration image.
23. An imaging apparatus as described in claims 21 or 22, wherein said available number of uses can be increased by providing a processing code number to said authentication software, said processing code number including codes for commanding said authentication software to increase said available number of uses by a requested number.
24. An imaging apparatus as described in claim 23, wherein said processing code number is obtainable from a central location on payment of a fee.
25. An imaging apparatus as described in claim 24, wherein prior to obtaining said processing code number, a computer identifier must be supplied to said central location, said computer identifier uniquely identifying said computer.
26. An imaging apparatus as described in claim 25, wherein said processing code number also includes a code corresponding to said computer identifier, and wherein said authentication software will not increase said available number of uses unless said processing code number includes said code corresponding to said computer identifier.
27. An imaging apparatus as described in claims 25 or 26, wherein said computer identifier is generated by said authentication software using serial identification numbers obtained by said authentication software from said computer.
28. An imaging apparatus as described in claim 27, wherein said serial identification numbers are obtained from said computer's mother board, said computer's central processing unit, or said computer's hard drive.
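Claims 27 and 28 describe deriving the computer identifier from serial numbers read from the motherboard, central processing unit or hard drive; one plausible sketch hashes those serials together. The use of SHA-256 and the 16-character truncation are illustrative choices, not taken from the specification.

```python
import hashlib

# Hypothetical sketch of claims 27-28: combine hardware serial numbers into
# one stable value that uniquely identifies the computer.

def computer_identifier(motherboard_serial, cpu_serial, drive_serial):
    """Derive a stable identifier from the machine's hardware serial numbers."""
    combined = "|".join((motherboard_serial, cpu_serial, drive_serial))
    return hashlib.sha256(combined.encode("utf-8")).hexdigest()[:16]
```

The central location could then bind a processing code number to this identifier, as in claim 26.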
29. An imaging apparatus as described in claim 1, including a barrier located between said cameras and the object, said barrier including holes through which said cameras can view the object.
30. An imaging apparatus as described in claim 29, wherein said barrier is made from a uniform-coloured, non-reflective material.
31. An imaging apparatus as described in claims 29 or 30, wherein said barrier is a curtain.
32. An imaging apparatus as described in claims 29, 30 or 31, wherein said barrier is made of metal or wood.
33. An imaging apparatus as described in claims 29, 30, 31 or 32, including hole selecting software for selecting the location of said holes in said captured images and cut and paste software for removing said holes from said captured images.
34. A method for removing background image segments from an image of a scene containing an object, comprising the steps of: capturing a reference image of the scene without the object; capturing a final image of the scene containing the object; comparing said final image to said reference image; and removing said background image segments from said final image that are common to both said final image and said reference image, thereby leaving only an image of the object in the scene.
35. The method of claim 34, including the additional step of replacing said removed background image segments with a desired colour or a selected photograph.
36. The method of claims 34 or 35, wherein said comparison of said final image to said reference image and said removal of said image segments is done on a pixel-by-pixel basis.
37. The method of claim 36, wherein said image comparison and said background removal is based on said pixel location and colour, whereby said pixels having the same location and colour on said reference image and said final image are removed from said final image.
38. The method of claim 37, including the step of adjusting the colour of said final image to match the colour of said reference image before comparing said final image to said reference image for purposes of background removal.
39. The method of claims 35, 36, or 37, including the step of using edge detection software to detect the edges of the image of the object after the step of removing said background and prior to the step of replacing said background with said desired colour or said selected photograph.
40. A method for activating the use of an imaging apparatus connected to a computer for the purpose of collecting a processing fee for each use of the imaging apparatus, comprising the steps of: providing the computer with authentication software to control the use of the imaging apparatus; requiring payment of said processing fee in exchange for providing an authorization number; and requiring that said authorization number be entered into said authentication software before said authentication software will permit said imaging apparatus to capture images.
41. The method of claim 40, wherein said authorization number is obtained from a central location.
42. The method of claims 40 or 41, wherein said authentication software will permit the capture and pre-viewing of demonstration images before said authorization number is required.
43. The method of claim 42, wherein said demonstration images include a visible watermark that is removed only when said authorization number is provided to said authentication software.
44. A method for activating the use of an imaging apparatus connected to a computer, for the purpose of collecting a processing fee for each use of the imaging apparatus, comprising the steps of: providing the computer with authentication software to control the use of the imaging apparatus; providing said authentication software with a counter for indicating an available number of uses of the imaging apparatus; requiring payment of a processing fee for each said available number of uses indicated by said counter; causing said counter to be reduced by one each time said imaging apparatus is used to capture an image; and causing said authentication software to deactivate said imaging apparatus when said counter reaches zero.
45. The method of claim 44, wherein said authentication software will permit demonstration images containing a visible watermark to be captured and pre-viewed before said available number of uses is reduced by one, and wherein said authentication software will not reduce said available number of uses by one until said watermark is removed from said demonstration image.
46. The method of claims 44 or 45, wherein said available number of uses can be increased by providing a processing code number to said authentication software, said processing code number including codes for instructing said authentication software to increase said available number of uses by a requested number.
47. The method of claim 46, wherein said processing code number is obtainable from a central location on payment of said processing fee corresponding to said requested number of uses.
48. The method of claim 47, wherein prior to obtaining said processing code number, a computer identifier must be supplied to said central location, said computer identifier uniquely identifying said computer.
49. The method of claim 48, wherein said processing code number also includes a code corresponding to said computer identifier, and wherein said authentication software will not increase said available number of uses unless said processing code number includes said code corresponding to said computer identifier.
50. The method of claims 48 or 49, wherein said computer identifier is generated by said authentication software using serial identification numbers obtained by said authentication software from said computer.
51. The method of claim 50, wherein said serial identification numbers are obtained from said computer's mother board, said computer's central processing unit, or said computer's hard drive.
52. A method for adjusting an image of a scene captured by a camera comprising the steps of: pre-viewing the image of the scene to be captured by the camera and recording a desired central co-ordinate for the scene; recording the difference between said desired central co-ordinate for the scene and the centre of the pre-viewed image from the camera; instructing the camera to capture an image of the scene; and using said difference to adjust said captured image of the scene by placing the centre of said captured image at said desired central co-ordinate.
53. The method of claim 52, wherein said desired central co-ordinate is selected by locating a uniformly shaped object in the scene at said desired central co-ordinate and selecting the centre of said uniformly shaped object in said pre-viewed image.
54. The method of claims 52 or 53, including the further step of cropping out any image segments of the scene that were moved outside the area of said captured image as the result of placing the centre of said captured image at said desired central co-ordinate.
55. A method for acquiring multiple inward-looking images of a three-dimensional object comprising the steps of: locating the object within an array of multiple cameras, each camera viewing the object from a different angle; connecting said cameras to a computer; providing camera drivers to individually communicate with each of said cameras to request that each of said cameras capture and send an image of the object to said computer; and distinguishing and matching up each of said captured images with each of said cameras used to capture said images.
56. The method of claim 55, including the further step of assembling, organising and exporting said images in accordance with predetermined protocols for display in a virtual reality viewer.
57. The method of claim 55, wherein each of said camera drivers is modified by assigning each of said camera drivers a camera driver identifier corresponding to a camera identifier provided by each of said corresponding cameras and stored in a system hardware tree of an operating system of said computer when each of said corresponding cameras was first connected to said computer, wherein said camera driver identifiers permit each of said camera drivers to individually communicate with each of said corresponding cameras and to identify and distinguish said captured images received from said cameras.
58. The method of claim 55, including the further step of using background removal software to remove background image segments from said captured images and replace said background image segments with a desired background colour or a selected photograph.
59. The method of claim 55, including the further step of using dejittering software to adjust said captured images so that the centre of each said captured image from each of said cameras is located at a desired central location for the object within the scene.
60. The method of claim 55, including the further step of using camera selection software to specify which ones of said cameras are to be activated to capture and transmit said images of the object to said computer and to specify the order in which said cameras are to be activated for image capture and transmittal.
61. The method of claim 55, including the further step of using morphing software to create intermediate composite images from said images captured by adjacent ones of said cameras, said intermediate composite images representing simulated views of said object from locations between adjacent ones of said cameras.
62. The method of claim 55, including the further step of requiring that an authorization number be provided to authentication software before said images can be captured by said cameras.
63. The method of claim 62, wherein said authentication software will permit demonstration images of said object bearing a watermark to be captured and pre-viewed, before said authorization number is required.
64. The method of claim 55, including the further step of including authentication software on said computer, said authentication software having a counter indicating an available number of uses of said imaging apparatus, said available number of uses being reduced by one each time images are captured by said cameras for export to a virtual reality viewer, until said available number of uses equals zero, whereupon said imaging apparatus is inactivated.
65. The method of claim 64, wherein said authentication software will permit a demonstration image of said object to be captured, and pre-viewed, before said available number of uses is reduced by one, said demonstration image including a visible watermark, and wherein said available number of uses is not reduced by one until said watermark is removed from said demonstration image.
66. The method of claim 65, wherein said available number of uses can be increased by providing a processing code number to said authentication software, said processing code number including codes for commanding said authentication software to increase said available number of uses by a requested number.
PCT/CA2001/001604 2000-11-16 2001-11-16 Inward-looking imaging system WO2002041127A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2002223329A AU2002223329A1 (en) 2000-11-16 2001-11-16 Inward-looking imaging system
CA002429236A CA2429236A1 (en) 2000-11-16 2001-11-16 Inward-looking imaging system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CA2,326,087 2000-11-16
CA002326087A CA2326087A1 (en) 2000-11-16 2000-11-16 Inward-looking imaging system

Publications (2)

Publication Number Publication Date
WO2002041127A2 true WO2002041127A2 (en) 2002-05-23
WO2002041127A3 WO2002041127A3 (en) 2002-09-06

Family

ID=4167686

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2001/001604 WO2002041127A2 (en) 2000-11-16 2001-11-16 Inward-looking imaging system

Country Status (3)

Country Link
AU (1) AU2002223329A1 (en)
CA (1) CA2326087A1 (en)
WO (1) WO2002041127A2 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7817162B2 (en) 2008-02-11 2010-10-19 University Of Northern Iowa Research Foundation Virtual blasting system for removal of coating and/or rust from a virtual surface
US7839416B2 (en) 2006-03-10 2010-11-23 University Of Northern Iowa Research Foundation Virtual coatings application system
US7839417B2 (en) 2006-03-10 2010-11-23 University Of Northern Iowa Research Foundation Virtual coatings application system
US9208608B2 (en) 2012-05-23 2015-12-08 Glasses.Com, Inc. Systems and methods for feature tracking
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
US9286715B2 (en) 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
EP3392744A1 (en) * 2017-04-17 2018-10-24 INTEL Corporation Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
GB2562722B (en) * 2017-05-18 2019-07-31 George Gould Daniel Vehicle imaging apparatus

Citations (4)

Publication number Priority date Publication date Assignee Title
WO1993006691A1 (en) * 1991-09-18 1993-04-01 David Sarnoff Research Center, Inc. Video merging employing pattern-key insertion
US5729471A (en) * 1995-03-31 1998-03-17 The Regents Of The University Of California Machine dynamic selection of one video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
US5917937A (en) * 1997-04-15 1999-06-29 Microsoft Corporation Method for performing stereo matching to recover depths, colors and opacities of surface elements
WO1999042924A1 (en) * 1998-02-20 1999-08-26 Intel Corporation Automatic update of camera firmware

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JPH10173798A (en) * 1996-12-11 1998-06-26 Atsumi Electron Corp Ltd Image transmitter and security system using the same

Non-Patent Citations (1)

Title
PATENT ABSTRACTS OF JAPAN vol. 1998, no. 11, 30 September 1998 (1998-09-30) & JP 10 173798 A (ATSUMI ELECTRON CORP LTD), 26 June 1998 (1998-06-26) *

Cited By (16)

Publication number Priority date Publication date Assignee Title
US7839416B2 (en) 2006-03-10 2010-11-23 University Of Northern Iowa Research Foundation Virtual coatings application system
US7839417B2 (en) 2006-03-10 2010-11-23 University Of Northern Iowa Research Foundation Virtual coatings application system
US7817162B2 (en) 2008-02-11 2010-10-19 University Of Northern Iowa Research Foundation Virtual blasting system for removal of coating and/or rust from a virtual surface
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
US9378584B2 (en) 2012-05-23 2016-06-28 Glasses.Com Inc. Systems and methods for rendering virtual try-on products
US9235929B2 (en) 2012-05-23 2016-01-12 Glasses.Com Inc. Systems and methods for efficiently processing virtual 3-D data
US9286715B2 (en) 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on
US9311746B2 (en) 2012-05-23 2016-04-12 Glasses.Com Inc. Systems and methods for generating a 3-D model of a virtual try-on product
US9208608B2 (en) 2012-05-23 2015-12-08 Glasses.Com, Inc. Systems and methods for feature tracking
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
US10147233B2 (en) 2012-05-23 2018-12-04 Glasses.Com Inc. Systems and methods for generating a 3-D model of a user for a virtual try-on product
EP3392744A1 (en) * 2017-04-17 2018-10-24 INTEL Corporation Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching
CN108737724A (en) * 2017-04-17 2018-11-02 英特尔公司 The system and method for capturing and showing for 360 videos
US10623634B2 (en) 2017-04-17 2020-04-14 Intel Corporation Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching
US11019263B2 (en) 2017-04-17 2021-05-25 Intel Corporation Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching
EP4002068A1 (en) * 2017-04-17 2022-05-25 INTEL Corporation Systems and methods for 360 video capture and display based on eye tracking including gaze based warnings and eye accommodation matching

Also Published As

Publication number Publication date
WO2002041127A3 (en) 2002-09-06
CA2326087A1 (en) 2002-05-16
AU2002223329A1 (en) 2002-05-27

Similar Documents

Publication Publication Date Title
US10630899B2 (en) Imaging system for immersive surveillance
CN104321803B (en) Image processing apparatus, image processing method and program
EP1355277B1 (en) Three-dimensional computer modelling
WO2021012854A1 (en) Exposure parameter acquisition method for panoramic image
US20090251421A1 (en) Method and apparatus for tactile perception of digital images
US20090128568A1 (en) Virtual viewpoint animation
CN105163158A (en) Image processing method and device
CN110035321B (en) Decoration method and system for online real-time video
US9648271B2 (en) System for filming a video movie
WO2009064904A1 (en) 3d textured objects for virtual viewpoint animations
WO2009064895A1 (en) Fading techniques for virtual viewpoint animations
EP1556736A1 Image capture and display and method for generating a synthesized image
WO2012082127A1 (en) Imaging system for immersive surveillance
WO2009064893A2 (en) Line removal and object detection in an image
GB2440993A (en) Providing control using a wide angle image capture means
WO2009064902A1 (en) Updating background texture for virtual viewpoint antimations
CN101582959A (en) Intelligent multi-angle digital display system and display method
WO2002041127A2 (en) Inward-looking imaging system
EP2642446A2 (en) System and method of estimating page position
KR20190133867A (en) System for providing ar service and method for generating 360 angle rotatable image file thereof
CA2429236A1 (en) Inward-looking imaging system
KR20180070082A (en) Vr contents generating system
CN107430841A (en) Message processing device, information processing method, program and image display system
Insley et al. Using video to create avatars in virtual reality
CN101292516A (en) System and method for capturing visual data

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2429236

Country of ref document: CA

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP