WO1996008791A1 - Image manipulation - Google Patents

Image manipulation

Info

Publication number
WO1996008791A1
WO1996008791A1 (PCT/GB1995/002189)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
scene
depth
image
distance
Prior art date
Application number
PCT/GB1995/002189
Other languages
French (fr)
Inventor
Paul Grace
Sean Broughton
Original Assignee
Paul Grace
Sean Broughton
Priority date
Filing date
Publication date
Application filed by Paul Grace, Sean Broughton
Priority to GB9612554A (GB2298990B)
Priority to EP95931338A (EP0729623A1)
Publication of WO1996008791A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method of recording and manipulating image data in which two-dimensional image data relating to a real scene (4) is recorded onto an image recording medium (5) using a film or video camera (6). Distance data relating to the positions of objects in the scene (4) is also recorded onto a depth recording medium (8) using a depth scanner (7). The distance data and image data are synchronised using a timecode recorded on both media by a timecode generator (9). The image is subsequently manipulated taking into account the distance data.

Description

IMAGE MANIPULATION
This invention relates to image manipulation. In particular, the invention is concerned with a system which permits the manipulation of two-dimensional images in such a way as to take into account three-dimensional effects.
When a scene is recorded as a video or film image, information relating to the position of objects within the scene is lost because the video or film image is two-dimensional whereas the scene has a third dimension, or depth, although some perception of depth is implied in the image by perspective.
For a long time, this loss of depth information has not presented a problem in film or video production because images could only be created by either recording real scenes or animating drawn or computer-generated pictures. However, in the field of producing television or cinematographic material, the advent of powerful image compositing systems such as the 'FLAME' system from Discreet Logic has pushed forward the frontiers of computer-generated and controlled graphics. In doing so, new problems have been created. Systems such as the 'FLAME' allow recordings of real scenes to be combined with animation to form composite images in which the animation appears to have been present in the real scene at the time of recording. This was pioneered in such feature films as the Disney production 'Who Framed Roger Rabbit'.
Such an effect is relatively easily achieved if the animation is to appear in the very foreground of the recorded image, because the animated image may be "overlaid" on to the recorded image, in a technique known as "matting", so as to obscure all parts of the recorded image which would have been behind the animated object in the real scene. Such a technique, whilst producing some degree of realism, does not satisfy all of the requirements for realism. To achieve this, it is necessary to combine the animated image with the implied depth of the recorded image.
If the animated object is required to appear to pass in front of or behind objects in the real scene these or real objects should be obscured and revealed in accordance with their position in the implied third dimension of the recorded image. The animated image should also be accordingly obscured or revealed.
Because the real scene, captured for example on film or video tape, has no measure of depth other than implied perspective, it takes a skilled artist to cut out some of the foreground objects and insert the animated object between the background plane and the foreground object.
It is useful to consider as an example of this problem a typical creative job for which a 'FLAME'-type system may be used. The job in hand is to be a commercial to promote, for example, a brand of toothpaste. As is often the case in these advertisements, the toothpaste tube comes to life and flies around the room. Conventional computer graphic techniques are used to generate 'key frames' of the animation according to the artists' instructions as to how the toothpaste tube should look, and how it should move. A powerful computer system of, for example, the 'FLAME' system (in this case a large multiprocessor ONYX computer from Silicon Graphics Incorporated) can easily be programmed to perform the many millions of calculations necessary to animate this sequence.
The next stage will be to combine this computer-generated, animated sequence with real film footage, i.e. video or cinematographic shoots of real objects, people and so forth. If the animated toothpaste tube is to be the top layer of the combined sequence, this can be achieved easily. Matting instructions are easily defined to replace the film image by the computer-animated image across a frame wherever the computer-animated image has a non-zero value. Of course, the whole advertisement will comprise a series of frames with the tube moving between a number of positions in all three dimensions.
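The "replace wherever non-zero" matting rule just described is very simple to express. The following is a minimal sketch, assuming frames are held as nested lists of RGB tuples; the function and variable names are illustrative only, not part of the patent.

```python
def matte_on_top(film_frame, animated_frame):
    """Overlay matting: wherever the animated image has a non-zero pixel,
    it replaces the film image; elsewhere the film image shows through."""
    composite = []
    for film_row, anim_row in zip(film_frame, animated_frame):
        out_row = []
        for film_px, anim_px in zip(film_row, anim_row):
            # A pixel of (0, 0, 0) in the animated layer means "nothing here".
            out_row.append(anim_px if any(anim_px) else film_px)
        composite.append(out_row)
    return composite

# Tiny 1 x 3 example: only the middle animated pixel is non-zero, so only it wins.
film = [[(10, 10, 10), (20, 20, 20), (30, 30, 30)]]
anim = [[(0, 0, 0), (255, 255, 255), (0, 0, 0)]]
print(matte_on_top(film, anim))  # [[(10, 10, 10), (255, 255, 255), (30, 30, 30)]]
```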
Without the intervention of an artist as described above, the matting technique is limited in the degree of realism that it can produce. For example, in real life the flying toothpaste tube may need to appear to be on such a trajectory as to go behind some objects, but in front of others. Real situations can be far more complex than this simplistic example, as there could be many tens (if not hundreds) of levels of 'foreground' and 'background'. However, the positioning of the animated image in the implied third dimension of a recorded image is problematic with traditional techniques, because the real scene has been recorded onto a two-dimensional medium, for example film or video, and the depth information has been lost.
Another problem caused by the loss of depth information is that of relighting. Scenes are 'lit' for filming or image recording to achieve the required artistic effect. Because image capture currently reduces a natural three-dimensional scene to a two-dimensional representation, the lighting cannot be manipulated after capture. If the recorded scene does not meet the director's requirements, he has no alternative but to reshoot the scene, which is time consuming and expensive. What is required is a method of image capture that allows the flexibility to alter the scene lighting after capture. The reason that this cannot be done at present is that there is loss of the depth information that would be needed to calculate shadows in an artificial 'relighting' exercise.
In US-A-5,283,640 there is disclosed a system for producing and receiving three-dimensional images. In addition to a camera, a laser projector and a pair of sensors are used to triangulate the distance from the camera to objects in a scene. The depth information is used to instantaneously produce two images of the scene on respective television screens which correspond to the views of the human left and right eye. That system thus produces a binocular view of the scene, but is not concerned with the storing and manipulation of two-dimensional data, taking depth into account. The laser projector and sensors might be usable to create signals which are converted into distance data, although that is not suggested.
According to the present invention there is provided a method of recording and manipulating image data in which there are recorded (a) two-dimensional image data for a scene and (b) distance data representing the respective distances from a reference of a plurality of objects at different positions in the scene; and in which the image data is manipulated taking into account the distance data.
In general the scene will be a real scene and the distance data will be calculated by suitable means, although the scene could be created artificially and distance data assigned to objects. In general the image data will be stored as a series of frames and there will be a corresponding series of 'frames' of data from distance scans across the scene for each frame, containing the distance readings at various points in the scene. The distance may be measured from a reference point, line or surface. It may be most convenient for there to be measurement relative to the focal plane of the camera recording the image of the scene.
There will be a need for an instrument to provide the distance data in the case of a real scene. This can be achieved by a separate recording instrument, for example a two-dimensional infra red or ultrasonic sound scanner.
This scanner, during one frame time of the film or video recording medium, will send out a raster scan of infra red or ultrasonic sound. By timing the delay of return for each effective position of the scan, a depth measurement can be obtained.
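The depth measurement itself follows from the round-trip delay of each pulse: the signal travels to the object and back, so the one-way distance is half the delay multiplied by the propagation speed. A minimal sketch, assuming the scanner reports the echo delay in seconds (the constants and function name are illustrative):

```python
SPEED_OF_SOUND_AIR = 343.0        # m/s, for ultrasonic pulses
SPEED_OF_LIGHT = 299_792_458.0    # m/s, for infra red or radar pulses

def delay_to_distance(round_trip_delay_s, propagation_speed):
    """Convert a measured round-trip echo delay into a one-way distance."""
    return propagation_speed * round_trip_delay_s / 2.0

# An ultrasonic echo returning after 17.5 ms corresponds to roughly 3 metres.
print(delay_to_distance(0.0175, SPEED_OF_SOUND_AIR))  # ~3.0
```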
Of course, it would be possible to use various systems to generate the distance data, and not simply an ultrasonic or infra red scanner. In fact any 'signal' outside the visible spectrum of the film or video and the audible spectrum of the associated sound is acceptable. Thus this may include ultraviolet, radar wavelengths or sub-sonic sound. Of the radar wavelengths, a particularly preferred range is the microwave range.
It will not be necessary to scan the 'depth' raster at anywhere near such fine resolution as the image scan, because real images consist of a finite number of discrete objects, all of which have a nearly constant depth. Thus a forest can be thought of as a finite number of trees, each of which is a two-dimensional object. Whilst this 'perception' is not perfect, it is considerably better than thinking of a forest as a totally flat 'backdrop', which is realistically all that conventional image compositing systems can achieve.
One method of constructing the 'ultrasound' scanner can be from a rotating sound source, mounted on a vertical axis. As the directional pulsed sound emitter rotates past the scene to be recorded, it 'scans' a horizontal line, by 'firing' pulses at each pixel to be scanned. By sampling the time delay of the return of each of these pulses, a 'distance' measure is derived. This can be coded as a digital value, and stored at the relevant pixel address for that line. By 'nodding' the top of the vertical axis towards the scene, different (but lower) scan lines can be described. This can be repeated until a complete frame has been described. Similar methods can be implemented with infra red light, instead of ultrasound.
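The scan just described (rotate the emitter along each line, 'nod' down to the next line, and store a digital value per depth pixel) might be organised as follows. This is a sketch only: `measure_echo_delay` stands in for the real hardware, and the raster size and quantisation step are assumed values, not specified by the patent.

```python
import random

SPEED_OF_SOUND_AIR = 343.0        # m/s
DEPTH_COLS, DEPTH_ROWS = 70, 50   # coarse depth raster (see Figure 2b)
METRES_PER_LEVEL = 0.05           # assumed linear quantisation: 5 cm per digital level

def measure_echo_delay(row, col):
    """Stand-in for the pulsed emitter/receiver: returns a round-trip delay in seconds."""
    return random.uniform(0.005, 0.06)   # placeholder hardware reading

def scan_depth_frame():
    """Build one 'frame' of depth data by scanning the scene line by line."""
    frame = []
    for row in range(DEPTH_ROWS):          # 'nodding' the axis selects each scan line
        line = []
        for col in range(DEPTH_COLS):      # rotation sweeps the emitter along the line
            delay = measure_echo_delay(row, col)
            distance_m = SPEED_OF_SOUND_AIR * delay / 2.0
            line.append(int(distance_m / METRES_PER_LEVEL))  # store as a digital level
        frame.append(line)
    return frame

depth_frame = scan_depth_frame()
print(len(depth_frame), len(depth_frame[0]))  # 50 70
```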
Edge detection techniques, such as Laplacian operators, Sobel operators, and other techniques disclosed in Pratt's textbook 'Digital Image Processing', can be used. This will allow the analysis of the recorded scene into 'objects'. As long as each object has one 'depth' point, the system will work. If there is more than one depth measurement point per object, these can be averaged to form an 'average object depth'.
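Once edge detection has divided the picture into objects, all depth readings falling inside an object can be pooled into a single average. A minimal sketch, assuming a segmentation stage has already produced an integer object label per depth pixel (the label map and names are illustrative; the Sobel/Laplacian filtering itself is as described in Pratt):

```python
from collections import defaultdict

def average_object_depths(label_map, depth_map):
    """Average all depth readings that fall inside each labelled object."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for label_row, depth_row in zip(label_map, depth_map):
        for label, depth in zip(label_row, depth_row):
            sums[label] += depth
            counts[label] += 1
    return {label: sums[label] / counts[label] for label in sums}

# Two objects (labels 1 and 2) on a 2 x 4 grid of recorded depth levels.
labels = [[1, 1, 2, 2],
          [1, 1, 2, 2]]
depths = [[40, 42, 90, 91],
          [41, 43, 89, 92]]
print(average_object_depths(labels, depths))  # {1: 41.5, 2: 90.5}
```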
Alternatively, these can be ranked to find a maximum object depth and a minimum object depth.
This 'depth plane' signal can be recorded as a low resolution image, synchronous with the main image recording. Such an 'image' can be synchronised with the main image using techniques that would be used to synchronise multiple camera shoots, or to synchronise sound. These techniques include the recording of a 'timecode' on all of the material shot. This is a simple technique where a unique timecode value is supplied, in parallel, to all recording media in a shoot. This timecode is incremented by one count on each successive frame. Thus all of the recording media capture the same unique code for each given frame.
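Because every medium carries the same incrementing counter, corresponding frames can later be paired simply by matching timecode values. A minimal sketch, assuming each recording is represented as a list of (timecode, frame) pairs (this data layout is an assumption made for illustration):

```python
def pair_by_timecode(image_recording, depth_recording):
    """Pair image and depth frames that carry the same timecode value."""
    depth_by_code = {code: frame for code, frame in depth_recording}
    return [(code, image_frame, depth_by_code[code])
            for code, image_frame in image_recording
            if code in depth_by_code]

# The same generator fed both media, so frame 1001 on each belongs together.
image_tape = [(1000, "img_A"), (1001, "img_B"), (1002, "img_C")]
depth_tape = [(1001, "depth_B"), (1002, "depth_C")]
print(pair_by_timecode(image_tape, depth_tape))
# [(1001, 'img_B', 'depth_B'), (1002, 'img_C', 'depth_C')]
```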
The depth recording medium will probably (but not necessarily) be digital, in which case the digital number recorded will be a measure of the 'depth' or 'distance' from the camera. The allocation of depth or distance per digital level can be on a linear, logarithmic, or non-linear (look-up) basis, depending on the resolution of depth recording required at a particular distance. The calibration of depth versus digital recorded level can be via a 'depth slate'. This consists of a 'calibration' frame, shot at the start of a scene. In this calibration frame, a series of numbered 'marker' targets are recorded, and their distances from the camera measured accurately with a tape measure. When the digital level recorded on the depth recording medium for the target is read off, it can be compared to the distance displayed on the target, and thus a correlation between the digital level on the recording medium and the actual distance can be derived.
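The depth slate therefore yields a set of (recorded digital level, tape-measured distance) pairs, from which a level-to-distance mapping can be derived on a linear, logarithmic or look-up basis. The following is a minimal sketch of the look-up form with piecewise-linear interpolation, assuming the calibration pairs have already been read off the slate (the numeric values are invented for illustration):

```python
def build_calibration(slate_pairs):
    """slate_pairs: [(digital_level, measured_distance_m), ...] taken from the depth slate.
    Returns a function mapping any recorded level to a distance by
    piecewise-linear interpolation between calibration points."""
    pts = sorted(slate_pairs)

    def level_to_distance(level):
        if level <= pts[0][0]:
            return pts[0][1]
        if level >= pts[-1][0]:
            return pts[-1][1]
        for (l0, d0), (l1, d1) in zip(pts, pts[1:]):
            if l0 <= level <= l1:
                t = (level - l0) / (l1 - l0)
                return d0 + t * (d1 - d0)

    return level_to_distance

# Marker targets measured at 2 m, 5 m and 10 m were recorded as levels 40, 100 and 200.
calibrate = build_calibration([(40, 2.0), (100, 5.0), (200, 10.0)])
print(calibrate(70))   # 3.5  (half-way between the 2 m and 5 m targets)
print(calibrate(150))  # 7.5
```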
When film or video sequences are edited, the 'depth' recording can be edited along with the picture, just as sound recorded separately from the picture can, by means of timecode, be kept in step so that the resultant edited picture has the corresponding sound.
The 'depth' information can be used for the process of artificially 'relighting' the scene. This can be done because we have not only the 'order' of depth of objects in a scene (e.g. cat is in front of car which is in front of brick wall), but the actual depth dimensions
(e.g. the cat is 1.86 metres in front of the car, which is 0.59 metres from the brick wall). This information, together with operator input of the position of the light source, and its directional characteristics, allows correct calculation of shadows from conventional ray tracing techniques (i.e. implementing the Newtonian laws of optics).
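With actual depths rather than just an ordering, shadowing can be tested geometrically: cast a ray from the light to each point of interest and check whether a nearer object interrupts it. The following is a deliberately simplified two-dimensional sketch (depth along x, height along y, objects treated as flat vertical cut-outs); it illustrates the geometry only and is not the ray-tracing implementation of the patent.

```python
def in_shadow(light, point, occluders):
    """light, point: (depth, height) positions.
    occluders: list of (depth, top_height) vertical 'cut-outs' standing on the ground.
    Returns True if the straight ray from light to point is interrupted."""
    lx, ly = light
    px, py = point
    for depth, top in occluders:
        # Only objects strictly between the light and the point can block the ray.
        if min(lx, px) < depth < max(lx, px):
            t = (depth - lx) / (px - lx)
            ray_height = ly + t * (py - ly)     # height of the ray at that depth
            if 0.0 <= ray_height <= top:
                return True
    return False

light_high = (0.0, 3.0)   # lamp 3 m up at the camera position (depth 0)
light_low = (0.0, 0.5)    # lamp near the ground
cat = (2.14, 0.6)         # cat 0.6 m tall, 1.86 m in front of the car at depth 4 m
car_panel = (4.0, 0.4)    # point on the car body, 0.4 m above the ground

print(in_shadow(light_high, car_panel, [cat]))  # False: the ray clears the cat
print(in_shadow(light_low, car_panel, [cat]))   # True: the low light is blocked
```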
It will be appreciated that the invention may take many forms, including for example the provision of a camera and an ultrasonic or infra red scanner. There may be changes to the systems particularly described.
An embodiment of the invention will now be described, by way of example only, and with reference to the accompanying drawings in which:
Figure 1a is a schematic representation of an animated image of a toothpaste tube;
Figure 1b is a schematic representation of a recorded image of a real scene; Figure 1c is a schematic representation of a composite image of the images of Figures 1a and 1b showing simple matting of the toothpaste tube 'on top' of the recorded image;
Figure 1d is a schematic representation of a composite image of the images of Figures 1a and 1b showing complex matting of the toothpaste tube in accordance with the invention;
Figure 2a is a schematic representation of the fine grid of video or film pixels; Figure 2b is a schematic representation of the coarse grid of the scanned depth pixels; and
Figure 3 is a schematic representation of a system suitable for carrying out the invention.
A system suitable for carrying out the invention, as shown in Figure 3, comprises a film or video camera 6 which supplies image data to an image recording medium 5 such as video tape or cinematographic film. The image recording medium may be stored within the video or film camera or may be in a separate device such as a video recorder. A depth scanner 7, such as an infra red scanner, ultrasound scanner or preferably a microwave scanner, supplies depth data to a depth recording medium 8. A timecode generator 9 supplies a timecode to both the image recording medium 5 and the depth recording medium 8 for subsequent synchronisation of the data. Other synchronisation codes which could be used include Filmcode, Keycode, Aaton-code, and Arri-code.
Reference numeral 4 of Figure 3 indicates the real scene which is to be recorded by the system. The camera 6 'films' the scene in video or film resolution.
Simultaneously, and in synchronism, the depth scanner 7 measures the distance of the objects comprising the scene from some arbitrary reference plane.
As indicated in Figure 2a, the image resolution will be significantly higher than the depth scan resolution shown in Figure 2b. For example, a video scan according to the European Recommendation 601 standard consists of 720 picture elements scanned along each of 576 active lines. A typical 'sonic' or 'infra red' depth scan will probably only need a resolution of around 70 x 50.
Initially, the depth scan will be calibrated using a 'depth slate'. The depth slate is a frame of image data in which are recorded a series of marker targets positioned in the scene at known distances from the depth scanner. In the simplest form, the 'marker' targets can have on them a 'blackboard' region, where the distance can be written in chalk. A more sophisticated version could consist of LEDs (light emitting diodes) which could display the distance as typed in from an operator's console.
The depth data is in the form of a low resolution 'image' of the scene with pixel values corresponding to the distance of an object, or part of an object, from the depth scanner 7. Typical suitable media for the recording of such an image include the Sony Betacam tape recording medium. Whilst this has the capability to record full bandwidth image signals, low bandwidth recording can be accomplished by repeating picture elements and lines to "make up" the right number of pixels and lines required by such a recording medium. Alternatively, DAT (digital audio tape) format media could be used. The important criterion in the choice of medium is that it supports timecode.
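The trick of "repeating picture elements and lines" to carry a 70 x 50 depth image on a medium expecting, say, a 720 x 576 raster is simply nearest-neighbour replication. A minimal sketch, assuming the depth frame is a list of rows of digital levels (the sizes are those quoted above; the function name is illustrative):

```python
def replicate_to_medium(depth_frame, out_width=720, out_height=576):
    """Pad a coarse depth image up to the pixel and line count that the
    recording medium expects by repeating picture elements and lines."""
    in_height = len(depth_frame)
    in_width = len(depth_frame[0])
    out = []
    for y in range(out_height):
        src_row = depth_frame[y * in_height // out_height]
        out.append([src_row[x * in_width // out_width] for x in range(out_width)])
    return out

coarse = [[row * 10 + col for col in range(70)] for row in range(50)]  # 70 x 50 levels
padded = replicate_to_medium(coarse)
print(len(padded), len(padded[0]))  # 576 720
```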
It is useful now to consider the application of the invention to the example mentioned earlier, involving a toothpaste tube. Suppose that the artistic director wishes to make the toothpaste tube fly through a background scene; he wishes it to travel on a given trajectory, at a given depth. Fig. 1a shows the image of the toothpaste tube 3, Fig. 1b shows a real filmed background comprising objects 1 and 2 and Fig. 1c shows the toothpaste tube superimposed on the background. In the real scene corresponding to the images of Figs. 1b to 1d, object 1 is behind object 2. Fig. 1d shows what may be required. In this example the toothpaste tube appears to be positioned in front of object 1 but behind object 2.
Firstly, the system shown in Figure 3 will be used to record an image of the background scene, as represented by Figure 1b, together with the associated depth data. Edge detection techniques will be used to define each object of the scene in the image and to associate one or more depth values, as calibrated using the depth slate, with each object. It is then possible to compare the magnitude of the 'depth' of the trajectory of the toothpaste tube with the 'depth' of each object (e.g. distance from a reference plane). Thus one can automatically matte the toothpaste tube in front of objects that are further away from the camera than the toothpaste tube, and behind objects that are nearer to the camera than the toothpaste tube, as shown in Figure 1d.
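The final composite decision is then a per-pixel depth comparison: where the tube lies, it obscures the real image only if its depth is less than the depth of the object occupying that pixel. A minimal sketch, assuming each film pixel already carries an object label with a calibrated depth and the tube has a single depth along its trajectory (all names and values are illustrative):

```python
def depth_matte(film_frame, object_depths, anim_frame, anim_depth):
    """film_frame: rows of (pixel, object_label); anim_frame: rows of pixels
    (None where the animated layer is empty). The animated pixel is matted in
    only where it is nearer to the camera than the real object at that pixel."""
    out = []
    for film_row, anim_row in zip(film_frame, anim_frame):
        row = []
        for (film_px, label), anim_px in zip(film_row, anim_row):
            if anim_px is not None and anim_depth < object_depths[label]:
                row.append(anim_px)      # tube is in front: it obscures the object
            else:
                row.append(film_px)      # tube is behind (or absent): object shows
        out.append(row)
    return out

# Object 1 is 8 m away, object 2 is 3 m away; the tube flies at 5 m,
# so it should appear in front of object 1 but behind object 2 (Figure 1d).
depths = {1: 8.0, 2: 3.0}
film = [[("obj1", 1), ("obj2", 2)]]
anim = [["tube", "tube"]]
print(depth_matte(film, depths, anim, anim_depth=5.0))  # [['tube', 'obj2']]
```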

Claims

CLAIMS
1. A method of recording and manipulating image data, in which there are recorded (a) two-dimensional image data for a scene and (b) distance data representing the respective distances from a reference of a plurality of objects at different positions in the scene; and in which the image data is manipulated taking into account the distance data.
2. A method as claimed in claim 1, in which the distance data is recorded simultaneously with the image data.
3. A method as claimed in claim 1 or 2, in which data is also recorded for synchronising the image data and distance data.
4. A method as claimed in any preceding claim, in which the distance data is recorded at a lower two-dimensional resolution than the image data.
5. A method as claimed in any preceding claim, in which the distance data is obtained using an ultrasound, infra red or microwave transceiver.
6. A method as claimed in any preceding claim, in which during the manipulation step objects are identified in the image data and are assigned distance values.
7. A method as claimed in any preceding claim, in which the manipulation of the image data comprises replacing original image data for said scene with new image data corresponding to the existence of an additional object in said scene, said new image data taking into account the distance data.
8. A method as claimed in any preceding claim, in which the manipulation of the image data comprises replacing original image data with new image data corresponding to the effect of a light source on said scene, said new image data taking into account the distance data.
PCT/GB1995/002189 1994-09-15 1995-09-15 Image manipulation WO1996008791A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB9612554A GB2298990B (en) 1994-09-15 1995-09-15 Image manipulation
EP95931338A EP0729623A1 (en) 1994-09-15 1995-09-15 Image manipulation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9418610.3 1994-09-15
GB9418610A GB9418610D0 (en) 1994-09-15 1994-09-15 Image manipulation

Publications (1)

Publication Number Publication Date
WO1996008791A1 true WO1996008791A1 (en) 1996-03-21

Family

ID=10761378

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1995/002189 WO1996008791A1 (en) 1994-09-15 1995-09-15 Image manipulation

Country Status (3)

Country Link
EP (1) EP0729623A1 (en)
GB (2) GB9418610D0 (en)
WO (1) WO1996008791A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1156298A2 (en) * 2000-05-15 2001-11-21 Weimatic GmbH Method and apparatus for the detection of three-dimensional objects
WO2003100703A3 (en) * 2002-05-28 2004-05-27 Casio Computer Co Ltd Composite image output apparatus and composite image delivery apparatus
DE102008042333B4 (en) * 2007-09-28 2012-01-26 Omron Corp. Three-dimensional measuring apparatus and method for adjusting an image pickup apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEE AND MYERS: "3-D PERSPECTIVE VIEW OF SONAR IMAGES", ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 31 October 1988 (1988-10-31) - 2 November 1988 (1988-11-02), SAN JOSE CA US, pages 73 - 77, XP000130237 *
SUENAGA AND WATANABE: "A METHOD FOR THE SYNCHRONIZED ACQUISITION OF CYLINDRICAL RANGE AND COLOR DATA", IEICE TRANSACTIONS, vol. E74, no. 10, TOKYO JP, pages 3407 - 3416, XP000279320 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1156298A2 (en) * 2000-05-15 2001-11-21 Weimatic GmbH Method and apparatus for the detection of three-dimensional objects
EP1156298A3 (en) * 2000-05-15 2002-06-19 Weimatic GmbH Method and apparatus for the detection of three-dimensional objects
WO2003100703A3 (en) * 2002-05-28 2004-05-27 Casio Computer Co Ltd Composite image output apparatus and composite image delivery apparatus
KR100710508B1 (en) * 2002-05-28 2007-04-24 가시오게산키 가부시키가이샤 Composite image output apparatus, composite image output method, and recording medium
KR100710512B1 (en) * 2002-05-28 2007-04-24 가시오게산키 가부시키가이샤 Composite image delivery apparatus, composite image delivery method and recording medium
KR100752870B1 (en) * 2002-05-28 2007-08-29 가시오게산키 가부시키가이샤 Composite image output apparatus, composite image output method and recording medium
US7787028B2 (en) 2002-05-28 2010-08-31 Casio Computer Co., Ltd. Composite image output apparatus and composite image delivery apparatus
EP2357795A3 (en) * 2002-05-28 2013-05-15 Casio Computer Co., Ltd. Composite image output apparatus and composite image delivery apparatus
DE102008042333B4 (en) * 2007-09-28 2012-01-26 Omron Corp. Three-dimensional measuring apparatus and method for adjusting an image pickup apparatus
US8340355B2 (en) 2007-09-28 2012-12-25 Omron Corporation Three-dimensional measurement instrument, image pick-up apparatus and adjusting method for such an image pickup apparatus

Also Published As

Publication number Publication date
GB9612554D0 (en) 1996-08-14
GB9418610D0 (en) 1994-11-02
GB2298990B (en) 1998-12-09
EP0729623A1 (en) 1996-09-04
GB2298990A (en) 1996-09-18

Similar Documents

Publication Publication Date Title
US5963247A (en) Visual display systems and a system for producing recordings for visualization thereon and methods therefor
US6335765B1 (en) Virtual presentation system and method
US6724386B2 (en) System and process for geometry replacement
US9041899B2 (en) Digital, virtual director apparatus and method
US9160938B2 (en) System and method for generating three dimensional presentations
US7852370B2 (en) Method and system for spatio-temporal video warping
Matsuyama et al. 3D video and its applications
US20060165310A1 (en) Method and apparatus for a virtual scene previewing system
US4738522A (en) Method and apparatus for coordinated super imposition of images in a visual display
US20030174285A1 (en) Method and apparatus for producing dynamic imagery in a visual medium
US11514654B1 (en) Calibrating focus/defocus operations of a virtual display based on camera settings
JP2000503177A (en) Method and apparatus for converting a 2D image into a 3D image
US20220245870A1 (en) Real time production display of composited images with use of mutliple-source image data
US11615755B1 (en) Increasing resolution and luminance of a display
US11328436B2 (en) Using camera effect in the generation of custom synthetic data for use in training an artificial intelligence model to produce an image depth map
WO1996008791A1 (en) Image manipulation
GB2399248A (en) Projection of supplementary image data onto a studio set
CA2191711A1 (en) Visual display systems and a system for producing recordings for visualization thereon and methods therefor
Braham The digital backlot [cinema special effects]
KR102654323B1 (en) Apparatus, method adn system for three-dimensionally processing two dimension image in virtual production
GB2586831A (en) Three-Dimensional Content Generation using real-time video compositing
CN114641980A (en) Reconstruction of occluded views of a captured image using arbitrarily captured input
Mirpour A Comparison of 3D Camera Tracking Software
Brown-Simmons Computer animation within videospace

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): GB JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase

Ref document number: 1995931338

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 1995931338

Country of ref document: EP

WWR Wipo information: refused in national office

Ref document number: 1995931338

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1995931338

Country of ref document: EP