WO2001067749A2 - Camera pose estimation - Google Patents

Camera pose estimation

Info

Publication number
WO2001067749A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
scene
pose
images
model
Prior art date
Application number
PCT/US2001/007099
Other languages
French (fr)
Other versions
WO2001067749A3 (en)
Inventor
Harpreet Singh Sawhney
Rakesh Kumar
Steve Hsu
Supun Samarasekera
Original Assignee
Sarnoff Corporation
Priority date
Filing date
Publication date
Application filed by Sarnoff Corporation
Priority to AU2001250802A priority Critical patent/AU2001250802A1/en
Priority to EP01924119A priority patent/EP1297691A2/en
Publication of WO2001067749A2 publication Critical patent/WO2001067749A2/en
Publication of WO2001067749A3 publication Critical patent/WO2001067749A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G01S 5/163 Determination of attitude
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Definitions

  • the present invention is directed toward the domain of image processing, in particular toward the creation and manipulation of three dimensional scene models and virtual images of the scene seen from arbitrary viewpoints.
  • That constraint must come from physical measurements like GPS, or surveyed landmarks, or from a prior scene model with shape and/or texture information.
  • the problem of loss of metric accuracy is particularly acute for analysis and control of a remote scene in which use of such constraint indicia is not practical, for example to control a remote vehicle on Mars or underground.
  • the present invention is embodied in a method of accurately estimating a pose of a camera within a scene using a three dimensional model of the scene.
  • the exemplary method begins by generating an initial estimate of the camera pose. Next, a set of relevant features of the three-dimensional model based on the estimate of the pose is selected. A virtual projection of this set of relevant features as seen from the estimated pose is then created. The virtual projection of the set of relevant features is matched to features of the received image and matching errors between the features of the image and the features of the projection are measured. The estimate of the pose is then updated to reduce the matching errors.
  • a second embodiment of the present invention is a method of refining a three dimensional model of a scene using an image of the scene taken by a camera, which may have an unknown pose. First the image is compared to the three dimensional model of the scene to generate an estimate of the camera pose. The three dimensional model of the scene is then updated based on data from the image and the estimated pose.
  • Another embodiment of the present invention is a method of accurately estimating a position of a remote vehicle using a three dimensional model and an image from a camera having a known orientation relative to the remote vehicle.
  • the image is compared to the three dimensional model of the scene to generate an initial estimate of the pose.
  • a set of relevant features of the three dimensional model are selected and matched to features of the image. Matching errors are then measured and used to update the estimate of the pose.
  • the position of the remote vehicle is determined using the estimated pose and the orientation of the camera.
  • the three dimensional model may then be updated based on data from the image and the estimate of the pose.
  • Yet another embodiment of the present invention is a method of refining a three dimensional model of a scene containing an object using a plurality of images of the scene.
  • a first image and a second image are compared to the three dimensional model of the scene to generate first and second pose estimates.
  • the first image and the second image are compared to one another to generate relative pose constraints.
  • the first pose and the second pose are updated based on these relative pose constraints.
  • Two sets of relevant features of the three-dimensional model corresponding to the first pose and the second pose are selected. These sets of relevant features are then matched to features in the first and second images, and two sets of matching errors are measured.
  • the position estimate of the object within the three-dimensional model is then updated to reduce the two sets of matching errors.
  • the present invention is also embodied in a method of creating a textured three-dimensional model of a scene using a plurality of images of the scene.
  • First, a polyhedral model of the scene including a plurality of polygonal surfaces is created. Each polygonal surface in the polyhedral model is larger than a predetermined size.
  • One polygonal surface is then separated into a plurality of portions. For a selected portion, a subset of images containing that portion of the polygonal surface is identified. The corresponding section of the selected image is then projected onto the selected portion of the polygonal surface in the textured three-dimensional model as a local color map.
  • the present invention is also embodied in a video flashlight method of creating a dynamic sequence of virtual images of a scene using a dynamically updated three dimensional model of the scene.
  • the three dimensional model is updated using a video sequence of images of the scene as described with regard to the second embodiment of the present invention. Meanwhile a viewpoint of a virtual image is selected.
  • the virtual image is then created by projecting the dynamic three-dimensional model onto the virtual viewpoint.
  • Figure 1 is a top plan drawing of several cameras viewing a scene.
  • Figure 2 is a perspective drawing of a polyhedral model of the scene in Figure 1.
  • Figure 3 is a drawing illustrating selection of relevant features from the polyhedral model of Figure 2.
  • Figure 4 is a flowchart illustrating an embodiment of the present invention to accurately estimate the pose of an image of a scene using a three dimensional model of the scene.
  • Figure 5 is a flowchart illustrating an embodiment of the present invention to update a three dimensional model of a scene using an image of the scene with an unknown pose.
  • Figure 6 is a perspective drawing illustrating the use of epipolar geometry to determine the relative pose of two images.
  • Figure 7 is a flowchart illustrating an embodiment of the present invention to update a three dimensional model of a scene using a pair of images of the scene with unknown poses.
  • Figure 8 is a top plan drawing illustrating a remote vehicle with a mounted camera viewing the scene from Figure 1.
  • Figure 9 is a flowchart illustrating an embodiment of the present invention to accurately estimate the position of a remote vehicle within a scene using a three dimensional model of the scene and an image of the scene.
  • Figure 10 is a perspective drawing of a surface from Figure 2 illustrating the separation of the surface into approximately equal sized portions used in an exemplary hybrid three dimensional model according to the present invention.
  • Figure 11 is a flowchart illustrating an embodiment of the present invention to create a hybrid three-dimensional model of a scene.
  • Figure 12 is a perspective drawing of a scene illustrating an exemplary video flashlight method of the present invention.
  • Figure 13 is a flowchart illustrating an embodiment of the present invention to create virtual images of a scene and a dynamic three-dimensional model of the scene using a video flashlight method.
  • the present invention embodies systems and methods for processing one or more video streams, for the purposes of progressively constructing and refining a 3D- scene model, geolocating objects, and enhanced visualization of dynamic scenes.
  • One embodiment of the present invention is a progressive strategy to model construction and refinement, starting with a coarse model and incrementally refining it with information derived from freely moving video cameras, thereby increasing the spatial fidelity and temporal currentness of the model.
  • Figure 1 is a top plan drawing showing several cameras 108, which may be stationary or may be in motion.
  • the cameras view an exemplary scene 100, containing three objects, cube 102, star prism 104, and cylinder 106.
  • Figure 2 is a perspective drawing of an exemplary polyhedral model of scene 100 from Figure 1. All objects in this model are represented as polyhedra composed of planar polygonal surfaces 200, even a curved object, such as cylinder 106. To accomplish such a polyhedral representation, the modeled representation of cylinder 106 contains artificial edge boundaries 204. Also shown in Figure 2 are dotted lines 202, which represent edge boundaries that are hidden in the illustrated view of the model. While this model is shown as an image of a physical scene, it is contemplated that the model may be a mathematical representation of the scene, stored in a data file.
  • a first exemplary embodiment of the present invention is a method of automatic pose estimation and registration of video to an existing polyhedral scene model, which may or may not have texture information suitable for registration.
  • the method of this embodiment may be used to estimate camera pose with respect to textured or untextured models, by optimizing the fit between line segments projected from model to image and the gradients of image intensity. This method may be combined with interframe prediction to continuously track a video stream.
  • Figure 4 is a flowchart illustrating this exemplary embodiment of the present invention.
  • the method begins with a three dimensional model of the scene, 400, such as that shown in Figure 2.
  • the first step, 402, is to generate an initial, rough estimate of the pose of an image by comparing the image to the three dimensional model of the scene. Numerous methods may be used to obtain this initial estimate.
  • a user can set the pose for the first frame. Physical measurements from position/attitude sensors mounted on the camera platform may be another source.
  • the estimated pose of the previous frame may be used as the initial pose, or the pose can be predicted from the previous frame's estimated pose based on assumed motion. Interframe alignment may also be used to assist with prediction and with overcoming ambiguity.
  • One method for determining an initial pose estimate may be to extract corresponding 2D feature points from a pair of successive images using an optical flow technique.
  • flow estimation may be initialized with the 2D projective motion field estimated by a global parametric motion algorithm. Methods for flow estimation of this type are described in U.S. Patent number 5,629,988, entitled "SYSTEM AND METHOD FOR ELECTRONIC IMAGE STABILIZATION".
  • each selected feature point is mapped into its corresponding 3D point in the world view by using the current estimated pose and the current shape model.
  • the initial pose estimate is then determined from the set of 3D and 2D correspondences. This may be done, for example, using the Random Sample Consensus (RANSAC) method of robust least squares fitting.
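As a concrete illustration of this step, the sketch below feeds 3D-2D correspondences to a RANSAC-based PnP solver. It is a minimal sketch, assuming a calibrated camera and using OpenCV's solvePnPRansac as an illustrative stand-in for the robust least-squares fitting described above; the function and variable names are not from the patent.

```python
# Sketch: robust initial pose from 3D-2D correspondences (illustrative, not the
# patent's implementation). points_3d are model points obtained by back-projecting
# tracked 2D features through the current pose and shape model; points_2d are
# their locations in the new image.
import cv2
import numpy as np

def initial_pose_ransac(points_3d, points_2d, camera_matrix, dist_coeffs=None):
    """Estimate camera rotation and translation with RANSAC-based PnP."""
    points_3d = np.asarray(points_3d, dtype=np.float64).reshape(-1, 3)
    points_2d = np.asarray(points_2d, dtype=np.float64).reshape(-1, 2)
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)          # assume no lens distortion
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d, points_2d, camera_matrix, dist_coeffs,
        iterationsCount=200,               # RANSAC trials (illustrative)
        reprojectionError=3.0)             # pixel tolerance for inliers
    if not ok:
        raise RuntimeError("initial pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 matrix
    return R, tvec, inliers
```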
  • a set of relevant features is selected from the three dimensional model based on the estimate of the pose, step 404.
  • the existing model may be considered a collection of untextured polygonal faces. Face edges in the model imply discontinuities in surface normal and/or material properties in the actual 3D scene, which generally induce brightness edges in an image. Although the subject invention is described in terms of brightness edges, it is contemplated that other types of edges may be used such as those defined by changes in color or texture on an appropriately colored and/or textured model. Because they are relatively easy to identify, an exemplary alignment method selects 3D line segments from the model as relevant features.
  • Figure 3 is a drawing illustrating selection of relevant features 302 from the polyhedral model of Figure 2.
  • Figure 3 is drawn to have the same pose as that used for the perspective drawing of Figure 2.
  • occluded fragments of model features, dotted lines 202 in Figure 2 may be deleted by known methods, such as a Z-buffer hidden line removal process.
  • multiple local minima in pose estimation may be caused by closely spaced edges in either the model or the image even if they correspond to brightness edges in the image. Therefore, once the possible relevant features of the model have been identified, a projection of these features as they appear in the estimated pose is created, step 406 in Figure 4, and a projected model feature is culled out if any similarly oriented projected model edge lies near it. Projected model features may also be culled if image texture analysis reveals the presence of multiple image edges within a predetermined distance. This culling may be dynamically adapted to the current level of uncertainty, so that as pose becomes better known, more edges can be included for increased pose estimation accuracy.
  • Application-dependent heuristics for this culling may be applied, for example, keeping only model edges common to exactly two faces, keeping edges only at large dihedral angles, ignoring edges which are too close together, and ignoring edges near and on the ground (terrain faces may be annotated in the given model).
  • the dotted lines 300 represent features which have been ignored in this exemplary pose as too close to other lines. Also missing are the artificial edge boundaries 204 from the model shown in Figure 2, as their dihedral angles have been determined to be too small.
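A rough prototype of the culling heuristics described above might look as follows. The edge record layout (adjacent faces with normals, a projected 2D segment) and all thresholds are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def cull_model_edges(edges, min_crease_deg=30.0,
                     min_separation_px=5.0, max_angle_diff_deg=15.0):
    """edges: list of dicts with 'faces' (each carrying a unit 'normal') and
    'proj', the edge's two projected 2D endpoints under the current pose."""
    kept = []
    for e in edges:
        if len(e["faces"]) != 2:          # keep edges common to exactly two faces
            continue
        n1, n2 = (np.asarray(f["normal"]) for f in e["faces"])
        # angle between face normals; near 0 means nearly coplanar faces
        crease = np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))
        if crease < min_crease_deg:       # drop shallow (small dihedral) edges
            continue
        kept.append(e)

    def orientation(seg):                 # segment orientation in [0, 180) degrees
        d = np.asarray(seg[1]) - np.asarray(seg[0])
        return np.degrees(np.arctan2(d[1], d[0])) % 180.0

    def midpoint(seg):
        return 0.5 * (np.asarray(seg[0]) + np.asarray(seg[1]))

    culled = []
    for i, e in enumerate(kept):          # drop edges crowded by similar neighbours
        crowded = False
        for j, other in enumerate(kept):
            if i == j:
                continue
            dist = np.linalg.norm(midpoint(e["proj"]) - midpoint(other["proj"]))
            diff = abs(orientation(e["proj"]) - orientation(other["proj"]))
            diff = min(diff, 180.0 - diff)
            if dist < min_separation_px and diff < max_angle_diff_deg:
                crowded = True
                break
        if not crowded:
            culled.append(e)
    return culled
```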
  • While the pose, and hence occlusions, may change during the optimization process, it may suffice to cull model edges once when initializing the optimization and afterwards only if the pose changes significantly.
  • This determination, based on the change in the estimated pose in the exemplary method of Figure 4, is shown as step 418. If the pose of the camera has changed by more than a threshold amount since the features were selected from the model, step 418 jumps to step 404 to select a new set of features.
  • the process of pose refinement desirably leads toward smaller changes in the estimated pose. Therefore, often the estimate of pose will quickly become approximately correct. In this instance, step 418 branches to step 406 and the same set of relevant features will be used throughout the remaining pose refinement.
  • the virtual projection of the relevant features is matched to brightness features in the image, step 408. Additionally, color and/or texture edges may be used to generate image features, not just brightness edges. Matching errors are generated to indicate the goodness of the fit, step 410. The estimated pose is then updated based on these matching errors, step 412.
  • the matching error calculation may be accomplished by many methods, including a number of image processing methods used for image warping and mosaic construction.
  • One difficulty that may arise is that an image may have brightness edges that are close together, causing local minima in the objective function. Also, if the given polyhedral model does not agree with veridical scene structure, no pose can simultaneously align all edge features well. Projected model features may get stuck at the wrong image edge, preventing progress towards the correct pose.
  • One exemplary method is to use any of a number of discrete search schemes. Perturbing the pose estimate, either at the initial estimate, step 402, or during refinement, step 412, is one method that may increase the likelihood of finding the global minimum. These perturbations in pose estimate may be generated by explicitly perturbing the pose parameters. Alternatively, they may be generated in a data-driven fashion, as in RANSAC, wherein pose hypotheses are estimated from many randomly chosen subsets of the model features.
  • interframe registration may be used to yield constraints on relative poses of neighboring frames, which, in turn, may allow poses of a collection of frames to be determined unambiguously. If a set of nearby frames collectively views a larger set of edges than any single frame, simultaneously exploiting image-to-model constraints in multiple frames may help to disambiguate pose.
  • An exemplary embodiment of the present invention aligns the three dimensional model to a mosaic, virtual wide-angle image, from a virtual rigid rig of cameras. Suppose that a pose estimate has been predicted for one frame, but that this pose estimate is ambiguous due to a paucity of relevant model edges. For each of the remaining frames, interframe prediction may be used to obtain pose estimates. These pose estimates may then be treated as a virtual rigid rig of cameras. The total image-to-model error of the mosaic, virtual wide-angle image, created from these frames is then minimized. This method is particularly useful for scenes in which the features are relatively static.
  • Mismatched features may also occur because the attributes being measured are not sufficiently discriminating.
  • a projected model feature may be matched equally well to an image edge of equal length, an image edge of longer length, or a series of short image edges.
  • Many different methods for matching image features tuned to scale and orientation are known in the art.
  • One method that may be used is pyramid processing. Both the image and the model may be processed into a pyramid of resolution levels. As the estimated pose becomes more accurate (i.e. the matching errors decrease), the resolution level, and number of features retained, may be increased.
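For the pyramid processing mentioned above, a Gaussian image pyramid can be built in a few lines; this minimal sketch uses OpenCV's pyrDown, and the level count is an illustrative assumption.

```python
import cv2

def gaussian_pyramid(image, levels=4):
    """Return a coarse-to-fine pyramid; pyramid[0] is full resolution."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))   # blur and halve resolution
    return pyramid
```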
  • the matching errors may also be optimized with respect to the estimated pose via steepest descent.
  • the current pose estimate may be iteratively incremented by moving in the gradient direction in the space of an orthogonalized incremental pose parameter. In that space, a unit step in any direction causes the RMS change in line segment points of image features to be 1 pixel.
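The following is a hedged sketch of refining a six-parameter pose by steepest descent on a matching-error function. It uses a simple numeric gradient and omits the orthogonalized incremental pose parameterization described above; the matching_error callback, step sizes, and names are assumptions.

```python
import numpy as np

def refine_pose(pose, matching_error, step=1e-3, rate=0.1, iterations=50):
    """pose: [rx, ry, rz, tx, ty, tz]; matching_error(pose) returns the
    robust image-to-model error for a candidate pose."""
    pose = np.asarray(pose, dtype=float).copy()
    for _ in range(iterations):
        base = matching_error(pose)
        grad = np.zeros_like(pose)
        for k in range(pose.size):                    # finite-difference gradient
            probe = pose.copy()
            probe[k] += step
            grad[k] = (matching_error(probe) - base) / step
        norm = np.linalg.norm(grad)
        if norm < 1e-9:                               # flat: nothing to do
            break
        pose -= rate * grad / norm                    # unit step in gradient direction
        if matching_error(pose) > base:               # no improvement: back off
            pose += rate * grad / norm
            rate *= 0.5
    return pose
```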
  • One of the properties that may be desirable for the matching errors is a degree of robustness against outliers. Desirably this robustness includes both poorly fitting lines and model clutter, i.e. a spurious model feature that has no true matching image feature. If the projected feature lies far from any image feature, it contributes little to the calculation of the gradient of estimated pose. To reduce the chance that a projected model line segment, whether an artifact of model clutter or a valid feature, would be attracted to a falsely matching image feature, dissimilarly oriented model and image edges may be ignored by using an oriented energy image tuned to the angle of the model line.
  • an oriented energy image including the oriented energy at a selected angle may be computed as follows.
  • the source image is differentiated in the selected direction and smoothed in a perpendicular direction.
  • the magnitude of the latter is smoothed in the selected direction.
  • This method responds to both step edges and impulses.
  • the orientation may be quantized to multiples of 45° and the scale to powers of 2, as in a pyramid.
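A simplified version of the oriented-energy computation described above, restricted to a single axis-aligned orientation and a single scale, might look like the following; the angle convention and smoothing sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def oriented_energy(image, angle_deg=0, sigma=2.0):
    """Oriented energy for angle 0 or 90 degrees (assumed convention: the
    angle gives the direction of differentiation)."""
    image = np.asarray(image, dtype=float)
    axis = 1 if angle_deg == 0 else 0
    perp = 1 - axis
    # differentiate in the selected direction, smooth perpendicular to it
    d = np.gradient(image, axis=axis)
    d = gaussian_filter1d(d, sigma, axis=perp)
    # take the magnitude and smooth it in the selected direction
    return gaussian_filter1d(np.abs(d), sigma, axis=axis)
```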
  • An exemplary method to increase the capture range and speed convergence is to vary the scale of the energy image from coarse to fine resolution during the optimization process.
  • the scale may desirably be tuned for each model line, commensurate with the expected amount of uncertainty in the location of the predicted feature with respect to the actual corresponding image edge.
  • the orientation tuning may be very broad in the oriented energy pyramid described above.
  • the solution may be to add more discriminating feature attributes to the representation.
  • the measure of the matching errors may be improved, for example, by using sharper orientation tuning.
  • the camera may only see a small part of the model at a time.
  • there may be individual images that contain little texture or few model edges, so that matching features may be difficult and the pose may be undetermined.
  • One method to overcome this difficulty is to combine several images to form a mosaic that is warped to the coordinate system of the current image. This composite image may be used in the algorithm, in place of the individual image, to determine the pose of the camera at the position of the current image.
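One possible sketch of forming such a composite image is shown below, assuming interframe homographies that map each neighbouring frame into the current frame's coordinate system have already been estimated; limiting the canvas to the current frame's extent is a simplification of a full mosaic.

```python
import cv2
import numpy as np

def mosaic_into_current(current_frame, neighbour_frames, homographies):
    """Composite warped neighbour frames around the current frame; the
    current frame's own pixels take precedence."""
    h, w = current_frame.shape[:2]
    mosaic = current_frame.copy()
    filled = mosaic.reshape(h, w, -1).sum(axis=-1) > 0   # rough "has data" mask
    for frame, H in zip(neighbour_frames, homographies):
        warped = cv2.warpPerspective(frame, np.asarray(H, np.float64), (w, h))
        has_data = warped.reshape(h, w, -1).sum(axis=-1) > 0
        take = has_data & ~filled                        # only fill empty pixels
        mosaic[take] = warped[take]
        filled |= take
    return mosaic
```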
  • steps 406, 408, 410, and 412 may be repeated until a set number of iterations have occurred, step 414, or until the matching errors fall below a set criteria, step 416 (alternatively, the matching score, if used, may exceed a predetermined level).
  • Figure 5 is a flowchart illustrating an embodiment of the present invention to update a three dimensional model of a scene using an image of the scene with an unknown pose.
  • This method begins with several steps that are similar to those described above with regard to Figure 4.
  • the embodiment illustrated in Figure 5 begins with a three dimensional model of the scene, 500.
  • This starting model may be very rudimentary and limited in its scope. As images are aligned and information from these images used to update the model, the detail and range of the model may both be increased.
  • the image is compared to the three dimensional model of the scene to generate an initial estimate of the pose, step 502.
  • a set of relevant features are selected from the three dimensional model based on this pose estimate, step 504.
  • the relevant features of the model are then projected and matched to features in the image to generate matching errors, step 506.
  • the estimated pose is updated based on the matching errors, step 508.
  • step 512 may query if the matching score exceeds a predetermined level.
  • Step 514 determines whether the pose estimate indicates that the pose of the camera has changed so much from the initial pose in which the current relevant features were selected that it may be desirable to reselect a new set of relevant model features to be used for continued pose refinement.
  • the data from the image is used to update the three dimensional model, step 516.
  • This update may include: the refinement of object placement within the scene; the addition of textural or fine structural detail to surfaces of objects; temporal updating of dynamic processes, such as changing illumination, or objects in motion; and the spatial extension of the scene being modeled.
  • a number of methods to project image data onto a corresponding three dimensional model, such as a simple ray tracing method, are known in the art and may be used in step 516. For many of these methods it is desirable to use two, or more, images simultaneously, particularly to assist with refining object placement and identifying objects in motion.
  • Figure 6 is a perspective drawing illustrating a plane in the scene, 600, and the poses of two images, 602 and 604, of the scene, both of which include object 606.
  • the baseline between the two camera centers is shown as the line O - O'; its intersections with the two image planes define the epipoles.
  • the epipolar geometry of a scene may also be used to place constraints on the relative pose of images 602 and 604 even if the location of object 606 within the scene is not known.
  • plane plus parallax methods may be used to generate relative pose constraints between images. A detailed discussion of some of these methods is provided in U.S. Patent number 6,192,145, entitled “METHOD AND APPARATUS FOR THREE-DIMENSIONAL SCENE PROCESSING USING PARALLAX GEOMETRY OF PAIRS OF POINTS”.
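As a hedged illustration of deriving relative pose constraints from two overlapping images, the sketch below uses an essential-matrix decomposition with OpenCV rather than the plane-plus-parallax formulation cited above; calibrated cameras and the array layouts are assumptions.

```python
import cv2
import numpy as np

def relative_pose_constraint(pts1, pts2, camera_matrix):
    """pts1, pts2: Nx2 arrays of corresponding feature locations in two images.
    Returns a relative rotation and a translation direction (scale unknown)."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, camera_matrix,
                                          method=cv2.RANSAC, threshold=1.0)
    # decompose E into rotation R and unit translation t via cheirality check
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, camera_matrix)
    return R, t, inlier_mask
```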
  • Figure 7 is a flowchart illustrating another exemplary embodiment of the present invention. This exemplary method is similar to that previously described with regard to Figure 5, but uses multiple images of the scene with unknown poses to update a three dimensional model of a scene. Although the exemplary method shown in Figure 7 only employs two images at a time, it is contemplated that three or more images may be simultaneously used in this embodiment. The images may be successive images from a moving source, or they may be concurrently generated images from multiple sources.
  • the parallel paths in Figure 7, such as steps 702 and 704, represent parallel processes, which may occur as multi-threaded tasks within the image processing hardware of the present invention. It may be desirable for these tasks to be performed concurrently by parallel processors to increase processing speed.
  • this embodiment begins with a three dimensional model of a scene, 700, and the first and second images are compared to the three dimensional model of the scene, steps 702 and 704 respectively, to generate pose estimates.
  • pose estimates may be initial pose estimates, or may be refined as described in previous embodiments of the present invention. If the pose estimates are refined, it may be desirable for the matching error criteria to be less precise than in previous embodiments.
  • the first and second image are compared to one another, step 706, to generate constraints on the relative poses of the two images.
  • Relative pose constraints can also be generated due to the known motion of a single camera, or known spatial relationships between multiple cameras.
  • the pose estimates are then updated to accommodate the relative pose constraints, step 708. While it is often desirable to minimize the individual image to model mismatches when the pose estimates are being updated, this update should result in pose estimates that fit within the relative pose constraints generated in step 706.
  • sets of relevant features from the three dimensional model, corresponding to the first image and the second image are selected based on the updated poses, steps 710 and 712 respectively. These sets of relevant features may be selected by any of the previously disclosed methods.
  • the corresponding set of relevant features is next projected to the estimated pose of the first image, matched to the first image and first image matching errors are measured, step 714. The same procedure is followed for the second image in step 716.
  • the three dimensional model is updated using the matching errors from both images to improve the position estimate of objects within the model, step 718.
  • the matching errors are jointly minimized with respect to position estimate for both images to refine the placement of known objects in the scene model. Further iterations may be used to improve the model refinement, and possibly the estimated camera poses as well. These iterations may start by once again estimating the image poses, steps 702 and 704, or with the selection of sets of relevant model features, steps 710 and 712.
  • refinement of object placement may be extended to more general deformations by using an additional set of parameters to describe known sizes and relationships of parts of objects.
  • This application of the present invention involves using current video from a camera mounted on a remote vehicle to locate the vehicle relative to objects in a three dimensional model.
  • An additional application for the present invention may be to extend and refine a three dimensional model of an area as the area is explored by the remote vehicle possibly beyond the range of the scene originally covered by the model.
  • Figure 8 is a top plan drawing of remote vehicle 800, with a mounted camera, viewing the scene of Figure 1.
  • Figure 9 is a flowchart illustrating an exemplary embodiment of the present invention to accurately estimate the position of a remote vehicle within a scene using a three dimensional model of the scene and an image of the scene. As previously described, the method begins with a three dimensional model and an image from a camera having an orientation relative to the remote vehicle, 900, that can be determined.
  • the next seven steps, 902, 904, 906, 908, 910, 912, and 914, involve estimating and refining the camera pose with respect to the three dimensional model. These steps are the same as steps 502, 504, 506, 508, 510, 512, and 514 in the flowchart of Figure 5.
  • the position and orientation of the remote vehicle relative to the scene in the three dimensional model may be determined, step 916. Because the mounted camera may be adjusted to rotate, tilt or zoom, parameters controlling these functions may need to be examined to determine the orientation of the camera relative to the remote vehicle.
  • the pose of the camera may be used independently of the position and orientation of the vehicle to update the model.
  • the three dimensional model is updated based on data from the image and its estimated pose, step 918, similarly to step 516 in Figure 5.
  • the camera orientation, and possibly its position, relative to the remote vehicle are determined, step 920, to fix the position of the vehicle. The process begins again at step 902 when another image is taken.
  • an exemplary embodiment of the present invention may be used for detailed refinement of a three dimensional model, including definition of local surface shape and texture.
  • the true shape of the surfaces in the scene may differ from the planar surfaces of the given polyhedral model, even after refinement of object placement.
  • the existing objects may have changed, and objects may be added or deleted.
  • Some scene objects, especially natural objects like trees, may be absent from the model, or unsatisfactorily represented, because they are hard to represent using simple polyhedra. This can be clearly seen for the polyhedral model of cylinder 106 in Figure 2.
  • With an untextured model, such as the previously mentioned polyhedral model, the brightness of an object in the scene may be approximated as being constant, independent of the viewpoint, and each surface pixel may be assigned a color value that is most typical of all the given images.
  • polygonal surfaces of a polyhedral three- dimensional model may be separated into approximately equally sized portions. This may allow for better mapping of surface detail from a series of images onto the model, particularly for surfaces that are only partially visible in some of the images.
  • Figure 10 shows a perspective drawing of surface 200, from Figure 2, which has been separated into approximately equal sized portions 1000. Surface detail 1002 for one portion, which may be added to the polyhedral model, is also shown. By adding surface detail to a polyhedral model of the scene in this manner, a hybrid three-dimensional model of the scene may be created.
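One simple way to split a planar face into approximately equal portions is sketched below for the special case of a quadrilateral; the target portion size and the quad-only restriction are illustrative assumptions, not the patent's method.

```python
import numpy as np

def subdivide_quad(corners, portion_size):
    """corners: 4x3 array of quad corners in order; returns a list of 4x3 sub-quads."""
    c = np.asarray(corners, dtype=float)
    nu = max(1, int(round(np.linalg.norm(c[1] - c[0]) / portion_size)))
    nv = max(1, int(round(np.linalg.norm(c[3] - c[0]) / portion_size)))

    def point(u, v):                      # bilinear interpolation on the quad
        return (1-u)*(1-v)*c[0] + u*(1-v)*c[1] + u*v*c[2] + (1-u)*v*c[3]

    portions = []
    for i in range(nu):
        for j in range(nv):
            u0, u1 = i/nu, (i+1)/nu
            v0, v1 = j/nv, (j+1)/nv
            portions.append(np.array([point(u0, v0), point(u1, v0),
                                      point(u1, v1), point(u0, v1)]))
    return portions
```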
  • Figure 11 is a flowchart illustrating an exemplary embodiment of the present invention to create such a hybrid three dimensional model of a scene using a series of images of the scene.
  • the first step is to create a polyhedral model of the scene, step 1100.
  • the polyhedral model desirably includes a plurality of polygonal surfaces, which are larger than a predetermined size for surface portions. Any standard technique for creating the model may be used.
  • the rough shape and placement of objects in the polyhedral model may be refined according to the techniques of either of the flowcharts in Figures 5 and 7 using previous, possibly low resolution, images of the scene and/or the series of images to be employed for adding surface detail to the model. It is also important to determine the camera poses of the series of images. The poses may be known, or may be determined by one of the methods described above.
  • a polygonal face is selected from the polyhedral model for determination of surface details, step 1102.
  • the selection may be predetermined to sweep the analysis from one side of the model to its opposite side, or it may be based on a determination of scene areas in the images where the modeled shape appears to differ significantly from the existing model.
  • the latter selection method is most useful when refinement of surface details is desirably performed only in those areas of the model that deviate from the images by an amount greater than a certain threshold.
  • the last step is to convert the resulting parallax magnitude to metric height.
  • the estimated shape may be very accurate; however, occlusion and deocclusion areas cannot be well represented using a depth map with respect to one reference view.
  • a smaller batch of images may produce a shape estimate good enough to align and synthesize viewpoints in the vicinity of the source data.
  • an alternative may be to use the parallax estimation methods to independently estimate and represent shape within a series of overlapping batches of images, each with its own reference view. Each such estimate would be stored in a hybrid scene representation consisting of a polyhedral model plus view-dependent local shape variations.
  • step 1104 in Figure 11 is to separate the selected polygonal surface into portions of approximately the predetermined size for surface portions. This predetermined size may be as small as a single element within the model, or it may be as large as the entire polygonal surface. Next one of these portions is selected, step 1106. Z-buffering may be used to detect and discard those images which are occluded by some other face, leaving a subset of images which contain the selected surface portion, step 1108.
  • X is mapped to every image in the subset, using object placement parameters for the surface, camera pose parameters and, if available, the previously estimated height maps.
  • One method is to combine the color values from the pixels in the subset of images corresponding to X. This may be done by averaging the color values, blending the color values or performing another mathematical function on the color values of the various images in the subset.
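The combination of color values might be prototyped as below; the per-texel sample layout and the choice between a median and a mean are assumptions made for illustration.

```python
import numpy as np

def combine_texel(samples, mode="median"):
    """samples: (n_images, 3) array of RGB values seen at point X, one row per
    image in the subset. Median is robust to a few outlier views; mean is
    simple averaging."""
    samples = np.asarray(samples, dtype=float)
    if mode == "median":
        return np.median(samples, axis=0)
    return np.mean(samples, axis=0)
```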
  • the color and brightness at corresponding points in different images may not be identical, due to camera gain variations and non-Lambertian surface materials. Abruptly switching between different source images while computing the appearance of a single face may cause seams to appear in the texture map. Accordingly, it may be desirable to use pixel values from a single image to map texture onto a given surface of the model, if possible.
  • if multiple images were used, step 1114, any artifacts caused by using multiple images to generate the texture maps of the portions of a single surface may be mitigated through multi-resolution blending of the portions, step 1116, as taught in U.S. Patent number 6,075,905, entitled "METHOD AND APPARATUS FOR MOSAIC IMAGE CONSTRUCTION".
  • This step may also be employed when a single image is used to map texture onto all of the portions of a given surface of the model.
  • the foregoing method to create a surface texture map of a hybrid three dimensional model may be used whether the scene is initially modeled by planar faces alone or by a model of planar faces plus shape variations.
  • texture may be recovered for each local reference view. The result would be a 3D-scene model with view-dependent shape and texture.
  • One exemplary method to synthesize a new image from a novel viewpoint may be to interpolate the shape and texture of two or more nearby reference views closest to the desired viewpoint, according to the teachings of U.S. Patent Application number 08/917402, entitled "METHOD AND SYSTEM FOR RENDERING AND COMBINING IMAGES TO FORM A SYNTHESIZED VIEW OF A SCENE CONTAINING IMAGE INFORMATION FROM A SECOND IMAGE".
  • Figure 12 is a perspective drawing of a scene, 1200, illustrating an exemplary video flashlight method of the present invention, which may be employed to synthesize an image from a novel viewpoint using a dynamic three dimensional model, such as those previously described with regard to the flowcharts of Figures 5, 7, and 11.
  • the exemplary scene, 1200 is being filmed by video cameras 1202, which may be in motion. At any given time, images from these cameras contain only a portion of the scene, 1204.
  • Figure 13 is a flowchart illustrating an exemplary embodiment of the present invention to create virtual images of a scene and a dynamic three-dimensional model of the scene using this video flashlight method.
  • This exemplary method involves two concurrent processes. Both of these processes start from a preexisting three dimensional model of the scene, 1300. The first of these processes, updating the model using the incoming video data, is exemplified in Figure 13 as steps 1302, 1304, 1306 and 1308. Any of the methods previously described above for updating a three dimensional model may be used. This dynamic updating process is continuous, so that the model may contain the most currently available data.
  • the second process is the creation of output image sequence(s) based on the dynamic three-dimensional model.
  • This process involves the selection of a virtual viewpoint, step 1310, and projection of the current model onto the selected viewpoint, step 1312.
  • the viewpoint selection may be controlled manually, set to track a feature, or object, in the model, or may follow a predetermined pattern.
  • Projection of a three dimensional model to form an image as seen from a selected point may be performed by a number of well-known methods, such as Z-buffering.
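As one illustration of such a projection, the sketch below splats a set of colored model points into the selected virtual view with a simple depth (Z) buffer; a production renderer would rasterize the model's polygons, and all names here are assumptions.

```python
import numpy as np

def render_points(points_3d, colors, K, R, t, width, height):
    """K: 3x3 intrinsics of the virtual camera; R, t: world-to-camera pose."""
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    cam = np.asarray(points_3d) @ np.asarray(R).T + np.asarray(t).reshape(1, 3)
    for (X, Y, Z), color in zip(cam, colors):
        if Z <= 0:                      # behind the virtual camera
            continue
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= u < width and 0 <= v < height and Z < zbuf[v, u]:
            zbuf[v, u] = Z              # keep only the nearest surface point
            image[v, u] = color
    return image
```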
  • An exemplary embodiment of the present invention registers all video frames to the model so that images from several cameras at the same time instant can be projected onto the model, like flashlights illuminating the scene, which may then be rendered for any user-selected viewpoint.
  • it may become easy to interpret the imagery and the dynamic events taking place in all streams at once.
  • the three dimensional model may be constructed to be devoid of moving objects and/or objects that are difficult to model onto polyhedra. These objects may appear in the video flashlights. In a virtual imaging system of this design, it may be desirable to make control of the video flashlight(s) responsive to movement of the virtual viewpoint(s).
  • Such computer-readable media include: integrated circuits, magnetic and optical storage media, as well as audio frequency, radio frequency, and optical carrier waves.

Abstract

The present invention is embodied in a video flashlight method. This method creates virtual images of a scene using a dynamically updated three-dimensional model of the scene and at least one video sequence of images. An estimate of the camera pose is generated by comparing a present image to the three-dimensional model. Next, relevant features of the model are selected based on the estimated pose. The relevant features are then virtually projected onto the estimated pose and matched to features of the image. Matching errors are measured between the relevant features of the virtual projection and the features of the image. The estimated pose is then updated to reduce these matching errors. The model is also refined with updated information from the image. Meanwhile, a viewpoint for a virtual image is selected. The virtual image is then created by projecting the dynamically updated three-dimensional model onto the selected virtual viewpoint.

Description

A METHOD OF POSE ESTIMATION AND MODEL REFINEMENT FOR VIDEO REPRESENTATION OF A THREE DIMENSIONAL SCENE
[0001] This application claims the benefit of the filing date of the Provisional application 60/187,557 filed March 7, 2000, the contents of which are incorporated herein by reference.
[0002] The present invention is directed toward the domain of image processing, in particular toward the creation and manipulation of three dimensional scene models and virtual images of the scene seen from arbitrary viewpoints.
BACKGROUND OF THE INVENTION
[0003] Tremendous progress in the computational capability of integrated electronics and increasing sophistication in the algorithms for smart video processing has led to special effects wizardry, which creates spectacular images and otherworldly fantasies. It is also bringing advanced video and image analysis applications into the mainstream. Furthermore, video cameras are becoming ubiquitous. Video CMOS cameras costing only a few dollars are already being built into cars, portable computers and even toys. Cameras are being embedded everywhere, in all variety of products and systems just as microprocessors are.
[0004] At the same time, increasing bandwidth on the Internet and other delivery media has brought widespread use of camera systems to provide live video imagery of remote locations. In order to provide coverage of live video imagery of a remote site, it is often desirable to create representations of the environment to allow realistic viewer movement through the site. The environment consists of static parts (buildings, roads, trees, etc.) and dynamic parts (people, cars, etc.). The geometry of the static parts of the environment can be modeled offline using a number of well-established techniques. None of these techniques has yet provided a completely automatic solution for modeling relatively complex environments, but because the static parts do not change, offline, non-real-time, interactive modeling may suffice for some applications. A number of commercially available systems (GLMX, PhotoModeler, etc.) provide interactive tools for modeling environments and objects.
[0005] For arbitrary 3D scenes, various modeling approaches have been proposed, such as image-based rendering, light fields, volume rendering, and superquadrics plus shape variations. For general modeling of static scenes, site models are known to provide a viable option. In the traditional graphics pipeline based rendering, scene and object models stored as polygonal models and scene graphs are rendered using Z-buffering and texture mapping. The complexity of such rendering is dependent on the complexity of the scene. Standard graphics pipeline hardware has been optimized for high performance rendering.
[0006] However, current site models do not include appearance representations that capture the changing appearance of the scene. Also these methods generally seek to create a high quality model from scratch, which in practice necessitates constrained motion, instrumented cameras, or interactive techniques to obtain accurate pose and structure. The pose of the camera defines both its position and orientation. The dynamic components of a scene cannot, by definition, be modeled once and for all. Even for the static parts, the appearance of the scene changes due to varying illumination and shadows, and through modifications to the environment. For maintaining up-to-date appearance of the static parts of the scene, videos provide a cost-effective and viable source of current information about the scene, but unless the cameras are fixed, the issue of obtaining accurate pose information remains.
[0007] Between a pair of real cameras, virtual viewpoints may be created by tweening images from the two nearest cameras. Optical flow methods are commonly used by themselves to create tweened images. Unfortunately, the use of only traditional optical flow methods can lead to several problems in creating a tweened image. Particularly difficult are the resolution of large motions, especially of thin structures, for example the swing of a baseball bat; and occlusion/deocclusions, for example between a person's hands and body. The body of work on structure from motion may be pertinent to 3D-scene modeling. Purely image-driven methods, however, tend to drift away from metric accuracy over extended image sequences, because there is no constraint to tie down the estimated structure to the coordinate system of the real world. That constraint must come from physical measurements like GPS, or surveyed landmarks, or from a prior scene model with shape and/or texture information. The problem of loss of metric accuracy is particularly acute for analysis and control of a remote scene in which use of such constraint indicia is not practical, for example to control a remote vehicle on Mars or underground.
SUMMARY OF THE INVENTION
[0009] The present invention is embodied in a method of accurately estimating a pose of a camera within a scene using a three dimensional model of the scene. The exemplary method begins by generating an initial estimate of the camera pose. Next, a set of relevant features of the three-dimensional model based on the estimate of the pose is selected. A virtual projection of this set of relevant features as seen from the estimated pose is then created. The virtual projection of the set of relevant features is matched to features of the received image and matching errors between the features of the image and the features of the projection are measured. The estimate of the pose is then updated to reduce the matching errors.
[0010] A second embodiment of the present invention is a method of refining a three dimensional model of a scene using an image of the scene taken by a camera, which may have an unknown pose. First the image is compared to the three dimensional model of the scene to generate an estimate of the camera pose. The three dimensional model of the scene is then updated based on data from the image and the estimated pose.
[0011] Another embodiment of the present invention is a method of accurately estimating a position of a remote vehicle using a three dimensional model and an image from a camera having a known orientation relative to the remote vehicle. The image is compared to the three dimensional model of the scene to generate an initial estimate of the pose. A set of relevant features of the three dimensional model are selected and matched to features of the image. Matching errors are then measured and used to update the estimate of the pose. The position of the remote vehicle is determined using the estimated pose and the orientation of the camera. The three dimensional model may then be updated based on data from the image and the estimate of the pose.
[0012] Yet another embodiment of the present invention is a method of refining a three dimensional model of a scene containing an object using a plurality of images of the scene. First, a first image and a second image are compared to the three dimensional model of the scene to generate first and second pose estimates. Then, the first image and the second image are compared to one another to generate relative pose constraints. The first pose and the second pose are updated based on these relative pose constraints. Two sets of relevant features of the three-dimensional model corresponding to the first pose and the second pose are selected. These sets of relevant features are then matched to features in the first and second images, and two sets of matching errors are measured. The position estimate of the object within the three-dimensional model is then updated to reduce the two sets of matching errors.
[0013] The present invention is also embodied in a method of creating a textured three-dimensional model of a scene using a plurality of images of the scene. First, a polyhedral model of the scene including a plurality of polygonal surfaces is created. Each polygonal surface in the polyhedral model is larger than a predetermined size. One polygonal surface is then separated into a plurality of portions. For a selected portion, a subset of images containing that portion of the polygonal surface is identified. The corresponding section of the selected image is then projected onto the selected portion of the polygonal surface in the textured three-dimensional model as a local color map.
[0014] The present invention is also embodied in a video flashlight method of creating a dynamic sequence of virtual images of a scene using a dynamically updated three dimensional model of the scene. The three dimensional model is updated using a video sequence of images of the scene as described with regard to the second embodiment of the present invention. Meanwhile a viewpoint of a virtual image is selected. The virtual image is then created by projecting the dynamic three-dimensional model onto the virtual viewpoint.
BRIEF DESCRIPTION OF THE FIGURES
[0015] Figure 1 is a top plan drawing of several cameras viewing a scene.
[0016] Figure 2 is a perspective drawing of a polyhedral model of the scene in Figure 1.
[0017] Figure 3 is a drawing illustrating selection of relevant features from the polyhedral model of Figure 2.
[0018] Figure 4 is a flowchart illustrating an embodiment of the present invention to accurately estimate the pose of an image of a scene using a three dimensional model of the scene.
[0019] Figure 5 is a flowchart illustrating an embodiment of the present invention to update a three dimensional model of a scene using an image of the scene with an unknown pose.
[0020] Figure 6 is a perspective drawing illustrating the use of epipolar geometry to determine the relative pose of two images.
[0021] Figure 7 is a flowchart illustrating an embodiment of the present invention to update a three dimensional model of a scene using a pair of images of the scene with unknown poses.
[0022] Figure 8 is a top plan drawing illustrating a remote vehicle with a mounted camera viewing the scene from Figure 1.
[0023] Figure 9 is a flowchart illustrating an embodiment of the present invention to accurately estimate the position of a remote vehicle within a scene using a three dimensional model of the scene and an image of the scene.
[0024] Figure 10 is a perspective drawing of a surface from Figure 2 illustrating the separation of the surface into approximately equal sized portions used in an exemplary hybrid three dimensional model according to the present invention.
[0025] Figure 11 is a flowchart illustrating an embodiment of the present invention to create a hybrid three-dimensional model of a scene.
[0026] Figure 12 is a perspective drawing of a scene illustrating an exemplary video flashlight method of the present invention.
[0027] Figure 13 is a flowchart illustrating an embodiment of the present invention to create virtual images of a scene and a dynamic three-dimensional model of the scene using a video flashlight method.
DETAILED DESCRIPTION OF THE INVENTION
[0028] The present invention embodies systems and methods for processing one or more video streams, for the purposes of progressively constructing and refining a 3D- scene model, geolocating objects, and enhanced visualization of dynamic scenes. One embodiment of the present invention is a progressive strategy to model construction and refinement, starting with a coarse model and incrementally refining it with information derived from freely moving video cameras, thereby increasing the spatial fidelity and temporal currentness of the model.
[0029] Figure 1 is a top plan drawing showing several cameras 108, which may be stationary or may be in motion. The cameras view an exemplary scene 100, containing three objects, cube 102, star prism 104, and cylinder 106.
[0030] Figure 2 is a perspective drawing of an exemplary polyhedral model of scene 100 from Figure 1. All objects in this model are represented as polyhedra composed of planar polygonal surfaces 200, even a curved object, such as cylinder 106. To accomplish such a polyhedral representation, the modeled representation of cylinder 106 contains artificial edge boundaries 204. Also shown in Figure 2 are dotted lines 202, which represent edge boundaries that are hidden in the illustrated view of the model. While this model is shown as an image of a physical scene, it is contemplated that the model may be a mathematical representation of the scene, stored in a data file.
[0031] A first exemplary embodiment of the present invention is a method of automatic pose estimation and registration of video to an existing polyhedral scene model, which may or may not have texture information suitable for registration. The method of this embodiment may be used to estimate camera pose with respect to textured or untextured models, by optimizing the fit between line segments projected from model to image and the gradients of image intensity. This method may be combined with interframe prediction to continuously track a video stream.
[0032] Figure 4 is a flowchart illustrating this exemplary embodiment of the present invention. The method begins with a three dimensional model of the scene, 400, such as that shown in Figure 2. The first step, 402, is to generate an initial, rough estimate of the pose of an image by comparing the image to the three dimensional model of the scene. Numerous methods may be used to obtain this initial estimate. In an interactive system, a user can set the pose for the first frame. Physical measurements from position/attitude sensors mounted on the camera platform may be another source. Alternatively, when processing a sequence of frames, the estimated pose of the previous frame may be used as the initial pose, or the pose can be predicted from the previous frame's estimated pose based on assumed motion. Interframe alignment may also be used to assist with prediction and with overcoming ambiguity. One method for determining an initial pose estimate may be to extract corresponding 2D feature points from a pair of successive images using an optical flow technique. In order to support large displacements, flow estimation may be initialized with the 2D projective motion field estimated by a global parametric motion algorithm. Methods for flow estimation of this type are described in U.S. Patent number 5,629,988, entitled "SYSTEM AND METHOD FOR ELECTRONIC IMAGE STABILIZATION". Next, each selected feature point is mapped into its corresponding 3D point in the world view by using the current estimated pose and the current shape model. The initial pose estimate is then determined from the set of 3D and 2D correspondences. This may be done, for example, using the Random Sample Consensus (RANSAC) method of robust least squares fitting. Even though the current pose and shape model may include significant errors, these errors should be reduced in the process of refining the image to model alignment.
[0033] Next a set of relevant features is selected from the three dimensional model based on the estimate of the pose, step 404. The existing model may be considered a collection of untextured polygonal faces. Face edges in the model imply discontinuities in surface normal and/or material properties in the actual 3D scene, which generally induce brightness edges in an image. Although the subject invention is described in terms of brightness edges, it is contemplated that other types of edges may be used such as those defined by changes in color or texture on an appropriately colored and/or textured model. Because they are relatively easy to identify, an exemplary alignment method selects 3D line segments from the model as relevant features.
[0034] Figure 3 is a drawing illustrating selection of relevant features 302 from the polyhedral model of Figure 2. Figure 3 is drawn to have the same pose as that used for the perspective drawing of Figure 2. Given an initial pose estimate, occluded fragments of model features, dotted lines 202 in Figure 2, may be deleted by known methods, such as a Z-buffer hidden line removal process.
[0035] Because many models of objects, such as buildings, may be constructed for various applications besides pose estimation (e.g. architecture), many model edges will not induce brightness edges in the image. Even after occluded edge fragments are suppressed, a model may contain clutter such as edges occluded by unmodeled objects, edges of structures that are semantically but not geometrically significant, and edges between faces meeting at a shallow angle. Pose estimation may be adversely affected if too much model clutter happens to fall close to image edges. Therefore, it is important to cull the edges of the polyhedral model to keep those most likely to induce brightness edges.
[0036] Additionally, multiple local minima in pose estimation may be caused by closely spaced edges in either the model or the image, even if they correspond to brightness edges in the image. Therefore, once the possible relevant features of the model have been identified, a projection of these features as they appear in the estimated pose is created, step 406 in Figure 4, and a projected model feature is culled if any similarly oriented projected model edge lies near it. Projected model features may also be culled if image texture analysis reveals the presence of multiple image edges within a predetermined distance. This culling may be dynamically adapted to the current level of uncertainty, so that as the pose becomes better known, more edges can be included for increased pose estimation accuracy.
[0037] Application-dependent heuristics for this culling may be applied, for example: keeping only model edges common to exactly two faces, keeping edges only at large dihedral angles, ignoring edges which are too close together, and ignoring edges near and on the ground (terrain faces may be annotated in the given model). In Figure 3, the dotted lines 300 represent features which have been ignored in this exemplary pose as too close to other lines. Also missing are the artificial edge boundaries 204 from the model shown in Figure 2, as their dihedral angles have been determined to be too small.
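The culling heuristics of paragraphs [0035]-[0037] might be sketched as follows. The edge-record fields, the thresholds, and the parallel-proximity test are illustrative assumptions of this sketch, not values taken from the patent.

```python
import numpy as np

def cull_model_edges(edges, min_dihedral_deg=30.0, min_separation_px=8.0):
    """Keep only model edges likely to induce brightness edges in the image.

    Each entry of `edges` is assumed to be a dict with:
      n_faces      : number of faces sharing the edge
      dihedral_deg : angle between those faces
      p0, p1       : projected 2D endpoints under the current pose estimate
    """
    # Heuristic 1: keep edges common to exactly two faces at a large dihedral angle.
    kept = [e for e in edges
            if e["n_faces"] == 2 and e["dihedral_deg"] >= min_dihedral_deg]

    def direction(e):
        d = np.asarray(e["p1"], float) - np.asarray(e["p0"], float)
        return d / (np.linalg.norm(d) + 1e-9)

    def midpoint(e):
        return 0.5 * (np.asarray(e["p0"], float) + np.asarray(e["p1"], float))

    # Heuristic 2: drop an edge if a similarly oriented projected edge lies nearby,
    # since closely spaced parallel edges create local minima in pose estimation.
    result = []
    for i, e in enumerate(kept):
        crowded = any(
            j != i
            and abs(float(np.dot(direction(e), direction(o)))) > 0.97  # nearly parallel
            and np.linalg.norm(midpoint(e) - midpoint(o)) < min_separation_px
            for j, o in enumerate(kept))
        if not crowded:
            result.append(e)
    return result
```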
[0038] While the pose, and hence occlusions, may change during the optimization process, it may suffice to cull model edges once when initializing the optimization and afterwards only if the pose changes significantly. This determination, based on the change in the estimated pose in the exemplary method of Figure 4, is shown as step 418. If the pose of the camera has changed by more than a threshold amount since the features were selected from the model, step 418 jumps to step 404 to select a new set of features. The process of pose refinement desirably leads toward smaller changes in the estimated pose. Therefore, the estimate of the pose will often quickly become approximately correct. In this instance, step 418 branches to step 406 and the same set of relevant features will be used throughout the remaining pose refinement.
[0039] It is noted that, though the embodiments of the present invention mostly disclose the measurement and reduction of matching errors, one of skill in the art may easily practice the present invention by, instead, measuring and increasing matching scores without substantially altering the invention.
[0040] Next, the virtual projection of the relevant features is matched to brightness features in the image, step 408. Additionally, color and/or texture edges may be used to generate image features, not just brightness edges. Matching errors are generated to indicate the goodness of the fit, step 410. The estimated pose is then updated based on these matching errors, step 412. The matching error calculation may be accomplished by many methods, including a number of image processing methods used for image warping and mosaic construction.
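One simple way to turn step 410 into a number is to sample an edge-strength image along each projected model segment and penalize segments that do not lie on strong image edges. The specific form below is an assumption of this sketch; the patent leaves the exact measure open.

```python
import numpy as np

def line_matching_error(energy_image, p0, p1, samples=20):
    """Matching error for one projected model line segment (step 410).

    energy_image : 2D array of edge strength, normalised to [0, 1]
                   (e.g. gradient magnitude or the oriented energy of [0048]).
    p0, p1       : projected 2D endpoints of the model segment.
    Returns a value that is small when the segment lies on strong image edges.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = np.linspace(0.0, 1.0, samples)
    pts = p0[None, :] * (1.0 - t[:, None]) + p1[None, :] * t[:, None]
    h, w = energy_image.shape
    xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
    yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
    return float(np.mean(1.0 - energy_image[yi, xi]))
```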
[0041] In the case of an already-textured model, constructed from a previous set of images, the matching of image intensity patterns in the image to the textured model may be accomplished following the teachings of U.S. Patent Application number 09/075462, entitled "METHOD AND APPARATUS FOR PERFORMING GEO-SPATIAL REGISTRATION" and U.S. Provisional Application number 60/141460, entitled "GEO- SPATIAL REGISTRATION USING EUCLIDEAN REPRESENTATION" .
[0042] One difficulty that may arise is that an image may have brightness edges that are close together, causing local minima in the objective function. Also, if the given polyhedral model does not agree with veridical scene structure, no pose can simultaneously align all edge features well. Projected model features may get stuck at the wrong image edge, preventing progress towards the correct pose.
[0043] Several approaches to mitigate this problem exist. One exemplary method is to use any of a number of discrete search schemes. Perturbing the pose estimate, either at the initial estimate, step 402, or during refinement, step 412, is one method that may increase the likelihood of finding the global minimum. These perturbations of the pose estimate may be generated by explicitly perturbing the pose parameters. Alternatively, they may be generated in a data-driven fashion, as in RANSAC, wherein pose hypotheses are estimated from many randomly chosen subsets of the model features.
[0044] For video sequences, interframe registration may be used to yield constraints on relative poses of neighboring frames, which, in turn, may allow poses of a collection of frames to be determined unambiguously. If a set of nearby frames collectively views a larger set of edges than any single frame, simultaneously exploiting image-to-model constraints in multiple frames may help to disambiguate pose. An exemplary embodiment of the present invention aligns the three dimensional model to a mosaic, virtual wide-angle image, from a virtual rigid rig of cameras. Suppose that a pose estimate has been predicted for one frame, but that this pose estimate is ambiguous due to a paucity of relevant model edges. For each of the remaining frames, interframe prediction may be used to obtain pose estimates. These pose estimates may then be treated as a virtual rigid rig of cameras. The total image-to-model error of the mosaic, virtual wide-angle image, created from these frames is then minimized. This method is particularly useful for scenes in which the features are relatively static.
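The virtual-rig idea can be expressed as a single objective summed over the frames, as in the sketch below. The frame records and the callables `compose` and `frame_error` are assumptions standing in for interframe prediction and the single-frame matching error of step 410.

```python
def rig_matching_error(frames, compose, frame_error):
    """Total image-to-model error of a virtual rigid rig of cameras.

    frames      : list of dicts with an "image" and a "rel_pose" (pose relative
                  to the reference frame, from interframe prediction).
    compose     : compose(ref_pose, rel_pose) -> absolute pose of a frame.
    frame_error : frame_error(image, pose) -> scalar image-to-model error.
    Minimising the returned function over ref_pose pools the edge constraints
    of all frames, which can disambiguate a pose no single frame pins down.
    """
    def total_error(ref_pose):
        return sum(frame_error(f["image"], compose(ref_pose, f["rel_pose"]))
                   for f in frames)
    return total_error
```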
[0045] Mismatched features may also occur because the attributes being measured are not sufficiently discriminating. For example, a projected model feature may be matched equally well to an image edge of equal length, an image edge of longer length, or a series of short image edges. Many different methods for matching image features tuned to scale and orientation are known in the art. One method that may be used is pyramid processing. Both the image and the model may be processed into a pyramid of resolution levels. As the estimated pose becomes more accurate (i.e. the matching errors decrease), the resolution level, and number of features retained, may be increased.
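A coarse-to-fine schedule of that kind might look like the following sketch, where the callable `refine` stands for one pass of steps 406 through 412 at a single resolution and is an assumed helper rather than part of the patent.

```python
import cv2

def coarse_to_fine_pose(image, model, pose, refine, levels=4):
    """Refine the pose from the coarsest pyramid level to the finest.

    refine(image_level, model, pose, scale) is assumed to select features,
    project, match and update the pose at one resolution; finer levels can
    retain more features as the pose estimate improves.
    """
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))          # half resolution per level
    for lvl, img in reversed(list(enumerate(pyramid))):   # coarsest level first
        pose = refine(img, model, pose, scale=2.0 ** -lvl)
    return pose
```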
[0046] The matching errors may also be optimized with respect to the estimated pose via steepest descent. The natural representation of an infinitesimal change in the pose parameters is a translation together with a cross product for a small rotation; however, unweighted steepest descent tends to work slowly when the energy surface is highly anisotropic, and may get stuck in local minima. To solve this problem, the current pose estimate may be iteratively incremented by moving in the gradient direction in the space of an orthogonalized incremental pose parameter. In that space, a unit step in any direction causes the RMS change in line segment points of image features to be 1 pixel.
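The orthogonalized step can be read as a whitened gradient step: the Jacobian of the projected line-segment points with respect to the six incremental pose parameters defines the metric in which a unit step moves those points by roughly one pixel RMS. The particular whitening below is this sketch's assumption about how such an orthogonalization might be realized, not the patent's formula.

```python
import numpy as np

def orthogonalized_step(grad, J, step_size=1.0):
    """Steepest-descent step in an orthogonalized incremental-pose space.

    grad : (6,) gradient of the matching error in the raw incremental parameters
           (3 translation + 3 small-rotation components).
    J    : (M, 6) Jacobian of the sampled line-segment image points with respect
           to those parameters.
    Whitening by (J^T J / M)^(-1/2) makes a unit step correspond to about 1 pixel
    RMS motion of the projected points, taming anisotropic energy surfaces.
    """
    H = J.T @ J / J.shape[0]
    w, V = np.linalg.eigh(H)
    W = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-9))) @ V.T   # H^(-1/2)
    return -step_size * (W @ (W @ grad))    # gradient step taken in whitened space
```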
[0047] One of the properties that may be desirable for the matching errors is a degree of robustness against outliers. Desirably this robustness includes both poorly fitting lines and model clutter, i.e. a spurious model feature that has no true matching image feature. If the projected feature lies far from any image feature, it contributes little to the calculation of the gradient of estimated pose. To reduce the chance that a projected model line segment, whether an artifact of model clutter or a valid feature, would be attracted to a falsely matching image feature, dissimilarly oriented model and image edges may be ignored by using an oriented energy image tuned to the angle of the model line.
[0048] Many different methods for generating a measure of local edge strength within an image tuned to scale and orientation are known. In an embodiment of the present invention, an oriented energy image including the oriented energy at a selected angle may be computed as follows. The source image is differentiated in the selected direction and smoothed in a perpendicular direction. The magnitude of the latter is smoothed in the selected direction. This method responds to both step edges and impulses. For computational efficiency, the orientation may be quantized to multiples of 45° and scaled to powers of 2, as in a pyramid.
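A direct, if not the most efficient, reading of that recipe is sketched below with scipy. Rotating the image is used here only to express 'along' and 'perpendicular to' the selected direction as array axes; the patent instead suggests quantizing the orientation to multiples of 45° and scaling in a pyramid for efficiency.

```python
import numpy as np
from scipy import ndimage

def oriented_energy(image, angle_deg, sigma=2.0):
    """Oriented energy tuned to angle_deg, following the recipe of [0048]."""
    rot = ndimage.rotate(image.astype(float), -angle_deg, reshape=False, order=1)
    d = ndimage.gaussian_filter1d(rot, sigma, axis=1, order=1)        # differentiate along the direction
    d = ndimage.gaussian_filter1d(d, sigma, axis=0, order=0)          # smooth perpendicular to it
    e = ndimage.gaussian_filter1d(np.abs(d), sigma, axis=1, order=0)  # smooth magnitude along the direction
    return ndimage.rotate(e, angle_deg, reshape=False, order=1)       # rotate back to image axes
```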
[0049] Unfortunately, the robustness of this method may mean that, if the error in the initial estimate of the pose causes many model line segments to be projected far from their corresponding image features, the steepest descent may not move towards the true pose. An exemplary method to increase the capture range and speed convergence is to vary the scale of the energy image from coarse to fine resolution during the optimization process. The scale may desirably be tuned for each model line, commensurate with the expected amount of uncertainty in the location of the predicted feature with respect to the actual corresponding image edge.
[0050] It has been found that the capture range of coarse-to-fine pose estimation may be reasonably large for low complexity images, and is limited only by the presence of multiple local minima in the matching errors when the image contains many edges. Thus, it may be desirable to exploit interframe constraints to ensure stable tracking over a sequence, such as by using prediction to initialize the estimated pose in the correct minimum basin, or by multiframe alignment. Intelligent selection of a subset of model features is also critical.
[0051] Also, the orientation tuning may be very broad in the oriented energy pyramid described above. The solution may be to add more discriminating feature attributes to the representation. The measure of the matching errors may be improved, for example, by using sharper orientation tuning.
[0052] Alternatively, in an outdoor site modeling application, the camera may only see a small part of the model at a time. Thus, there may be individual images that contain little texture or few model edges, so that matching features may be difficult and the pose may be undetermined. One method to overcome this difficulty is to combine several images to form a mosaic that is warped to the coordinate system of the current image. This composite image may be used in the algorithm, in place of the individual image, to determine the pose of the camera at the position of the current image.
[0053] The process of pose refinement shown in steps 406, 408, 410, and 412 may be repeated until a set number of iterations have occurred, step 414, or until the matching errors fall below a set criterion, step 416 (alternatively, the matching score, if used, may exceed a predetermined level).
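Steps 404 through 418 can be summarized as the loop below. All of the helper callables are assumptions standing in for the individual steps of Figure 4, and the tolerances are illustrative.

```python
def refine_pose(model, image, pose,
                select_features, project_features, match_and_measure,
                update_pose, pose_distance,
                max_iters=50, error_tol=1.0, reselect_tol=0.1):
    """Pose refinement loop of Figure 4 (steps 404-418), as a sketch.

    select_features   : step 404, choose relevant model features for a pose.
    project_features  : step 406, project them under the current pose.
    match_and_measure : steps 408/410, return a list of matching errors.
    update_pose       : step 412, move the pose to reduce those errors.
    pose_distance     : scalar measure of how far the pose has moved.
    """
    features = select_features(model, pose)                  # step 404
    pose_at_selection = pose
    for _ in range(max_iters):                               # step 414
        projected = project_features(features, pose)         # step 406
        errors = match_and_measure(projected, image)         # steps 408, 410
        pose = update_pose(pose, errors)                     # step 412
        if sum(errors) < error_tol:                          # step 416
            break
        if pose_distance(pose, pose_at_selection) > reselect_tol:   # step 418
            features = select_features(model, pose)          # re-select relevant features
            pose_at_selection = pose
    return pose
```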
[0054] Figure 5 is a flowchart illustrating an embodiment of the present invention to update a three dimensional model of a scene using an image of the scene with an unknown pose. This method begins with several steps that are similar to those described above with regard to Figure 4. As in the exemplary embodiments described with regard to Figure 4, the embodiment illustrated in Figure 5 begins with a three dimensional model of the scene, 500. This starting model may be very rudimentary and limited in its scope. As images are aligned and information from these images is used to update the model, the detail and range of the model may both be increased.
[0055] Still following the outline of Figure 4, the image is compared to the three dimensional model of the scene to generate an initial estimate of the pose, step 502. A set of relevant features is selected from the three dimensional model based on this pose estimate, step 504. The relevant features of the model are then projected and matched to features in the image to generate matching errors, step 506. Next, the estimated pose is updated based on the matching errors, step 508. Once again, as previously described, the pose refinement process, shown in Figure 5 as steps 504, 506, and 508, may be repeated until a set number of iterations have occurred, step 510, or until the matching errors fall below a set criterion, step 512 (alternatively, if an embodiment employing a matching score is used, step 512 may query whether the matching score exceeds a predetermined level). Step 514 determines whether the pose estimate indicates that the pose of the camera has changed so much from the initial pose in which the current relevant features were selected that it may be desirable to reselect a new set of relevant model features to be used for continued pose refinement.
[0056] Once the pose estimate is completed, as evidenced by a 'yes' answer to the query of either step 510 or step 512, the data from the image is used to update the three dimensional model, step 516. This update may include: the refinement of object placement within the scene; the addition of textural or fine structural detail to surfaces of objects; temporal updating of dynamic processes, such as changing illumination, or objects in motion; and the spatial extension of the scene being modeled. A number of methods to project image data onto a corresponding three dimensional model, such as a simple ray tracing method, are known in the art and may be used in step 516. For many of these methods it is desirable to use two or more images simultaneously, particularly to assist with refining object placement and identifying objects in motion. Figure 6 is a perspective drawing illustrating a plane in the scene, 600, and the poses of two images, 602 and 604, of the scene, both of which include object 606. Significant work has been done using the epipolar geometry shown in Figure 6 for image processing and mosaic construction. In this Figure, the epipole is shown as the line O - O'. The epipolar geometry of a scene may also be used to place constraints on the relative pose of images 602 and 604 even if the location of object 606 within the scene is not known. Additionally, plane plus parallax methods may be used to generate relative pose constraints between images. A detailed discussion of some of these methods is provided in U.S. Patent number 6,192,145, entitled "METHOD AND APPARATUS FOR THREE-DIMENSIONAL SCENE PROCESSING USING PARALLAX GEOMETRY OF PAIRS OF POINTS".
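For example, a relative pose constraint between two images can be recovered from point matches alone via epipolar geometry, as sketched below. The use of OpenCV's findEssentialMat and recoverPose, and the assumption of a shared, known intrinsic matrix K, are choices of this sketch rather than the patent's method; the recovered translation is only defined up to scale.

```python
import cv2
import numpy as np

def relative_pose_constraint(pts1, pts2, K):
    """Relative rotation and (up-to-scale) translation between two views.

    pts1, pts2 : (N, 2) corresponding image points in images 602 and 604.
    K          : shared 3x3 intrinsic matrix.
    The result constrains the two pose estimates even when the 3D location
    of the observed object (606) is unknown.
    """
    pts1 = np.asarray(pts1, np.float64)
    pts2 = np.asarray(pts2, np.float64)
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t, inlier_mask
```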
[0058] Figure 7 is a flowchart illustrating another exemplary embodiment of the present invention. This exemplary method is similar to that previously described with regard to Figure 5, but uses multiple images of the scene with unknown poses to update a three dimensional model of a scene. Although the exemplary method shown in Figure 7 only employs two images at a time, it is contemplated that three or more images may be simultaneously used in this embodiment. The images may be successive images from a moving source, or they may be concurrently generated images from multiple sources. The parallel paths in Figure 7, such as steps 702 and 704, represent parallel processes, which may occur as multi-threaded tasks within the image processing hardware of the present invention. It may be desirable for these tasks to be performed concurrently by parallel processors to increase processing speed.
[0059] As in previously described embodiments of the present invention, this embodiment begins with a three dimensional model of a scene, 700, and the first and second images are compared to the three dimensional model of the scene, steps 702 and 704 respectively, to generate pose estimates. These pose estimates may be initial pose estimates, or may be refined as described in previous embodiments of the present invention. If the pose estimates are refined, it may be desirable for the matching error criteria to be less precise than in previous embodiments.
[0060] Next, the first and second images are compared to one another, step 706, to generate constraints on the relative poses of the two images. A number of methods exist to generate relative pose constraints, as described above with respect to Figure 6, such as optical flow, RANSAC, epipolar geometry calculations, and plane plus parallax. Relative pose constraints can also be generated from the known motion of a single camera, or from known spatial relationships between multiple cameras. The pose estimates are then updated to accommodate the relative pose constraints, step 708. While it is often desirable to minimize the individual image to model mismatches when the pose estimates are being updated, this update should result in pose estimates that fit within the relative pose constraints generated in step 706.
[0061] Next, sets of relevant features from the three dimensional model, corresponding to the first image and the second image, are selected based on the updated poses, steps 710 and 712 respectively. These sets of relevant features may be selected by any of the previously disclosed methods. The corresponding set of relevant features is next projected to the estimated pose of the first image and matched to the first image, and first image matching errors are measured, step 714. The same procedure is followed for the second image in step 716.
[0062] The three dimensional model is updated using the matching errors from both images to improve the position estimate of objects within the model, step 718. The matching errors are jointly minimized with respect to the position estimate for both images to refine the placement of known objects in the scene model. Further iterations may be used to improve the model refinement, and possibly the estimated camera poses as well. These iterations may start by once again estimating the image poses, steps 702 and 704, or with the selection of sets of relevant model features, steps 710 and 712.
[0063] The use of more than two images at a time may also increase the accuracy of object placement. To help overcome errors in estimating object placement when the estimates of camera pose are inaccurate, or vice versa, parameters may be estimated jointly in multiple frames, akin to bundle adjustment in structure from motion. For further stability, inconsistency of poses with respect to interframe alignments should also be penalized, e.g. uncertainties in the relative pose constraints of corresponding points in frame pairs may be used to weight the importance of matching errors for various images.
[0064] This is a 3D generalization of the method disclosed in U.S. Patent number
6,078,701, entitled "METHOD AND APPARATUS FOR PERFORMING LOCAL TO GLOBAL MULTI-FRAME ALIGNMENT TO CONSTRUCT MOSAIC IMAGES".
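The joint, bundle-adjustment-like refinement described above can be posed as one nonlinear least-squares problem over all camera poses and the object placement, with the interframe terms weighted by their uncertainty. The helper callables in the sketch below (pack/unpack, model_residuals, interframe_residuals) are assumptions of this sketch; none of the names come from the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_refinement(poses0, placement0, pack, unpack,
                     model_residuals, interframe_residuals,
                     interframe_weight=1.0):
    """Jointly refine camera poses and object placement over several frames.

    pack/unpack          : convert between (poses, placement) and a flat vector.
    model_residuals      : 1-D array of image-to-model matching errors, all frames.
    interframe_residuals : 1-D array penalising poses that disagree with the
                           interframe alignments (the weight encodes uncertainty).
    """
    def cost(x):
        poses, placement = unpack(x)
        r_model = model_residuals(poses, placement)
        r_pair = interframe_weight * interframe_residuals(poses)
        return np.concatenate([r_model, r_pair])

    result = least_squares(cost, pack(poses0, placement0))
    return unpack(result.x)
```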
[0065] In addition, refinement of object placement may be extended to more general deformations by using an additional set of parameters to describe known sizes and relationships of parts of objects.
[0066] One possible use of the present invention is illustrated in Figures 8 and 9.
This application of the present invention involves using current video from a camera mounted on a remote vehicle to locate the vehicle relative to objects in a three dimensional model. An additional application for the present invention may be to extend and refine a three dimensional model of an area as the area is explored by the remote vehicle possibly beyond the range of the scene originally covered by the model. Numerous applications in robotics exist for such methods. These methods may be particularly useful for the exploration of places in which other location techniques, such as GPS, are not practical, including underground, deep-sea, and interplanetary exploration.
[0067] Figure 8 is a top plan drawing of remote vehicle 800, with mounted camera
802, viewing the scene from Figure 1. In this exemplary sketch, the camera is turned so that at least a portion of each object in the scene, cube 102, star prism 104, and cylinder 106, is visible (shown as surfaces 804). Although both the remote vehicle and the camera may be independently moveable, the position and angular orientation of camera 802 relative to remote vehicle 800 are assumed to be known, or at least measurable.
[0068] Figure 9 is a flowchart illustrating an exemplary embodiment of the present invention to accurately estimate the position of a remote vehicle within a scene using a three dimensional model of the scene and an image of the scene. As previously described, the method begins with a three dimensional model and an image from a camera having an orientation, relative to the remote vehicle, that can be determined, 900. The next seven steps, 902, 904, 906, 908, 910, 912, and 914, involve estimating and refining the camera pose with respect to the three dimensional model. These steps are the same as steps 502, 504, 506, 508, 510, 512, and 514 in the flowchart of Figure 5.
[0069] Once the pose of the camera mounted on the vehicle has been accurately estimated with respect to the scene, the position and orientation of the remote vehicle relative to the scene in the three dimensional model may be determined, step 916. Because the mounted camera may be adjusted to rotate, tilt or zoom, parameters controlling these functions may need to be examined to determine the orientation of the camera relative to the remote vehicle.
[0070] The pose of the camera may be used independently of the position and orientation of the vehicle to update the model. At step 918, the three dimensional model is updated based on data from the image and its estimated pose, similarly to step 516 in Figure 5. Finally, the camera orientation, and possibly its position, relative to the remote vehicle are determined, step 920, to fix the position of the vehicle. The process begins again at step 902 when another image is taken.
[0071] In addition to the large-scale model refinement of object placement previously discussed with regard to Figure 7, an exemplary embodiment of the present invention may be used for detailed refinement of a three dimensional model, including definition of local surface shape and texture. The true shape of the surfaces in the scene may differ from the planar surfaces of the given polyhedral model, even after refinement of object placement. For example, between the time of model construction and acquisition of the current video, the existing objects may have changed, and objects may have been added or deleted. Some scene objects, especially natural objects like trees, may be absent from the model, or unsatisfactorily represented, because they are hard to represent using simple polyhedra. This can be clearly seen for the polyhedral model of cylinder 106 in Figure 2.
[0072] In another exemplary embodiment of the present invention, an untextured model, such as the previously mentioned polyhedral model, may be populated with pixels from a series of images of the scene, such as video sequences, using the estimated camera poses and corresponding estimated object placements. The brightness of an object in the scene may be approximated as being constant, independent of the viewpoint, and each surface pixel may be assigned a color value that is most typical of all the given images.
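As a small illustration of assigning each surface pixel its 'most typical' value, the sketch below takes a per-channel median over all views that see the point. The median is one reasonable, outlier-tolerant choice, not necessarily the combination rule intended by the patent.

```python
import numpy as np

def typical_color(samples):
    """Colour for one surface pixel from every image that observes it.

    samples : (N, 3) RGB values gathered by mapping the surface point into each
              image with its estimated pose and object placement.  The median
              resists a few views where the point is shadowed, occluded, or
              crossed by a moving object.
    """
    return np.median(np.asarray(samples, float), axis=0)
```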
[0073] As part of this embodiment polygonal surfaces of a polyhedral three- dimensional model may be separated into approximately equally sized portions. This may allow for better mapping of surface detail from a series of images onto the model, particularly for surfaces that are only partially visible in some of the images. Figure 10 shows a perspective drawing of surface 200, from Figure 2, which has been separated into approximately equal sized portions 1000. Surface detail 1002 for one portion, which may be added to the polyhedral model, is also shown. By adding surface detail to a polyhedral model of the scene in this manner, a hybrid three-dimensional model of the scene may be created.
[0074] Figure 11 is a flowchart illustrating an exemplary embodiment of the present invention to create such a hybrid three dimensional model of a scene using a series of images of the scene. The first step is to create a polyhedral model of the scene, step 1100. The polyhedral model desirably includes a plurality of polygonal surfaces, which are larger than a predetermined size for surface portions. Any standard technique for creating the model may be used. The rough shape and placement of objects in the polyhedral model may be refined according to the techniques of either of the flowcharts in Figures 5 and 7, using previous, possibly low resolution, images of the scene and/or the series of images to be employed for adding surface detail to the model. It is also important to determine the camera poses of the series of images. The poses may be known, or may be determined by one of the methods described above.
[0076] Next, a polygonal face is selected from the polyhedral model for determination of surface details, step 1102. The selection may be predetermined to sweep the analysis from one side of the model to its opposite side, or it may be based on a determination of scene areas in the images where the modeled shape appears to differ significantly from the existing model. The latter selection method is most useful when refinement of surface details is desirably performed only in those areas of the model that deviate from the images by an amount greater than a certain threshold.
[0077] At this point, it may be desirable to estimate and represent a refined surface shape as a height map associated with the selected planar surface. Given the previously determined camera poses, these height maps may be estimated by dense 3D estimation techniques from two or more observed images. The outcome of this process is a hybrid shape representation that augments the polyhedral model with local shape variation.
[0078] Several relevant techniques have been previously disclosed: U.S. Patent number 5,963,664, entitled "METHOD AND SYSTEM FOR IMAGE COMBINATION USING A PARALLAX BASED TECHNIQUE", U.S. Patent number 5,259,040, entitled "METHOD FOR DETERMINING SENSOR MOTION AND SCENE STRUCTURE AND IMAGE PROCESSING SYSTEM THEREFOR", and U.S. Patent Application number 09/384118, entitled "METHOD AND APPARATUS FOR PROCESSING IMAGES". First, a batch of images is co-registered using the previously determined poses and shape model. Next, using change detection, regions are segmented out where the image alignment is consistently incorrect. In the detected change regions, the residual misalignment may be estimated, i.e. flow constrained along the epipolar lines. By progressing from coarse to fine image scale, such an estimation algorithm can handle a large amount of parallax, which may occur in areas of significant discrepancy between the given shape model and the actual 3D structure of the scene. The final step is to convert the resulting parallax magnitude to metric height.
[0079] In the cited patents and applications, it is recommended to sum up the image intensity errors over all pixels, between each inspection frame and a single designated reference frame. However, if the batch contains frames taken from widely disparate viewpoints, the appearance change may be large even if well aligned. Therefore, it may be desirable to sum up intensity errors only between adjacent images, or consecutive frames, if the series of images are a video sequence. A larger subset of images, or frames, may also be used, but it is often desirable for this subset to be selected to have viewpoints within a predetermined range.
[0080] If the batch of images used in the foregoing computation covers a sufficiently large range of views, the estimated shape may be very accurate; however, occlusion and deocclusion areas cannot be well represented using a depth map with respect to one reference view. In practice, a smaller batch of images may produce a shape estimate good enough to align and synthesize viewpoints in the vicinity of the source data. It is contemplated that an alternative may be to use the parallax estimation methods to independently estimate and represent shape within a series of overlapping batches of images, each with its own reference view. Each such estimate would be stored in a hybrid scene representation consisting of a polyhedral model plus view-dependent local shape variations.
[0081] Whether a height map of the local surface shape has been constructed or not, a separate texture map is next constructed for each polyhedral face in the model. The local height map may assist in this construction, but is not necessary. The first step of this process, step 1104 in Figure 11, is to separate the selected polygonal surface into portions of approximately the predetermined size for surface portions. This predetermined size may be as small as a single element within the model, or it may be as large as the entire polygonal surface. Next, one of these portions is selected, step 1106. Z-buffering may be used to detect and discard those images in which the selected portion is occluded by some other face, leaving a subset of images which contain the selected surface portion, step 1108.
[0082] To determine the color value for a portion X on the surface, X is mapped to every image in the subset, using object placement parameters for the surface, camera pose parameters and, if available, the previously estimated height maps. One method, possibly the simplest, is to combine the color values from the pixels in the subset of images corresponding to X. This may be done by averaging the color values, blending the color values or performing another mathematical function on the color values of the various images in the subset.
[0083] These methods of combining the color values ignore the possibly unequal quality of the images. The highest resolution, and most frontal view, of the face gives the most information about the surface appearance. Image resolution (e.g. in pixels/meter) can be assessed by the smaller singular value, μ, of the Jacobian dUi/dX, where X is measured in a 2D face-aligned coordinate system and Ui is the position in the i-th image corresponding to X. Thus, the image having the maximum value of μ may be selected at step 1110, and its color value(s) projected from pixel(s) Ui to portion X, step 1112.
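In code, the resolution measure reduces to the smallest singular value of a 2x2 Jacobian, as sketched below; how the Jacobian itself is obtained (analytically or by finite differences of the face-to-image mapping) is left open here.

```python
import numpy as np

def face_resolution(J):
    """Resolution of a face in one image: the smaller singular value of the
    2x2 Jacobian dUi/dX of image position with respect to face-aligned
    coordinates.  Larger values mean a more frontal, higher-resolution view."""
    return float(np.linalg.svd(np.asarray(J, float), compute_uv=False)[-1])
```

The image maximizing this value would then be the one selected at step 1110.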
[0084] The color and brightness at corresponding points in different images may not be identical, due to camera gain variations and non-Lambertian surface materials. Abruptly switching between different source images while computing the appearance of a single face may cause seams to appear in the texture map. Accordingly, it may be desirable to use pixel values from a single image to map texture onto a given surface of the model, if possible.
[0085] Alternatively, once it has been determined that all of the portions of a given surface have been mapped, step 1114, any artifacts caused by using multiple images to generate the texture maps of the portions of a single surface may be mitigated through multi-resolution blending of the portions, step 1116, as taught in U.S. Patent number 6,075,905, entitled "METHOD AND APPARATUS FOR MOSAIC IMAGE CONSTRUCTION" . This step may also be employed when a single image is used to map texture onto all of the portions of a given surface of the model.
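A Laplacian-pyramid blend is one common form of multi-resolution blending and is sketched below for single-channel texture patches whose dimensions are divisible by 2**levels (assumptions made to keep the sketch short). The cited patent describes its own mosaic construction method, which this sketch does not attempt to reproduce.

```python
import cv2
import numpy as np

def blend_multiresolution(tex_a, tex_b, mask, levels=4):
    """Blend two texture patches band by band to hide seams (step 1116).

    tex_a, tex_b : single-channel float images of equal size.
    mask         : float image in [0, 1]; 1 where tex_a should dominate.
    """
    def gaussian(img):
        pyr = [np.float32(img)]
        for _ in range(levels):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def laplacian(img):
        g = gaussian(img)
        return [g[i] - cv2.pyrUp(g[i + 1]) for i in range(levels)] + [g[levels]]

    la, lb, gm = laplacian(tex_a), laplacian(tex_b), gaussian(mask)
    bands = [gm[i] * la[i] + (1.0 - gm[i]) * lb[i] for i in range(levels + 1)]
    out = bands[-1]
    for lvl in range(levels - 1, -1, -1):     # collapse the blended pyramid
        out = cv2.pyrUp(out) + bands[lvl]
    return out
```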
[0086] The foregoing method to create a surface texture map of a hybrid three dimensional model may be used whether the scene is initially modeled by planar faces alone or by a model of planar faces plus shape variations. In the generalized representation using view-dependent local shape, texture may be recovered for each local reference view. The result would be a 3D-scene model with view-dependent shape and texture.
[0087] One exemplary method to synthesize a new image from a novel viewpoint may be to interpolate the shape and texture of two or more nearby reference views closest to the desired viewpoint, according to the teachings of U.S. Patent Application number 08/917402, entitled "METHOD AND SYSTEM FOR RENDERING AND COMBINING IMAGES TO FORM A SYNTHESIZED VIEW OF A SCENE CONTAINING IMAGE INFORMATION FROM A SECOND IMAGE" .
[0088] Figure 12 is a perspective drawing of a scene, 1200, illustrating an exemplary video flashlights method of the present invention, which may be employed to synthesize an image from a novel viewpoint using a dynamic three dimensional model, such as those previously described with regard to the flowcharts of Figures 5, 7, and 11. The exemplary scene, 1200, is being filmed by video cameras 1202, which may be in motion. At any given time, images from these cameras contain only a portion of the scene, 1204.
[0089] It has been previously demonstrated that current video images of a semi- urban environment can be aligned in near real-time to site models. The textured models can then be rendered using graphics pipeline processors. A visual metaphor for this process of combining models with videos is that of video flashlights 'illuminating' portions 1204 of the model. The multiple camera views at a given time instant may be considered as video flashlights capturing the appearance of the scene from their respective viewpoints. The multiple appearances are coherently combined with the model to provide multiple users the ability to navigate through the environment while viewing the current appearance as derived from the video flashlights.
[0090] Figure 13 is a flowchart illustrating an exemplary embodiment of the present invention to create virtual images of a scene and a dynamic three-dimensional model of the scene using this video flashlight method.
[0091] This exemplary method involves two concurrent processes. Both of these processes start from a preexisting three dimensional model of the scene, 1300. The first of these processes, updating the model using the incoming video data, is exemplified in Figure 13 as steps 1302, 1304, 1306 and 1308. Any of the methods previously described above for updating a three dimensional model may be used. This dynamic updating process is continuous, so that the model may contain the most currently available data.
[0092] The second process is the creation of output image sequence(s) based on the dynamic three-dimensional model. This process involves the selection of a virtual viewpoint, step 1310, and projection of the current model onto the selected viewpoint, step 1312. The viewpoint selection may be controlled manually, set to track a feature or object in the model, or may follow a predetermined pattern. Projection of a three dimensional model to form an image as seen from a selected point may be performed by a number of well known methods, such as Z-buffering.
[0093] Surveillance provides one possible application of this embodiment of the present invention. In a surveillance application using simultaneously deployed moving cameras, it may be difficult for a human operator to fuse and interpret real-time video streams displayed on separate viewing screens. The relationship of the streams to the larger environment is not evident from the images, which may be unstable and narrow in field of view. Ideally, a visualization should portray the world as if the user were actually looking at the live scene, decoupled from the paths of the cameras that are collecting the imagery.
[0094] An exemplary embodiment of the present invention registers all video frames to the model so that images from several cameras at the same time instant can be projected onto the model, like flashlights illuminating the scene, which may then be rendered for any user-selected viewpoint. In the context of the scene model, it may become easy to interpret the imagery and the dynamic events taking place in all streams at once.
[0095] It is contemplated that the three dimensional model may be constructed to be devoid of moving objects and/or objects that are difficult to model onto polyhedra. These objects may appear in the video flashlights. In a virtual imaging system of this design, it may be desirable to make control of the video flashlight(s) responsive to movement of the virtual viewpoint(s).
[0096] Additionally, it is contemplated that the methods previously described may be carried out within a general purpose computer system instructed to perform these functions by means of a computer-readable medium. Such computer-readable media include integrated circuits, magnetic and optical storage media, as well as audio-frequency, radio-frequency, and optical carrier waves.
[0097] The embodiments of the present invention have been described with regard to polyhedral and hybrid three-dimensional models since they are appropriate for urban scenes dominated by planar structures, a present domain of application. Also, this format is well supported by software tools and graphics hardware. It is understood that those skilled in the art may find it advantageous to use other three dimensional models with these methods. Such use does not depart from the scope of the present invention. In the same vein, it will be understood by those skilled in the art that many modifications and variations may be made to the foregoing preferred embodiment without substantially altering the invention.

Claims

What is Claimed:
1. A method for accurately estimating a pose of a camera within a scene using a three dimensional model of the scene, comprising the steps of:
(a) generating an initial estimate of the pose;
(b) selecting a set of relevant features of the three dimensional model based on the initial estimate of the pose;
(c) creating a virtual projection of the set of relevant features responsive to the initial estimate of the pose;
(d) matching a plurality of features of an image received from the camera to the virtual projection of the set of relevant features and measuring a plurality of matching errors; and
(e) updating the estimate of the pose to reduce the plurality of matching errors.
2. A method for accurately estimating a pose of an image of a scene using a three dimensional model of the scene, comprising the steps of:
(a) performing a pyramid decomposition of the image to generate a set of pyramid levels of the image;
(b) selecting one pyramid level from the set of pyramid levels;
(c) generating an estimate of the pose using the selected pyramid level;
(d) selecting a set of relevant features of the three dimensional model based on the estimate of the pose;
(e) creating a virtual projection of the set of relevant features responsive to the estimate of the pose;
(f) matching a plurality of features of the selected pyramid level to the virtual projection of the set of relevant features and measuring a plurality of matching errors;
(g) updating the estimate of the pose to reduce the plurality of matching errors; and
(h) repeating steps (e), (f), and (g) using the updated estimate of the pose until the plurality of matching errors are less than predetermined matching criteria which is responsive to the selected pyramid level.

3. A method for refining a three dimensional model of a scene using an image of the scene taken by a camera having an unknown pose, comprising the steps of:
(a) comparing the image to the three dimensional model of the scene to generate an estimate of the pose; and
(b) updating the three dimensional model of the scene based on data from the image and the estimate of the pose.

4. A method for accurately estimating a position of a remote vehicle using a three dimensional model and an image from a camera having a known orientation relative to the remote vehicle, comprising the steps of:
(a) comparing the image to the three dimensional model of the scene to generate an estimate of the pose;
(b) selecting a set of relevant features of the three dimensional model based on the estimate of the pose;
(c) matching a plurality of features of the image to the set of relevant features and measuring a plurality of matching errors;
(d) updating the estimate of the pose based on the plurality of matching errors; and
(e) determining the position of the remote vehicle based on the estimate of the pose and the orientation of the camera.

5. A method for refining a three dimensional model of a scene containing an object using a plurality of images of the scene, each image including the object, comprising the steps of:
(a) comparing a first image of the plurality of images to the three dimensional model of the scene to generate an estimate of a first viewpoint corresponding to the first image;
(b) comparing a second image of the plurality of images to the three dimensional model of the scene to generate an estimate of a second viewpoint corresponding to the second image;
(c) selecting a first set of relevant features of the three dimensional model based on the first viewpoint;
(d) matching a plurality of first features of the first image to the first set of relevant features and measuring a plurality of first matching errors;
(e) selecting a second set of relevant features of the three dimensional model based on the second viewpoint;
(f) matching a plurality of second features of the second image to the second set of relevant features and measuring a plurality of second matching errors; and
(g) updating a position estimate of the object within the three dimensional model of the scene based on the plurality of first matching errors and the plurality of second matching errors.
6. A method for refining a three dimensional model of a scene containing an object using a plurality of images of the scene, each image including the object, comprising the steps of:
(a) selecting a subset of images from the plurality of images of the scene, the subset of images containing at least two of the images;
(b) determining a plurality of approximate relative viewpoints of the subset of images;
(c) comparing each image in the subset of images to the three dimensional model to generate a subset of estimated viewpoints corresponding to the subset of images, the subset of estimated viewpoints constrained by the plurality of approximate relative viewpoints;
(d) selecting a set of relevant features of the three dimensional model corresponding to each estimated viewpoint;
(e) matching a plurality of features of each image in the subset of images to the corresponding set of relevant features and measuring a plurality of matching errors; and
(f) updating a position estimate of the object within the three dimensional model of the scene based on the plurality of matching errors.

7. A method for creating a hybrid three dimensional model of a scene using a plurality of images of the scene, comprising the steps of:
(a) creating a polyhedral model of the scene including at least one polygonal surface;
(b) determining a first set of images containing at least a section of a first polygonal surface; and
(c) comparing a plurality of images selected from the first set of images to generate a local surface shape map corresponding to the first polygonal surface.

8. A method for creating a textured three dimensional model of a scene using a plurality of images of the scene, comprising the steps of:
(a) creating a polyhedral model of the scene including at least one polygonal surface;
(b) identifying at least one portion of the one polygonal surface;
(c) determining a first subset of images containing the at least one portion of the one polygonal surface;
(d) selecting at least one selected image of the first subset of images;
(e) projecting a corresponding section of each of the at least one selected image onto the at least one portion of the one polygonal surface of the textured three dimensional model as a local color map; and
(f) repeating steps (c), (d), and (e) for each remaining portion of the one polygonal surface of the polyhedral model.

9. A method for creating a dynamic sequence of virtual images of a scene using a dynamically updated three dimensional model of the scene, comprising the steps of:
(a) updating the three dimensional model using a video sequence of images of the scene, including the steps of:
(a1) determining a present viewpoint of a present image of the video sequence of images;
(a2) determining a relevant portion of the three dimensional model corresponding to the present image of the video sequence of images;
(a3) updating the relevant portion of the three dimensional model by projecting the present image onto the relevant portion of the three dimensional model;
(b) selecting a first virtual viewpoint of a first virtual image of the dynamic sequence of virtual images;
(c) creating the first virtual image by projecting the dynamic three dimensional model onto the first virtual viewpoint; and
(d) repeating steps (b) and (c) for each remaining virtual image of the dynamic sequence of virtual images.
10. A computer readable medium adapted to instruct a general purpose computer to update a three dimensional model of a scene using the three dimensional model of the scene and an image received from a camera having an unknown pose, the method comprising the steps of:
(a) generating an estimate of the pose;
(b) selecting a set of relevant features of the three dimensional model based on the estimate of the pose;
(c) creating a virtual projection of the set of relevant features responsive to the estimate of the pose;
(d) matching a plurality of features of an image received from the camera to the virtual projection of the set of relevant features and measuring a plurality of matching errors;
(e) updating the estimate of the pose to reduce the plurality of matching errors; and
(f) updating the three dimensional model of the scene based on data from the image and the estimate of the pose.

11. An automatic three-dimensional model updating apparatus for accurately estimating a point of view of an image of a scene, relative to a three-dimensional model of the scene, and updating the three-dimensional model, comprising:
(a) estimating means for providing an estimate of the point of view of the image;
(b) relevant feature selecting means for selecting a set of relevant features of the three dimensional model based on the estimate of the point of view;
(c) virtual projection means for creating a virtual projection of the set of relevant features responsive to the estimate of the point of view;
(d) matching means for matching a plurality of features of the image to the virtual projection of the set of relevant features;
(e) measurement means for measuring a plurality of matching errors;
(f) point of view refinement means for updating the estimate of the point of view to reduce the plurality of matching errors; and
(g) model refinement means, responsive to the estimated point of view and to the image, for updating the three-dimensional model.
PCT/US2001/007099 2000-03-07 2001-03-07 Camera pose estimation WO2001067749A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001250802A AU2001250802A1 (en) 2000-03-07 2001-03-07 Camera pose estimation
EP01924119A EP1297691A2 (en) 2000-03-07 2001-03-07 Camera pose estimation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US18755700P 2000-03-07 2000-03-07
US60/187,557 2000-03-07

Publications (2)

Publication Number Publication Date
WO2001067749A2 true WO2001067749A2 (en) 2001-09-13
WO2001067749A3 WO2001067749A3 (en) 2003-01-23

Family

ID=22689450

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/007099 WO2001067749A2 (en) 2000-03-07 2001-03-07 Camera pose estimation

Country Status (4)

Country Link
US (1) US6985620B2 (en)
EP (1) EP1297691A2 (en)
AU (1) AU2001250802A1 (en)
WO (1) WO2001067749A2 (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1449180A2 (en) * 2001-11-02 2004-08-25 Sarnoff Corporation Method and apparatus for providing immersive surveillance
WO2005124694A1 (en) * 2004-06-21 2005-12-29 Totalförsvarets Forskningsinstitut Device and method for presenting an image of the surrounding world
EP1627358A2 (en) * 2003-03-11 2006-02-22 Sarnoff Corporation Method and apparatus for determining camera pose from point correspondences
US7259778B2 (en) 2003-07-01 2007-08-21 L-3 Communications Corporation Method and apparatus for placing sensors using 3D models
WO2008031369A1 (en) * 2006-09-15 2008-03-20 Siemens Aktiengesellschaft System and method for determining the position and the orientation of a user
US7633520B2 (en) 2003-06-19 2009-12-15 L-3 Communications Corporation Method and apparatus for providing a scalable multi-camera distributed video processing and visualization surveillance system
CN101516040B (en) * 2008-02-20 2011-07-06 华为终端有限公司 Video matching method, device and system
EP2347370A1 (en) * 2008-10-08 2011-07-27 Strider Labs, Inc. System and method for constructing a 3d scene model from an image
EP2542994A2 (en) * 2010-03-02 2013-01-09 Crown Equipment Limited Method and apparatus for simulating a physical environment to facilitate vehicle operation and task completion
US9188982B2 (en) 2011-04-11 2015-11-17 Crown Equipment Limited Method and apparatus for efficient scheduling for multiple automated non-holonomic vehicles using a coordinated path planner
WO2015181827A1 (en) * 2014-05-28 2015-12-03 Elbit Systems Land & C4I Ltd. Method and system for image georegistration
US9206023B2 (en) 2011-08-26 2015-12-08 Crown Equipment Limited Method and apparatus for using unique landmarks to locate industrial vehicles at start-up
WO2016053067A1 (en) * 2014-10-03 2016-04-07 Samsung Electronics Co., Ltd. 3-dimensional model generation using edges
WO2016203731A1 (en) * 2015-06-17 2016-12-22 Mitsubishi Electric Corporation Method for reconstructing 3d scene as 3d model
WO2017124663A1 (en) * 2016-01-21 2017-07-27 杭州海康威视数字技术股份有限公司 Three-dimensional surveillance system, and rapid deployment method for same
US9987752B2 (en) 2016-06-10 2018-06-05 Brain Corporation Systems and methods for automatic detection of spills
US10001780B2 (en) 2016-11-02 2018-06-19 Brain Corporation Systems and methods for dynamic route planning in autonomous navigation
EP3343506A1 (en) * 2016-12-28 2018-07-04 Thomson Licensing Method and device for joint segmentation and 3d reconstruction of a scene
US10016896B2 (en) 2016-06-30 2018-07-10 Brain Corporation Systems and methods for robotic behavior around moving bodies
US10241514B2 (en) 2016-05-11 2019-03-26 Brain Corporation Systems and methods for initializing a robot to autonomously travel a trained route
US10274325B2 (en) 2016-11-01 2019-04-30 Brain Corporation Systems and methods for robotic mapping
US10282849B2 (en) 2016-06-17 2019-05-07 Brain Corporation Systems and methods for predictive/reconstructive visual object tracker
US10293485B2 (en) 2017-03-30 2019-05-21 Brain Corporation Systems and methods for robotic path planning
US10377040B2 (en) 2017-02-02 2019-08-13 Brain Corporation Systems and methods for assisting a robotic apparatus
USRE47908E1 (en) 1991-12-23 2020-03-17 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE48056E1 (en) 1991-12-23 2020-06-16 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
CN111369660A (en) * 2020-03-02 2020-07-03 中国电子科技集团公司第五十二研究所 Seamless texture mapping method for three-dimensional model
US10723018B2 (en) 2016-11-28 2020-07-28 Brain Corporation Systems and methods for remote operating and/or monitoring of a robot
GB2581403A (en) * 2019-02-13 2020-08-19 Perceptual Robotics Ltd Pose optimisation, mapping, and localisation techniques
CN111739068A (en) * 2020-05-06 2020-10-02 西安电子科技大学 Light field camera relative pose estimation method
US10852730B2 (en) 2017-02-08 2020-12-01 Brain Corporation Systems and methods for robotic mobile platforms
CN112584120A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion method
CN112584060A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion system
CN113284231A (en) * 2021-06-10 2021-08-20 中国水利水电第七工程局有限公司 Tower crane modeling method based on multi-dimensional information
US11827351B2 (en) 2018-09-10 2023-11-28 Perceptual Robotics Limited Control and navigation systems
US11886189B2 (en) 2018-09-10 2024-01-30 Perceptual Robotics Limited Control and navigation systems, pose optimization, mapping, and localization techniques

Families Citing this family (168)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040075738A1 (en) * 1999-05-12 2004-04-22 Sean Burke Spherical surveillance system architecture
US7620909B2 (en) 1999-05-12 2009-11-17 Imove Inc. Interactive image seamer for panoramic images
US7027642B2 (en) * 2000-04-28 2006-04-11 Orametrix, Inc. Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects
US9892606B2 (en) 2001-11-15 2018-02-13 Avigilon Fortress Corporation Video surveillance system employing video primitives
US8711217B2 (en) 2000-10-24 2014-04-29 Objectvideo, Inc. Video surveillance system employing video primitives
US20050162515A1 (en) * 2000-10-24 2005-07-28 Objectvideo, Inc. Video surveillance system
US8564661B2 (en) 2000-10-24 2013-10-22 Objectvideo, Inc. Video analytic rule detection system and method
US7424175B2 (en) 2001-03-23 2008-09-09 Objectvideo, Inc. Video segmentation using statistical pixel modeling
US7509241B2 (en) * 2001-07-06 2009-03-24 Sarnoff Corporation Method and apparatus for automatically generating a site model
US20030012410A1 (en) * 2001-07-10 2003-01-16 Nassir Navab Tracking and pose estimation for augmented reality using real features
JP4573085B2 (en) * 2001-08-10 2010-11-04 日本電気株式会社 Position and orientation recognition device, position and orientation recognition method, and position and orientation recognition program
JP4195382B2 (en) * 2001-10-22 2008-12-10 ユニバーシティ オブ サウザーン カリフォルニア Tracking system expandable with automatic line calibration
US7073158B2 (en) * 2002-05-17 2006-07-04 Pixel Velocity, Inc. Automated system for designing and developing field programmable gate arrays
US20040136590A1 (en) * 2002-09-20 2004-07-15 Albert-Jan Brouwer Means of partitioned matching and selective refinement in a render, match, and refine iterative 3D scene model refinement system through propagation of model element identifiers
EP1567988A1 (en) * 2002-10-15 2005-08-31 University Of Southern California Augmented virtual environments
EP1552724A4 (en) * 2002-10-15 2010-10-20 Korea Electronics Telecomm Method for generating and consuming 3d audio scene with extended spatiality of sound source
US7317812B1 (en) * 2002-11-15 2008-01-08 Videomining Corporation Method and apparatus for robustly tracking objects
IL155034A0 (en) * 2003-03-23 2004-06-20 M A M D Digital Data Proc Syst Automatic aerial digital photography and digital data processing systems
US7956889B2 (en) * 2003-06-04 2011-06-07 Model Software Corporation Video surveillance system
EP3190546A3 (en) * 2003-06-12 2017-10-04 Honda Motor Co., Ltd. Target orientation estimation using depth sensing
CA2540084A1 (en) * 2003-10-30 2005-05-12 Nec Corporation Estimation system, estimation method, and estimation program for estimating object state
GB2411532B (en) * 2004-02-11 2010-04-28 British Broadcasting Corp Position determination
WO2005081178A1 (en) * 2004-02-17 2005-09-01 Yeda Research & Development Co., Ltd. Method and apparatus for matching portions of input images
WO2005086089A1 (en) * 2004-03-03 2005-09-15 Nec Corporation Object posture estimation/correlation system, object posture estimation/correlation method, and program for the same
EP1607716A3 (en) 2004-06-18 2012-06-20 Topcon Corporation Model forming apparatus and method, and photographing apparatus and method
US20070288141A1 (en) * 2004-06-22 2007-12-13 Bergen James R Method and apparatus for visual odometry
JP4368767B2 (en) * 2004-09-08 2009-11-18 独立行政法人産業技術総合研究所 Abnormal operation detection device and abnormal operation detection method
US7342586B2 (en) * 2004-09-13 2008-03-11 Nbor Corporation System and method for creating and playing a tweening animation using a graphic directional indicator
US7460689B1 (en) * 2004-09-15 2008-12-02 The United States Of America As Represented By The Secretary Of The Army System and method of detecting, recognizing, and tracking moving targets
KR100579135B1 (en) * 2004-11-11 2006-05-12 한국전자통신연구원 Method for capturing convergent-type multi-view image
KR20060063265A (en) * 2004-12-07 2006-06-12 삼성전자주식회사 Method and apparatus for processing image
US20060210169A1 (en) * 2005-03-03 2006-09-21 General Dynamics Advanced Information Systems, Inc. Apparatus and method for simulated sensor imagery using fast geometric transformations
KR100999206B1 (en) * 2005-04-18 2010-12-07 인텔 코오퍼레이션 Method and apparatus for three-dimensional road layout estimation from video sequences by tracking pedestrians, and computer-readable recording medium therefor
US7706603B2 (en) * 2005-04-19 2010-04-27 Siemens Corporation Fast object detection for augmented reality systems
US7725727B2 (en) 2005-06-01 2010-05-25 International Business Machines Corporation Automatic signature generation for content recognition
US7720257B2 (en) * 2005-06-16 2010-05-18 Honeywell International Inc. Object tracking system
US7565029B2 (en) * 2005-07-08 2009-07-21 Seiko Epson Corporation Method for determining camera position from two-dimensional images that form a panorama
US7460730B2 (en) * 2005-08-04 2008-12-02 Microsoft Corporation Video registration and image sequence stitching
CA2622327A1 (en) * 2005-09-12 2007-03-22 Carlos Tapang Frame by frame, pixel by pixel matching of model-generated graphics images to camera frames for computer vision
US7760911B2 (en) * 2005-09-15 2010-07-20 Sarnoff Corporation Method and system for segment-based optical flow estimation
US8467904B2 (en) * 2005-12-22 2013-06-18 Honda Motor Co., Ltd. Reconstruction, retargetting, tracking, and estimation of pose of articulated systems
IL172995A (en) * 2006-01-05 2011-07-31 Gadi Royz Method and apparatus for making a virtual movie for use in exploring a site
US8509563B2 (en) * 2006-02-02 2013-08-13 Microsoft Corporation Generation of documents from images
EP2005748B1 (en) * 2006-04-13 2013-07-10 Curtin University Of Technology Virtual observer
CN101443789B (en) * 2006-04-17 2011-12-28 实物视频影像公司 video segmentation using statistical pixel modeling
US8924021B2 (en) * 2006-04-27 2014-12-30 Honda Motor Co., Ltd. Control of robots from human motion descriptors
JP4215781B2 (en) * 2006-06-16 2009-01-28 独立行政法人産業技術総合研究所 Abnormal operation detection device and abnormal operation detection method
JP4603512B2 (en) * 2006-06-16 2010-12-22 独立行政法人産業技術総合研究所 Abnormal region detection apparatus and abnormal region detection method
US8340349B2 (en) * 2006-06-20 2012-12-25 Sri International Moving target detection in the presence of parallax
US20080036864A1 (en) * 2006-08-09 2008-02-14 Mccubbrey David System and method for capturing and transmitting image data streams
GB0616293D0 (en) * 2006-08-16 2006-09-27 Imp Innovations Ltd Method of image processing
JP4429298B2 (en) * 2006-08-17 2010-03-10 独立行政法人産業技術総合研究所 Object number detection device and object number detection method
WO2008036354A1 (en) 2006-09-19 2008-03-27 Braintech Canada, Inc. System and method of determining object pose
US20080131029A1 (en) * 2006-10-10 2008-06-05 Coleby Stanley E Systems and methods for visualizing and measuring real world 3-d spatial data
US20080151049A1 (en) * 2006-12-14 2008-06-26 Mccubbrey David L Gaming surveillance system and method of extracting metadata from multiple synchronized cameras
WO2008076942A1 (en) * 2006-12-15 2008-06-26 Braintech Canada, Inc. System and method of identifying objects
EP1959668A3 (en) * 2007-02-19 2009-04-22 Seiko Epson Corporation Information processing method, information processing apparatus, and program
GB2459602B (en) * 2007-02-21 2011-09-21 Pixel Velocity Inc Scalable system for wide area surveillance
JP5248806B2 (en) * 2007-04-25 2013-07-31 キヤノン株式会社 Information processing apparatus and information processing method
JP5538667B2 (en) * 2007-04-26 2014-07-02 キヤノン株式会社 Position / orientation measuring apparatus and control method thereof
IL182799A (en) * 2007-04-26 2014-11-30 Nir Avrahami Method for estimating the pose of a ptz camera
US20090086023A1 (en) * 2007-07-18 2009-04-02 Mccubbrey David L Sensor system including a configuration of the sensor as a virtual sensor device
US20090066693A1 (en) * 2007-09-06 2009-03-12 Roc Carson Encoding A Depth Map Into An Image Using Analysis Of Two Consecutive Captured Frames
US8170287B2 (en) * 2007-10-26 2012-05-01 Honda Motor Co., Ltd. Real-time self collision and obstacle avoidance
WO2009058693A1 (en) * 2007-11-01 2009-05-07 Honda Motors Co., Ltd. Real-time self collision and obstacle avoidance using weighting matrix
CN101903885A (en) * 2007-12-18 2010-12-01 皇家飞利浦电子股份有限公司 Consistency metric based image registration
US9098766B2 (en) * 2007-12-21 2015-08-04 Honda Motor Co., Ltd. Controlled human pose estimation from depth image streams
US20090202102A1 (en) * 2008-02-08 2009-08-13 Hermelo Miranda Method and system for acquisition and display of images
TWI381719B (en) * 2008-02-18 2013-01-01 Univ Nat Taiwan Full-frame video stabilization with a polyline-fitted camcorder path
US20090245691A1 (en) * 2008-03-31 2009-10-01 University Of Southern California Estimating pose of photographic images in 3d earth model using human assistance
US8401276B1 (en) * 2008-05-20 2013-03-19 University Of Southern California 3-D reconstruction and registration
FR2932351B1 (en) * 2008-06-06 2012-12-14 Thales Sa METHOD OF OBSERVING SCENES COVERED AT LEAST PARTIALLY BY A SET OF CAMERAS AND VISUALIZABLE ON A REDUCED NUMBER OF SCREENS
US20100017033A1 (en) * 2008-07-18 2010-01-21 Remus Boca Robotic systems with user operable robot control terminals
JP2012503817A (en) * 2008-09-25 2012-02-09 テレ アトラス ベスローテン フエンノートシャップ Method and composition for blurring an image
US20120075296A1 (en) * 2008-10-08 2012-03-29 Strider Labs, Inc. System and Method for Constructing a 3D Scene Model From an Image
US8559699B2 (en) * 2008-10-10 2013-10-15 Roboticvisiontech Llc Methods and apparatus to facilitate operations in image based systems
WO2010045271A1 (en) 2008-10-14 2010-04-22 Joshua Victor Aller Target and method of detecting, identifying, and determining 3-d pose of the target
JP5290864B2 (en) * 2009-05-18 2013-09-18 キヤノン株式会社 Position and orientation estimation apparatus and method
TWI411870B (en) * 2009-07-21 2013-10-11 Teco Elec & Machinery Co Ltd Stereo image generating method and system
US20110115909A1 (en) * 2009-11-13 2011-05-19 Sternberg Stanley R Method for tracking an object through an environment across multiple cameras
US20110153338A1 (en) * 2009-12-17 2011-06-23 Noel Wayne Anderson System and method for deploying portable landmarks
US8635015B2 (en) * 2009-12-17 2014-01-21 Deere & Company Enhanced visual landmark for localization
US8224516B2 (en) * 2009-12-17 2012-07-17 Deere & Company System and method for area coverage using sector decomposition
US20110175918A1 (en) * 2010-01-21 2011-07-21 Cheng-Yun Karen Liu Character animation control interface using motion capture
WO2011140178A1 (en) * 2010-05-04 2011-11-10 Bae Systems National Security Solutions Inc. Inverse stereo image matching for change detection
US8818025B2 (en) * 2010-08-23 2014-08-26 Nokia Corporation Method and apparatus for recognizing objects in media content
US9160784B2 (en) 2010-10-15 2015-10-13 Hanwha Techwin Co., Ltd. Remote management system, remote management method, and monitoring server
US8692827B1 (en) * 2011-01-24 2014-04-08 Google Inc. Carving buildings from a three-dimensional model, and applications thereof
US9053455B2 (en) 2011-03-07 2015-06-09 Ricoh Company, Ltd. Providing position information in a collaborative environment
US9086798B2 (en) 2011-03-07 2015-07-21 Ricoh Company, Ltd. Associating information on a whiteboard with a user
US8698873B2 (en) * 2011-03-07 2014-04-15 Ricoh Company, Ltd. Video conferencing with shared drawing
US8881231B2 (en) 2011-03-07 2014-11-04 Ricoh Company, Ltd. Automatically performing an action upon a login
US9716858B2 (en) 2011-03-07 2017-07-25 Ricoh Company, Ltd. Automated selection and switching of displayed information
AU2011362799B2 (en) * 2011-03-18 2016-02-25 Apple Inc. 3D streets
US8810640B2 (en) * 2011-05-16 2014-08-19 Ut-Battelle, Llc Intrinsic feature-based pose measurement for imaging motion compensation
US9746988B2 (en) * 2011-05-23 2017-08-29 The Boeing Company Multi-sensor surveillance system with a common operating picture
FR2976107B1 (en) * 2011-05-30 2014-01-03 Commissariat Energie Atomique METHOD FOR LOCATING A CAMERA AND 3D RECONSTRUCTION IN A PARTIALLY KNOWN ENVIRONMENT
US8442308B2 (en) * 2011-08-04 2013-05-14 Cranial Technologies, Inc Method and apparatus for preparing image representative data
US8760513B2 (en) 2011-09-30 2014-06-24 Siemens Industry, Inc. Methods and system for stabilizing live video in the presence of long-term image drift
US9524555B2 (en) * 2011-12-12 2016-12-20 Beihang University Method and computer program product of the simultaneous pose and points-correspondences determination from a planar model
US9396577B2 (en) * 2012-02-16 2016-07-19 Google Inc. Using embedded camera parameters to determine a position for a three-dimensional model
US9141197B2 (en) * 2012-04-16 2015-09-22 Qualcomm Incorporated Interacting with a device using gestures
JP6044293B2 (en) * 2012-11-19 2016-12-14 株式会社Ihi 3D object recognition apparatus and 3D object recognition method
US20140168204A1 (en) * 2012-12-13 2014-06-19 Microsoft Corporation Model based video projection
US9251582B2 (en) 2012-12-31 2016-02-02 General Electric Company Methods and systems for enhanced automated visual inspection of a physical asset
JP6370038B2 (en) * 2013-02-07 2018-08-08 キヤノン株式会社 Position and orientation measurement apparatus and method
US9612211B2 (en) 2013-03-14 2017-04-04 General Electric Company Methods and systems for enhanced tip-tracking and navigation of visual inspection devices
US9406137B2 (en) 2013-06-14 2016-08-02 Qualcomm Incorporated Robust tracking using point and line features
CN105659170B (en) * 2013-06-27 2019-02-12 Abb瑞士股份有限公司 Method and video communication device for transmitting video to a remote user
US8818081B1 (en) * 2013-10-16 2014-08-26 Google Inc. 3D model updates using crowdsourced video
US9613298B2 (en) 2014-06-02 2017-04-04 Microsoft Technology Licensing, Llc Tracking using sensor data
US10679407B2 (en) * 2014-06-27 2020-06-09 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for modeling interactive diffuse reflections and higher-order diffraction in virtual environment scenes
US9977644B2 (en) 2014-07-29 2018-05-22 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for conducting interactive sound propagation and rendering for a plurality of sound sources in a virtual environment scene
GB201414144D0 (en) * 2014-08-08 2014-09-24 Imagination Tech Ltd Relightable texture for use in rendering an image
US10180734B2 (en) 2015-03-05 2019-01-15 Magic Leap, Inc. Systems and methods for augmented reality
US10838207B2 (en) 2015-03-05 2020-11-17 Magic Leap, Inc. Systems and methods for augmented reality
WO2016141373A1 (en) 2015-03-05 2016-09-09 Magic Leap, Inc. Systems and methods for augmented reality
US9665937B2 (en) * 2015-05-15 2017-05-30 Adobe Systems Incorporated Incremental global non-rigid alignment of three-dimensional scans
US10708571B2 (en) 2015-06-29 2020-07-07 Microsoft Technology Licensing, Llc Video frame processing
US10260862B2 (en) * 2015-11-02 2019-04-16 Mitsubishi Electric Research Laboratories, Inc. Pose estimation using sensors
US10347052B2 (en) * 2015-11-18 2019-07-09 Adobe Inc. Color-based geometric feature enhancement for 3D models
WO2017093034A1 (en) * 2015-12-01 2017-06-08 Brainlab Ag Method and apparatus for determining or predicting the position of a target
KR20180090355A (en) 2015-12-04 2018-08-10 매직 립, 인코포레이티드 Recirculation systems and methods
DE102015016045B8 (en) * 2015-12-11 2017-09-14 Audi Ag Satellite-based determination of a motor vehicle in a covered area
US9996936B2 (en) * 2016-05-20 2018-06-12 Qualcomm Incorporated Predictor-corrector based pose detection
US20170359561A1 (en) * 2016-06-08 2017-12-14 Uber Technologies, Inc. Disparity mapping for an autonomous vehicle
EP3494549A4 (en) 2016-08-02 2019-08-14 Magic Leap, Inc. Fixed-distance virtual and augmented reality systems and methods
US10630962B2 (en) * 2017-01-04 2020-04-21 Qualcomm Incorporated Systems and methods for object location
US10812936B2 (en) 2017-01-23 2020-10-20 Magic Leap, Inc. Localization determination for mixed reality systems
US10248744B2 (en) 2017-02-16 2019-04-02 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for acoustic classification and optimization for multi-modal rendering of real-world scenes
JP7055815B2 (en) 2017-03-17 2022-04-18 マジック リープ, インコーポレイテッド A mixed reality system that involves warping virtual content and how to use it to generate virtual content
CA3054617A1 (en) 2017-03-17 2018-09-20 Magic Leap, Inc. Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
KR102366781B1 (en) * 2017-03-17 2022-02-22 매직 립, 인코포레이티드 Mixed reality system with color virtual content warping and method for creating virtual content using same
GB2561368B (en) * 2017-04-11 2019-10-09 Nokia Technologies Oy Methods and apparatuses for determining positions of multi-directional image capture apparatuses
CN107396046A (en) * 2017-07-20 2017-11-24 武汉大势智慧科技有限公司 Stereoscopic monitoring system and method based on a true three-dimensional model from oblique photography
US10347008B2 (en) * 2017-08-14 2019-07-09 Trimble Inc. Self positioning camera system to 3D CAD/BIM model
AU2017225023A1 (en) * 2017-09-05 2019-03-21 Canon Kabushiki Kaisha System and method for determining a camera pose
CN109559271B (en) * 2017-09-26 2023-02-28 富士通株式会社 Method and device for optimizing depth image
EP3477595A1 (en) * 2017-10-31 2019-05-01 Thomson Licensing Method and apparatus for selecting a surface in a light field, and corresponding computer program product
US10967862B2 (en) 2017-11-07 2021-04-06 Uatc, Llc Road anomaly detection for autonomous vehicle
JP7132730B2 (en) * 2018-03-14 2022-09-07 キヤノン株式会社 Information processing device and information processing method
EP3543903B1 (en) * 2018-03-22 2021-08-11 Canon Kabushiki Kaisha Image processing apparatus and method, and storage medium storing instruction
US10276075B1 (en) * 2018-03-27 2019-04-30 Christie Digital System USA, Inc. Device, system and method for automatic calibration of image devices
US11494938B2 (en) 2018-05-15 2022-11-08 Northeastern University Multi-person pose estimation using skeleton prediction
US11032664B2 (en) 2018-05-29 2021-06-08 Staton Techiya, Llc Location based audio signal message processing
CN117711284A (en) 2018-07-23 2024-03-15 奇跃公司 In-field subcode timing in a field sequential display
EP3827299A4 (en) 2018-07-23 2021-10-27 Magic Leap, Inc. Mixed reality system with virtual content warping and method of generating virtual content using same
MX2021007337A (en) * 2018-12-19 2021-07-15 Fraunhofer Ges Forschung Apparatus and method for reproducing a spatially extended sound source or apparatus and method for generating a bitstream from a spatially extended sound source.
US11774983B1 (en) * 2019-01-02 2023-10-03 Trifo, Inc. Autonomous platform guidance systems with unknown environment mapping
WO2020144784A1 (en) * 2019-01-09 2020-07-16 株式会社Fuji Image processing device, work robot, substrate inspection device, and specimen inspection device
WO2020202163A1 (en) * 2019-04-02 2020-10-08 Buildots Ltd. Determining position of an image capture device
US10997747B2 (en) 2019-05-09 2021-05-04 Trimble Inc. Target positioning with bundle adjustment
US11002541B2 (en) 2019-07-23 2021-05-11 Trimble Inc. Target positioning with electronic distance measuring and bundle adjustment
US11080861B2 (en) 2019-05-14 2021-08-03 Matterport, Inc. Scene segmentation using model subtraction
US10930012B2 (en) * 2019-05-21 2021-02-23 International Business Machines Corporation Progressive 3D point cloud segmentation into object and background from tracking sessions
US11335021B1 (en) * 2019-06-11 2022-05-17 Cognex Corporation System and method for refining dimensions of a generally cuboidal 3D object imaged by 3D vision system and controls for the same
US11605177B2 (en) 2019-06-11 2023-03-14 Cognex Corporation System and method for refining dimensions of a generally cuboidal 3D object imaged by 3D vision system and controls for the same
US11138760B2 (en) * 2019-11-06 2021-10-05 Varjo Technologies Oy Display systems and methods for correcting drifts in camera poses
CN110910338A (en) * 2019-12-03 2020-03-24 煤炭科学技术研究院有限公司 Three-dimensional live-action video acquisition method, device, equipment and storage medium
US11297116B2 (en) * 2019-12-04 2022-04-05 Roblox Corporation Hybrid streaming
US11272164B1 (en) * 2020-01-17 2022-03-08 Amazon Technologies, Inc. Data synthesis using three-dimensional modeling
US20210358173A1 (en) * 2020-05-11 2021-11-18 Magic Leap, Inc. Computationally efficient method for computing a composite representation of a 3d environment
CN111640181A (en) * 2020-05-14 2020-09-08 佳都新太科技股份有限公司 Interactive video projection method, device, equipment and storage medium
CN114640785A (en) * 2020-12-16 2022-06-17 华为技术有限公司 Site model updating method and system
EP4036859A1 (en) * 2021-01-27 2022-08-03 Maxar International Sweden AB A system and method for providing improved geocoded reference data to a 3d map representation
WO2022241574A1 (en) * 2021-05-20 2022-11-24 Eigen Innovations Inc. Texture mapping to polygonal models for industrial inspections
CN114257786B (en) * 2021-12-16 2023-04-07 珠海格力电器股份有限公司 Monitoring method and device, smart door viewer (peephole camera), and storage medium
WO2023180456A1 (en) * 2022-03-23 2023-09-28 Basf Se Method and apparatus for monitoring an industrial plant

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850469A (en) * 1996-07-09 1998-12-15 General Electric Company Real time tracking of camera pose
EP0898245A1 (en) * 1997-08-05 1999-02-24 Canon Kabushiki Kaisha Image processing apparatus
US6018349A (en) * 1997-08-01 2000-01-25 Microsoft Corporation Patch-based alignment method and apparatus for construction of image mosaics

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5838455A (en) * 1919-05-11 1998-11-17 Minolta Co., Ltd. Image processor with image data compression capability
US5644651A (en) * 1995-03-31 1997-07-01 Nec Research Institute, Inc. Method for the estimation of rotation between two frames via epipolar search for use in a three-dimensional representation
EP0838068B1 (en) * 1995-07-10 2005-10-26 Sarnoff Corporation Method and system for rendering and combining images
US6192145B1 (en) * 1996-02-12 2001-02-20 Sarnoff Corporation Method and apparatus for three-dimensional scene processing using parallax geometry of pairs of points
US6330356B1 (en) * 1999-09-29 2001-12-11 Rockwell Science Center Llc Dynamic visual registration of a 3-D object with a graphical model

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850469A (en) * 1996-07-09 1998-12-15 General Electric Company Real time tracking of camera pose
US6018349A (en) * 1997-08-01 2000-01-25 Microsoft Corporation Patch-based alignment method and apparatus for construction of image mosaics
EP0898245A1 (en) * 1997-08-05 1999-02-24 Canon Kabushiki Kaisha Image processing apparatus

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE48056E1 (en) 1991-12-23 2020-06-16 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE47908E1 (en) 1991-12-23 2020-03-17 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE49387E1 (en) 1991-12-23 2023-01-24 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US7522186B2 (en) 2000-03-07 2009-04-21 L-3 Communications Corporation Method and apparatus for providing immersive surveillance
EP1449180A2 (en) * 2001-11-02 2004-08-25 Sarnoff Corporation Method and apparatus for providing immersive surveillance
EP1449180A4 (en) * 2001-11-02 2008-05-14 Sarnoff Corp Method and apparatus for providing immersive surveillance
EP1627358A4 (en) * 2003-03-11 2011-08-24 Sarnoff Corp Method and apparatus for determining camera pose from point correspondences
EP1627358A2 (en) * 2003-03-11 2006-02-22 Sarnoff Corporation Method and apparatus for determining camera pose from point correspondences
US7633520B2 (en) 2003-06-19 2009-12-15 L-3 Communications Corporation Method and apparatus for providing a scalable multi-camera distributed video processing and visualization surveillance system
US7259778B2 (en) 2003-07-01 2007-08-21 L-3 Communications Corporation Method and apparatus for placing sensors using 3D models
WO2005124694A1 (en) * 2004-06-21 2005-12-29 Totalförsvarets Forskningsinstitut Device and method for presenting an image of the surrounding world
WO2008031369A1 (en) * 2006-09-15 2008-03-20 Siemens Aktiengesellschaft System and method for determining the position and the orientation of a user
CN101516040B (en) * 2008-02-20 2011-07-06 华为终端有限公司 Video matching method, device and system
EP2347370A1 (en) * 2008-10-08 2011-07-27 Strider Labs, Inc. System and method for constructing a 3d scene model from an image
EP2347370A4 (en) * 2008-10-08 2014-05-21 Strider Labs Inc System and method for constructing a 3d scene model from an image
EP2542994A4 (en) * 2010-03-02 2014-10-08 Crown Equipment Ltd Method and apparatus for simulating a physical environment to facilitate vehicle operation and task completion
EP2542994A2 (en) * 2010-03-02 2013-01-09 Crown Equipment Limited Method and apparatus for simulating a physical environment to facilitate vehicle operation and task completion
US9958873B2 (en) 2011-04-11 2018-05-01 Crown Equipment Corporation System for efficient scheduling for multiple automated non-holonomic vehicles using a coordinated path planner
US9188982B2 (en) 2011-04-11 2015-11-17 Crown Equipment Limited Method and apparatus for efficient scheduling for multiple automated non-holonomic vehicles using a coordinated path planner
US9206023B2 (en) 2011-08-26 2015-12-08 Crown Equipment Limited Method and apparatus for using unique landmarks to locate industrial vehicles at start-up
US9580285B2 (en) 2011-08-26 2017-02-28 Crown Equipment Corporation Method and apparatus for using unique landmarks to locate industrial vehicles at start-up
AU2015265416B2 (en) * 2014-05-28 2017-03-02 Elbit Systems Land & C4I Ltd. Method and system for image georegistration
EP3149698A4 (en) * 2014-05-28 2017-11-08 Elbit Systems Land and C4I Ltd. Method and system for image georegistration
WO2015181827A1 (en) * 2014-05-28 2015-12-03 Elbit Systems Land & C4I Ltd. Method and system for image georegistration
US10204454B2 (en) 2014-05-28 2019-02-12 Elbit Systems Land And C4I Ltd. Method and system for image georegistration
US9846963B2 (en) 2014-10-03 2017-12-19 Samsung Electronics Co., Ltd. 3-dimensional model generation using edges
WO2016053067A1 (en) * 2014-10-03 2016-04-07 Samsung Electronics Co., Ltd. 3-dimensional model generation using edges
CN107690650A (en) * 2015-06-17 2018-02-13 三菱电机株式会社 Method for reconstructing a 3D scene as a 3D model
WO2016203731A1 (en) * 2015-06-17 2016-12-22 Mitsubishi Electric Corporation Method for reconstructing 3d scene as 3d model
WO2017124663A1 (en) * 2016-01-21 2017-07-27 杭州海康威视数字技术股份有限公司 Three-dimensional surveillance system, and rapid deployment method for same
US10241514B2 (en) 2016-05-11 2019-03-26 Brain Corporation Systems and methods for initializing a robot to autonomously travel a trained route
US9987752B2 (en) 2016-06-10 2018-06-05 Brain Corporation Systems and methods for automatic detection of spills
US10282849B2 (en) 2016-06-17 2019-05-07 Brain Corporation Systems and methods for predictive/reconstructive visual object tracker
US10016896B2 (en) 2016-06-30 2018-07-10 Brain Corporation Systems and methods for robotic behavior around moving bodies
US10274325B2 (en) 2016-11-01 2019-04-30 Brain Corporation Systems and methods for robotic mapping
US10001780B2 (en) 2016-11-02 2018-06-19 Brain Corporation Systems and methods for dynamic route planning in autonomous navigation
US10723018B2 (en) 2016-11-28 2020-07-28 Brain Corporation Systems and methods for remote operating and/or monitoring of a robot
CN110121733A (en) * 2016-12-28 2019-08-13 交互数字Ce专利控股公司 The method and apparatus of joint segmentation and 3D reconstruct for scene
WO2018122087A1 (en) * 2016-12-28 2018-07-05 Thomson Licensing Method and device for joint segmentation and 3d reconstruction of a scene
EP3343506A1 (en) * 2016-12-28 2018-07-04 Thomson Licensing Method and device for joint segmentation and 3d reconstruction of a scene
US10377040B2 (en) 2017-02-02 2019-08-13 Brain Corporation Systems and methods for assisting a robotic apparatus
US10852730B2 (en) 2017-02-08 2020-12-01 Brain Corporation Systems and methods for robotic mobile platforms
US10293485B2 (en) 2017-03-30 2019-05-21 Brain Corporation Systems and methods for robotic path planning
US11886189B2 (en) 2018-09-10 2024-01-30 Perceptual Robotics Limited Control and navigation systems, pose optimization, mapping, and localization techniques
US11827351B2 (en) 2018-09-10 2023-11-28 Perceptual Robotics Limited Control and navigation systems
GB2581403B (en) * 2019-02-13 2022-11-23 Perceptual Robotics Ltd Pose optimisation, mapping, and localisation techniques
GB2581403A (en) * 2019-02-13 2020-08-19 Perceptual Robotics Ltd Pose optimisation, mapping, and localisation techniques
CN111369660B (en) * 2020-03-02 2023-10-13 中国电子科技集团公司第五十二研究所 Seamless texture mapping method of three-dimensional model
CN111369660A (en) * 2020-03-02 2020-07-03 中国电子科技集团公司第五十二研究所 Seamless texture mapping method for three-dimensional model
CN111739068A (en) * 2020-05-06 2020-10-02 西安电子科技大学 Light field camera relative pose estimation method
CN111739068B (en) * 2020-05-06 2024-03-01 西安电子科技大学 Light field camera relative pose estimation method
CN112584060A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion system
CN112584120A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion method
CN113284231A (en) * 2021-06-10 2021-08-20 中国水利水电第七工程局有限公司 Tower crane modeling method based on multi-dimensional information
CN113284231B (en) * 2021-06-10 2023-06-16 中国水利水电第七工程局有限公司 Tower crane modeling method based on multidimensional information

Also Published As

Publication number Publication date
AU2001250802A1 (en) 2001-09-17
EP1297691A2 (en) 2003-04-02
WO2001067749A3 (en) 2003-01-23
US20010043738A1 (en) 2001-11-22
US6985620B2 (en) 2006-01-10

Similar Documents

Publication Publication Date Title
US6985620B2 (en) Method of pose estimation and model refinement for video representation of a three dimensional scene
US7085409B2 (en) Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery
Akbarzadeh et al. Towards urban 3d reconstruction from video
CN105913489B (en) Indoor three-dimensional scene reconstruction method using planar features
Barnard et al. Computational stereo
Meilland et al. A spherical robot-centered representation for urban navigation
Pirchheim et al. Homography-based planar mapping and tracking for mobile phones
Pollefeys et al. Image-based 3D acquisition of archaeological heritage and applications
US20100045701A1 (en) Automatic mapping of augmented reality fiducials
KR20150013709A (en) A system for mixing or compositing in real-time, computer generated 3d objects and a video feed from a film camera
Lin et al. Development of a virtual reality GIS using stereo vision
Kumar et al. Pose estimation, model refinement, and enhanced visualization using video
CN109613974A (en) AR home furnishing experience method for large scenes
Koch et al. Wide-area egomotion estimation from known 3d structure
Asai et al. 3D modeling of outdoor environments by integrating omnidirectional range and color images
Se et al. Instant scene modeler for crime scene reconstruction
Park et al. Hand-held 3D scanning based on coarse and fine registration of multiple range images
EP1890263A2 (en) Method of pose estimation and model refinement for video representation of a three dimensional scene
Berger et al. Mixing synthetic and video images of an outdoor urban environment
Cheng et al. Texture mapping 3d planar models of indoor environments with noisy camera poses
Nowicki et al. Experimental verification of a walking robot self-localization system with the Kinect sensor
Pears et al. Mobile robot visual navigation using multiple features
Naikal et al. Image augmented laser scan matching for indoor localization
Boussias-Alexakis et al. Automatic adjustment of wide-base Google Street View panoramas
Rawlinson Design and implementation of a spatially enabled panoramic virtual reality prototype

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2001924119

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 2001924119

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP