WO2006009257A1 - 画像処理装置および画像処理方法 - Google Patents
画像処理装置および画像処理方法 Download PDFInfo
- Publication number
- WO2006009257A1 (application PCT/JP2005/013505)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- spatial composition
- image processing
- processing apparatus
- camera
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/536—Depth or shape recovery from perspective effects, e.g. by using vanishing points
Definitions
- the present invention relates to a technique for generating stereoscopic information from a still image, and in particular to a technique in which an object such as a person, an animal, or a building is extracted from the still image, and three-dimensional information indicating the depth of the entire still image, including the object, is generated. Background art
- as a conventional method of obtaining stereoscopic information from a still image, there is a method of generating stereoscopic information for an arbitrary viewpoint direction from still images taken by a plurality of cameras. A method of generating an image at a viewpoint or line-of-sight direction different from that at the time of imaging, by extracting stereoscopic information about the image at the time of imaging, has been shown (see, for example, Patent Document 1). This apparatus includes left and right image input units for inputting images, a distance calculation unit for calculating object distance information, and an image processing circuit for generating an image viewed from an arbitrary viewpoint and line-of-sight direction. Patent Document 2 and Patent Document 3 are conventional technologies along the same lines, presenting highly versatile image recording/reproducing apparatuses that record a plurality of images and their parallax.
- Patent Document 4 discloses a method of recognizing an accurate three-dimensional shape of an object at high speed by imaging the object from at least three different positions.
- many other such documents, such as Patent Document 5, have been presented.
- Patent Document 6 uses a television camera with a fish-eye lens to acquire the shape of an object with a single camera, without rotating the object. After a series of shots is taken, the background of each shot image is removed to find the vehicle silhouette. The movement trajectory of the ground contact point of the vehicle tires in each image is obtained, and from this, the relative position between the camera viewpoint and the vehicle in each image is determined. Using this relative positional relationship, each silhouette is placed in a projection space and projected onto it to obtain the shape of the vehicle.
- as a related technique, the epipolar technique is widely known.
- in Patent Document 6, instead of obtaining images of an object from a plurality of viewpoints with a plurality of cameras, a moving object is targeted, and three-dimensional information is acquired by obtaining multiple images in time series.
- FIG. 1 is a flowchart showing the flow of processing in the above-described prior art, from the generation of stereoscopic information from a still image to the generation of a stereoscopic image (of the steps in FIG. 1, the steps whose interiors are drawn with a mesh are performed manually by the user).
- first, information representing the spatial composition (spatial composition information) is input (S900): the number of vanishing points is determined (S901), the position of each vanishing point is adjusted (S902), the inclination of the spatial composition is input (S903), and the spatial composition is adjusted according to its position and size (S904).
- next, the user inputs a mask image obtained by masking the object (S910), and three-dimensional information is generated from the mask arrangement and the spatial composition information (S920).
- specifically, the user selects the area where the object is masked (S921), selects one surface (or one side) of the object (S922), and determines whether that surface is in contact with the spatial composition (S923). If it is not in contact (S923: No), non-contact is input (S924); if it is in contact (S923: Yes), the coordinates of the contacting part are input (S925).
- the above processing is performed on all surfaces of the object (S922 to S926).
- Patent Document 1: Japanese Patent Application Laid-Open No. 09-009143
- Patent Document 2: Japanese Patent Application Laid-Open No. 07-049944
- Patent Document 3: Japanese Patent Application Laid-Open No. 07-095621
- Patent Document 4: Japanese Patent Application Laid-Open No. 09-091436
- Patent Document 5: Japanese Patent Application Laid-Open No. 09-305796
- Patent Document 6: Japanese Patent Application Laid-Open No. 08-043056
- in the prior art described above, each object in the still image is manually extracted, the background image is manually created, and drafting-related spatial information such as vanishing points is manually set, after which each object is manually mapped to virtual three-dimensional information; there is thus a problem that three-dimensional information cannot be created easily.
- the present invention solves the above-described conventional problems, and an object of the present invention is to provide an image processing apparatus and the like that can reduce a user's workload when generating stereoscopic information from a still image.
- an image processing apparatus according to the present invention is an image processing apparatus that generates stereoscopic information from a still image, comprising: an image acquisition unit that acquires a still image;
- An object extracting means for extracting an object from the still image;
- a spatial composition specifying means for specifying a spatial composition representing a virtual space including a vanishing point using features of the acquired still image;
- a three-dimensional information generating means that determines the arrangement of the extracted object in the virtual space by associating it with the specified spatial composition, and generates three-dimensional information relating to the object from the determined arrangement.
- the image processing apparatus further includes a viewpoint control means that assumes a camera in the virtual space and moves the position of the camera, an image generating means for generating an image captured by the camera from an arbitrary position, and an image display means for displaying the generated image.
- the viewpoint control means controls the camera to move in a range where the generated stereoscopic information exists.
- the viewpoint control means is further characterized in that it controls the camera to move in a region where the object does not exist. With this configuration, collisions with and passing through objects by the camera moving in the virtual space can be avoided, and the image quality can be improved.
- the viewpoint control means is further characterized in that it controls the camera to shoot a region where an object indicated by the generated stereoscopic information exists.
- the viewpoint control means further controls the camera to move in the direction of the vanishing point.
- the viewpoint control means further controls the camera so as to advance in the direction of the object indicated by the generated stereoscopic information.
- the object extracting means specifies two or more non-parallel linear objects from the extracted objects, and the spatial composition specifying means further estimates the position of one or more vanishing points by extending the two or more specified linear objects, and specifies the spatial composition from the specified two or more linear objects and the estimated position of the vanishing point.
- the spatial composition specifying means further estimates the vanishing point even when it lies outside the still image.
- the image processing apparatus further includes user interface means for receiving instructions from the user, and the spatial composition specifying means further corrects the specified spatial composition according to the received instructions.
- the image processing apparatus further includes a spatial composition template storage unit that stores spatial composition templates serving as models for the spatial composition, and the spatial composition specifying unit selects one spatial composition template from the spatial composition template storage means using features of the acquired still image, and specifies the spatial composition using the selected spatial composition template.
- the three-dimensional information generating means further calculates a grounding point where the object touches a ground plane in the spatial composition, and generates the three-dimensional information for the case where the object exists at the position of the grounding point.
- with this configuration, the spatial arrangement of objects can be specified more accurately, and the quality of the entire image can be improved. For example, in the case of a photograph showing a full-body image of a person, the person can be mapped to a more accurate spatial position by calculating the contact point between the person's feet and the ground plane.
- the three-dimensional information generating means is characterized in that the surface of the spatial composition with which the object is in contact is changed according to the type of the object.
- the ground plane can be changed depending on the type of object, a more realistic spatial arrangement can be obtained, and the quality of the entire image can be improved.
- the three-dimensional information generating means further calculates, when a grounding point where the object contacts the ground plane of the spatial composition cannot be calculated, a virtual grounding point at which the object would contact the ground plane, by interpolation or extrapolation of at least one of the object and the ground plane, and generates the three-dimensional information for the case where the object exists at the position of the virtual grounding point.
- the three-dimensional information generating means further generates the three-dimensional information by giving a predetermined thickness to the object and arranging the object in a space.
- the three-dimensional information generating means may generate the three-dimensional information by adding image processing for blurring or sharpening the periphery of the object.
- the three-dimensional information generating means may further complement at least one of background data and data of another object that is missing due to being hidden behind the object.
- the three-dimensional information generating means is characterized in that data representing the back surface and side surfaces of the object are also constructed from the data of the front surface of the object.
- the three-dimensional information generation means is characterized in that the process related to the object is dynamically changed based on the type of the object.
- the present invention can be realized not only as such an image processing apparatus, but also as an image processing method having the characteristic constituent means of the image processing apparatus as steps, or as a program for causing a personal computer or the like to execute these steps. It goes without saying that such a program can be widely distributed via recording media such as DVDs and transmission media such as the Internet.
- according to the image processing apparatus of the present invention, three-dimensional information can be reconstructed from a photograph (still image) into an image having depth by a very simple operation, which was not possible conventionally. In addition, by moving a virtual camera and shooting inside the 3D space, still images can be enjoyed as a moving image without any complicated work, providing a new way of enjoying them.
- FIG. 1 is a flowchart showing the contents of processing for generating stereoscopic information from a still image in the prior art.
- FIG. 2 is a block diagram showing a functional configuration of the image processing apparatus according to the present embodiment.
- FIG. 3 (a) is an example of an original image input to an image acquisition unit according to the present embodiment.
- FIG. 3 (b) is an example of an image obtained by binarizing the original image of FIG. 3 (a).
- FIG. 4 (a) is an example of edge extraction according to the present embodiment.
- Fig. 4 (b) shows an example of extracting a spatial composition according to this embodiment.
- FIG. 4 (c) is a diagram showing an example of a spatial composition confirmation screen according to the present embodiment.
- FIGS. 5 (a) and 5 (b) are diagrams showing an example of a spatial composition extraction template in the first embodiment.
- FIGS. 6 (a) and 6 (b) are diagrams showing an example of an enlarged spatial composition extraction template according to the first embodiment.
- FIG. 7 (a) is a diagram showing an example of object extraction in the first embodiment.
- FIG. 7B is an example of an image obtained by combining the extracted object and the determined spatial composition in the first embodiment.
- FIG. 8 is a diagram showing an example of setting a virtual viewpoint in the first embodiment.
- FIGS. 9 (a) and 9 (b) are diagrams showing a generation example of a viewpoint change image in the first embodiment.
- FIG. 10 is an example of a spatial composition extraction template in the first embodiment (in the case of one vanishing point).
- FIG. 11 is an example of a spatial composition extraction template in the first embodiment (in the case of two vanishing points).
- FIGS. 12 (a) and 12 (b) are examples of a spatial composition extraction template in Embodiment 1 (in the case of including a ridge line).
- FIG. 13 is an example of a spatial composition extraction template in Embodiment 1 (in the case of a vertical type including a ridge line).
- FIGS. 14 (a) and 14 (b) are diagrams showing an example of generation of synthetic three-dimensional information in the first embodiment.
- FIG. 15 is a diagram showing an example of changing the viewpoint position in the first embodiment.
- FIG. 16 (a) shows an example of changing the viewpoint position in the first embodiment.
- FIG. 16 (b) is a diagram showing an example of an image common part in the first embodiment.
- FIG. 16 (c) is a diagram showing an example of an image common part in the first embodiment.
- FIG. 17 is a diagram showing a transition example of image display in the first embodiment.
- FIGS. 18 (a) and 18 (b) are diagrams showing an example of camera movement in the first embodiment.
- FIG. 19 is a diagram showing an example of camera movement in the first embodiment.
- FIG. 20 is a flowchart showing a process flow in the spatial composition specifying unit in the first embodiment.
- FIG. 21 is a flowchart showing the flow of processing in the viewpoint control unit in the first embodiment.
- FIG. 22 is a flowchart showing a process flow in the three-dimensional information generation unit in the first embodiment.
Explanation of symbols
- FIG. 2 is a block diagram showing a functional configuration of the image processing apparatus according to the present embodiment.
- the image processing apparatus 100 generates stereoscopic information (also referred to as three-dimensional information) from a still image (also referred to as an “original image”), and generates a new image, that is, a stereoscopic image, using the generated stereoscopic information.
- the image acquisition unit 101 includes a storage device such as a RAM or a memory card, acquires image data of a still image or of each frame in a moving image via a digital camera, a scanner, or the like, and performs binarization and edge extraction.
- in the following, the above-described still images and per-frame images in a moving image are collectively referred to as “still images”.
- the spatial composition template storage unit 110 includes a storage device such as a RAM, and stores a spatial composition template used in the spatial composition specification unit 112.
- the “spatial composition template” refers to a framework composed of a plurality of line segments for representing the depth in a still image; it includes information such as the positions of the start and end points of each line segment, the positions of the intersections of the line segments, and a reference length relative to the still image.
- the spatial composition user IF unit 111 includes a mouse, a keyboard, a liquid crystal panel, and the like, receives instructions from the user, and notifies the spatial composition specifying unit 112 of them.
- the spatial composition specifying unit 112 determines a spatial composition (hereinafter also simply referred to as “composition”) for the still image based on the acquired edge information of the still image, object information described later, and the like. In addition, the spatial composition specifying unit 112 selects a spatial composition template from the spatial composition template storage unit 110 as necessary (modifying the selected template as needed), and specifies the spatial composition. Furthermore, the spatial composition specifying unit 112 may determine or correct the spatial composition with reference to the object extracted by the object extraction unit 122.
- the object template storage unit 120 includes a storage device such as a RAM or a hard disk, and stores object templates, parameters, and the like for extracting the object of the acquired original image.
- the object user IF unit 121 includes a mouse, a keyboard, and the like, and accepts operations from the user such as selecting a method used for extracting an object from a still image (template matching, neural networks, color information, etc.), selecting an object from among the object candidates presented by those methods, selecting the object itself, modifying the selected object, and adding corrections, templates, and methods for extracting objects.
- the object extraction unit 122 extracts objects from the still image, and specifies information about the objects (hereinafter referred to as “object information”) such as their position, number, shape, and type. Here, it is assumed that candidates for the objects to be extracted (for example, people, animals, buildings, plants, etc.) are determined in advance. Furthermore, the object extraction unit 122 refers to the object templates stored in the object template storage unit 120 as necessary, and extracts objects based on the correlation value between each template and the still image. An object may also be extracted, or the object may be corrected, with reference to the spatial composition determined by the spatial composition specifying unit 112.
- the three-dimensional information generation unit 130 generates the three-dimensional information for the acquired still image based on the spatial composition determined by the spatial composition specifying unit 112, the object information extracted by the object extraction unit 122, instructions received from the user via the three-dimensional information user IF unit 131, and the like. The three-dimensional information generation unit 130 is a microcomputer including a ROM, a RAM, and the like, and also controls the entire image processing apparatus 100.
- the three-dimensional information user IF unit 131 includes a mouse, a keyboard, and the like, and changes the three-dimensional information according to an instruction from the user.
- the information correction user IF unit 140 includes a mouse, a keyboard, and the like, receives an instruction from the user, and notifies the information correction unit 141 of the instruction.
- the information correction unit 141 corrects an erroneously extracted object, or corrects an erroneously specified spatial composition or stereoscopic information, based on user instructions received via the information correction user IF unit 140.
- as other correction methods, corrections may be made, for example, based on a rule base defined from the results of the object extraction, the spatial composition specification, and the three-dimensional information generation performed up to that point.
- the three-dimensional information storage unit 150 includes a storage device such as a hard disk, and stores three-dimensional information being created and three-dimensional information generated in the past.
- the three-dimensional information comparison unit 151 compares the whole or a part of three-dimensional information generated in the past with the whole or a part of the three-dimensional information currently being processed (or already processed), and when a matching point is found, provides information for enhancing the three-dimensional information to the three-dimensional information generation unit 130.
- the style/effect template storage unit 160 includes a storage device such as a hard disk, and stores styles and templates for effects, such as programs and data.
- the effect control unit 161 adds an arbitrary effect, such as a transition effect or color tone conversion, to the new image generated by the image generation unit 170.
- an effect group in a predetermined style may be used to give a sense of unity as a whole.
- the effect control unit 161 also adds new templates and the like to the style/effect template storage unit 160, and edits the referenced templates and the like.
- the effect user IF unit 162 includes a mouse, a keyboard, and the like, and notifies the effect control unit 161 of instructions from the user.
- the image generation unit 170 generates an image that three-dimensionally represents the still image based on the three-dimensional information generated by the three-dimensional information generation unit 130. Specifically, a new image derived from a still image is generated using the generated stereoscopic information.
- this image may be schematic, and the camera position and camera orientation may be displayed in the 3D image. Furthermore, the image generation unit 170 generates a new image using separately specified viewpoint information, display effects, and the like.
- the image display unit 171 is a display device such as a liquid crystal panel or a PDP, for example, and presents the image or video generated by the image generation unit 170 to the user.
- the viewpoint change template storage unit 180 stores a viewpoint change template that indicates a predetermined three-dimensional movement of camera work.
- the viewpoint control unit 181 determines the viewpoint position as camera work. At this time, the viewpoint control unit 181 may refer to the viewpoint change template stored in the viewpoint change template storage unit 180. Furthermore, the viewpoint control unit 181 creates, changes, and deletes a viewpoint change template based on a user instruction received via the viewpoint control user IF unit 182.
- the viewpoint control user IF unit 182 includes a mouse, a keyboard, and the like, and notifies the viewpoint control unit 181 of instructions received from the user relating to control of the viewpoint position.
- the camera work setting image generation unit 190 generates an image viewed from the current camera position, as a reference for the user when deciding the camera work.
- note that not all of the above functional elements are required; the image processing apparatus 100 can be configured by selecting functional elements as necessary.
- FIG. 3 (a) is an example of an original image according to the present embodiment.
- Fig. 3 (b) is an example of a binary image obtained by binarizing the original image.
- in order to determine the spatial composition, it is important to first roughly extract the main spatial composition (hereinafter referred to as the “schematic spatial composition”).
- in the present embodiment, “binarization” is performed in order to extract the schematic spatial composition, and then fitting by template matching is performed.
- note that binarization and template matching are merely examples of methods for extracting the schematic spatial composition, and the schematic spatial composition may be extracted using any other method.
- a detailed spatial composition may be extracted directly without extracting a schematic spatial composition.
- in the following, the schematic spatial composition and the detailed spatial composition are collectively referred to as the “spatial composition”.
- first, the image acquisition unit 101 binarizes the original image 201 to obtain a binary image 202, and further performs edge extraction on the binary image 202 to obtain an edge extraction image.
- Fig. 4 (a) is an example of edge extraction according to the present embodiment, Fig. 4 (b) is an example of extracting a spatial composition, and Fig. 4 (c) is a display example for confirming the spatial composition.
- after binarization, the image acquisition unit 101 performs edge extraction on the binary image 202, generates an edge extraction image 301, and outputs it to the spatial composition specifying unit 112 and the object extraction unit 122.
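The binarize-then-edge-extract step could be sketched as follows in Python. OpenCV, Otsu thresholding, and the Canny detector are illustrative assumptions; the embodiment leaves the concrete binarization and edge extraction methods open.

```python
# Minimal sketch of the binarization and edge extraction of the image
# acquisition unit 101 (library choice and thresholds are assumptions).
import cv2

def binarize_and_extract_edges(path: str):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)        # original image 201
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binary image 202
    edges = cv2.Canny(binary, 50, 150)                   # edge extraction image 301
    return binary, edges
```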
- the spatial composition specifying unit 112 generates a spatial composition using the edge extracted image 301. More specifically, the spatial composition specifying unit 112 extracts two or more non-parallel straight lines from the edge extraction image 301 and generates a “framework” obtained by combining these straight lines. This “framework” is the spatial composition.
- the spatial composition extraction example 302 in Fig. 4 (b) is an example of a spatial composition generated as described above. Further, the spatial composition specifying unit 112 corrects the spatial composition in the spatial composition confirmation image 303 so as to match the content of the original image, according to user instructions received via the spatial composition user IF unit 111.
- the spatial composition confirmation image 303 is an image for confirming the suitability of the spatial composition, obtained by combining the original image 201 and the spatial composition extraction example 302. User instructions received via the spatial composition user IF unit 111 are likewise followed when the user makes corrections, applies another spatial composition extraction, or adjusts the spatial composition extraction example 302.
- in the present embodiment, edge extraction is performed by “binarizing” the original image, but the present invention is not limited to this method; needless to say, edge extraction may be performed by existing image processing methods or combinations thereof.
- existing image processing methods include, but are not limited to, using color information, using luminance information, using orthogonal transformations and wavelet transformations, and using various one-dimensional and multidimensional filters.
- the spatial composition is not limited to being generated from the edge extraction image as described above; it may also be determined using a “spatial composition extraction template”, a template of spatial compositions prepared in advance for extracting the spatial composition.
- FIGS. 5A and 5B are examples of spatial composition extraction templates.
- the spatial composition specifying unit 112 selects a spatial composition extraction template as shown in FIGS. 5 (a) and 5 (b) from the spatial composition template storage unit 110 as necessary, and matches it against the original image 201 to determine the final spatial composition.
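One way such matching could be scored is sketched below: each template is rendered as a binary mask of its framework lines and scored by overlap with the edge image. The scoring rule is an assumption for illustration, not the patent's prescribed method.

```python
import numpy as np

def best_template_index(edge_image: np.ndarray, template_masks: list) -> int:
    # Score each template by how many of its framework pixels coincide
    # with edge pixels, normalized by the template's own line length.
    scores = []
    for mask in template_masks:
        hits = np.logical_and(edge_image > 0, mask > 0).sum()
        scores.append(hits / max((mask > 0).sum(), 1))
    return int(np.argmax(scores))
```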
- alternatively, the spatial composition may be estimated from arrangement information (information indicating where and what is present). Furthermore, a spatial composition can be determined by arbitrarily combining existing image processing methods such as segmentation (region division), orthogonal transformation, wavelet transformation, color information, and luminance information; as an example, the spatial composition may be determined based on the direction in which the boundary surface of each divided region faces. In addition, meta information attached to a still image (arbitrary tag information such as EXIF) may be used; for example, arbitrary tag information can be used for spatial composition extraction, such as determining from the focal length and subject depth whether a vanishing point, described later, is present in the image.
- the spatial composition user IF unit 111 can be used as an interface for performing all input / output desired by the user, such as inputting, modifying or changing a template, inputting, modifying, or changing spatial composition information itself.
- FIGS. 5 (a) and 5 (b) show vanishing points VP410 in each spatial composition extraction template.
- the spatial composition extraction template is not limited to these; as described later, templates corresponding to any image having (or perceived as having) depth information can be used, and similar arbitrary templates can be generated from one template. For example, a wall in the back direction can be included in the spatial composition extraction template, like the front back wall 420, and it goes without saying that the distance of the front back wall 420 in the back direction can be moved in the same manner as the vanishing point.
- examples of spatial composition extraction templates include: the case where there is one vanishing point, as in spatial composition extraction template examples 401 and 402; the case where there are two vanishing points (vanishing point 1001 and vanishing point 1002), as in spatial composition extraction template example 1010 in FIG. 11; the case where wall surfaces intersect from two directions (this also has two vanishing points), as in spatial composition extraction template 1110 in FIG. 12; the vertical type, as in spatial composition extraction template 1210 in FIG. 13; the case where the vanishing point is linear, such as the horizon (horizontal line), as in camera movement example 1700 in FIG. 18 (a); and the case where the vanishing point is outside the image range, as in camera movement example 1750 in FIG. 18 (b). Spatial compositions generally used in fields such as CAD and design can be used arbitrarily.
- alternatively, the spatial composition extraction template can be enlarged and used, like the enlarged spatial composition extraction template 520 in FIG. 6. In this case, it becomes possible to set vanishing points even for images in which the vanishing point lies outside the image, such as image range examples 501, 502, and 503 in FIGS. 6 (a) and 6 (b).
- any parameter related to the spatial composition such as the position of the vanishing point can be freely changed.
- for example, the spatial composition extraction template 910 in FIG. 10 can respond more flexibly to various spatial compositions by changing the position of the vanishing point, the wall height 903 of the front back wall 902, the wall width 904, and so on.
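A parameterized template of this kind could be represented as a small data structure, as in the sketch below; the field names and the corner-to-vanishing-point framework are illustrative assumptions, not the patent's data format.

```python
from dataclasses import dataclass

@dataclass
class OnePointCompositionTemplate:
    # Hypothetical parameter set for a one-vanishing-point template
    # like FIG. 10; names and fields are assumptions for illustration.
    vp_x: float          # vanishing point position (horizontal)
    vp_y: float          # vanishing point position (vertical)
    wall_width: float    # wall width 904 of the front back wall 902
    wall_height: float   # wall height 903 of the front back wall 902

    def receding_lines(self, img_w: int, img_h: int):
        # The four lines running from the image corners toward the
        # vanishing point (clipping at the front back wall is omitted).
        corners = [(0, 0), (img_w, 0), (0, img_h), (img_w, img_h)]
        return [((cx, cy), (self.vp_x, self.vp_y)) for cx, cy in corners]
```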
- the spatial composition extraction template 1010 in FIG. 11 shows an example in which the positions of two vanishing points (the vanishing point 1001 and the vanishing point 1002) are arbitrarily moved.
- the spatial composition parameters to be changed are not limited to the vanishing point and the front back wall; they can be changed for any target in the spatial composition, such as the side wall surfaces, the ceiling surface, and the front back wall surface.
- any state related to a surface, such as its inclination or its position in the spatial arrangement, can also be used as a sub-parameter.
- the change method is not limited to top, bottom, left and right, and deformation such as rotation, morphing, and affine transformation may be performed.
- FIG. 13 shows examples of vanishing points (vanishing points 1202, vanishing points 1201), ridge lines (ridge lines 1203), and ridge line widths (ridge line width 1204) in the case of a vertical spatial composition.
- these spatial composition-related parameters may be set by user operations (for example, but not limited to, designation, selection, correction, and registration) via the spatial composition user IF unit 111.
- FIG. 20 is a flowchart showing the flow of processing until the spatial composition is specified in the spatial composition specifying unit 112.
- first, the spatial composition specifying unit 112 acquires the edge extraction image 301 from the image acquisition unit 101, and extracts spatial composition elements (for example, non-parallel linear objects) from the edge extraction image 301 (S100).
- the spatial composition specifying unit 112 calculates vanishing point position candidates (S102).
- the spatial composition specifying unit 112 sets a horizon (S106). Further, if the position of the vanishing point candidate is not in the original image 201 (S108: No), the vanishing point is extrapolated (S110).
- then, the spatial composition specifying unit 112 creates a spatial composition template including the elements constituting the spatial composition, centered on the vanishing point (S112), and performs template matching (also simply “TM”) between the created spatial composition template and the spatial composition components (S114).
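The vanishing point candidate calculation (S102) can be illustrated as pairwise intersection of the extended line elements. This is a standard construction, sketched here under the assumption that line elements are given as point pairs; it naturally yields candidates outside the image, matching the extrapolation step S110.

```python
import itertools
import numpy as np

def vanishing_point_candidates(segments):
    # Each segment is ((x1, y1), (x2, y2)); intersect every pair of
    # extended, non-parallel lines (S102 in FIG. 20).
    candidates = []
    for (p1, p2), (p3, p4) in itertools.combinations(segments, 2):
        d1 = np.subtract(p2, p1)
        d2 = np.subtract(p4, p3)
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:            # parallel lines: no intersection
            continue
        t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
        candidates.append((p1[0] + t * d1[0], p1[1] + t * d1[1]))
    return candidates
```

Clustering the candidates (for example, by proximity) would then give the vanishing point positions used for the horizon setting (S106).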
- as the object extraction method, any method used in existing image processing or image recognition can be used. For example, a person can be extracted based on template matching, neural networks, color information, and so on. Segments and areas divided by segmentation or region division can also be regarded as objects. If the still image is part of a moving image or a series of continuous still images, the object can be extracted from the preceding and following frame images.
- the extraction method and the extraction target are not limited to these and are arbitrary.
- the templates and parameters for object extraction described above are stored in the object template storage unit 120, and can be read out and used according to the situation. New templates and parameters can also be input to the object template storage unit 120.
- the object user IF unit 121 provides an interface for all the work the user wants to do, such as selecting a method for extracting an object (template matching, neural networks, color information, etc.), selecting an object from presented candidates, selecting the object itself, modifying results, adding templates, and adding object extraction methods.
- FIG. 7A is a diagram showing the extracted object
- FIG. 7B is an example of an image obtained by combining the extracted object and the determined spatial composition.
- in the object extraction example 610, the main person images are extracted from the original image 201 as objects 601, 602, 603, 604, 605, and 606.
- Depth information synthesis example 611 is a combination of each object and spatial composition.
- the three-dimensional information generation unit 130 can generate the three-dimensional information by arranging the extracted objects in the spatial composition as described above. Note that the three-dimensional information can be input or corrected in accordance with user instructions received via the three-dimensional information user IF unit 131.
- the image generation unit 170 newly sets a virtual viewpoint and generates an image different from the original image.
- FIG. 22 is a flowchart showing the flow of processing in the three-dimensional information generation unit 130 described above.
- first, the three-dimensional information generation unit 130 generates data relating to the planes (hereinafter referred to as “composition plane data”) from the spatial composition information (S300).
- next, the three-dimensional information generation unit 130 calculates a contact point between an extracted object (also referred to as “Obj”) and the composition planes (S302). If there is no contact point between the object and the ground plane (S304: No), and there is also no contact with a wall or the top surface (S306: No), the position in the space is set assuming that the object is in the foreground (S308). In other cases, the contact coordinates are calculated (S310), and the position of the object in the space is calculated (S312).
- further, the three-dimensional information generation unit 130 incorporates the correction contents related to the object made in the information correction unit 141 (S318 to S324), and completes the generation of the three-dimensional information (S326).
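The position calculation from the contact coordinates (S310 to S312) can be illustrated with a simple ground-plane perspective model. The patent does not fix a formula, so the pinhole relation below, with an assumed camera height and focal length in pixels, is only one plausible realization.

```python
def depth_from_contact(contact_y: float, horizon_y: float,
                       camera_height: float, focal_px: float) -> float:
    # For a ground-plane point imaged dy pixels below the horizon, the
    # pinhole camera relation gives depth z = f * H / dy (a sketch under
    # the stated assumptions, not the patent's prescribed formula).
    dy = contact_y - horizon_y
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon")
    return focal_px * camera_height / dy
```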
- here, a virtual viewpoint position 701 is considered as the viewpoint position in the space, and a virtual viewpoint direction 702 is set as the viewpoint direction.
- for example, when a viewpoint such as that at the virtual viewpoint position 701 and the virtual viewpoint direction 702 is set for the depth information synthesis example 810 (the same as depth information synthesis example 611) in Fig. 9 (that is, when the scene is viewed from a slightly advanced position in the horizontal direction), an image like the viewpoint change image generation example 811 can be generated.
- FIG. 15 shows an example of an image assuming a viewpoint position and direction for an image having certain stereoscopic information.
- An image example 1412 is an image example at the time of the image position example 1402.
- An image example 1411 is an image example at the time of the image position example 1401.
- the viewpoint position and the viewpoint object are schematically represented by the viewpoint position 1403 and the viewpoint object 1404.
- FIG. 15 can also be used as an example in which an image is generated by setting a virtual viewpoint for an image having certain stereoscopic information. If the still image used to acquire the stereoscopic information is the image example 1412, it can be said that the image obtained when the viewpoint position 1403 and the viewpoint target 1404 are set for the stereoscopic information extracted from the image example 1412 is the image example 1412 itself.
- FIG. 16 shows an image example 1511 and an image example 1512 as image examples corresponding to the image position example 1501 and the image position example 1502, respectively. At this time, parts of the image examples may overlap; the image common part 1521 corresponds to this.
- in this way, an image can of course be generated by applying viewpoint changes, focus, zoom, pan, and the like to the inside and outside of the stereoscopic information, or by applying transitions and effects.
- the generated result need not simply be a moving image or still images shot in the three-dimensional space with a virtual camera; still images can be cut out, and videos or still images can be connected to each other with camera work and effects (or in a mixed video/still-image situation) while keeping common parts such as the image common part 1521 in correspondence.
- Figure 17 shows an example in which images having common parts (that is, the parts shown in bold frames) are transitioned using transitions, image transformations (affine transformations, etc.), effects such as morphing, camera angle changes, and camera parameter changes. Identification of the common parts is easily possible from the stereoscopic information, and conversely, the camera work can be set so as to produce common parts.
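For instance, a transition between two generated views that share a common part could be realized with a plain cross-dissolve, as in the sketch below; morphing or affine warping of the common part, as mentioned above, would be layered on top of this simple blend.

```python
import numpy as np

def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, steps: int = 30):
    # Linear cross-dissolve between two rendered views (uint8 images of
    # the same shape); yields one blended frame per step.
    for i in range(steps + 1):
        alpha = i / steps
        yield ((1 - alpha) * frame_a + alpha * frame_b).astype(np.uint8)
```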
- FIG. 21 is a flowchart showing the flow of processing in the viewpoint control unit 181 described above.
- the viewpoint control unit 181 sets the start point and end point of camera work (S200).
- here, the start point of the camera work is set approximately at the front of the virtual space, and the end point is set at a point closer to the vanishing point than the start point.
- a predetermined database or the like may be used for setting the start point and end point.
- next, the viewpoint control unit 181 determines the destination and direction of movement of the camera (S202), and determines the movement method (S204). For example, the camera moves from the near side toward the vanishing point while passing through the vicinity of each object. It may move not simply in a straight line but in a spiral, and the speed may be changed during the movement.
- furthermore, the viewpoint control unit 181 actually moves the camera for a predetermined distance (S206 to S224). During this time, if a camera effect such as a camera pan is to be executed (S208: Yes), a predetermined effect subroutine is executed (S212 to S218).
- the viewpoint control unit 181 then sets the next movement destination (S228) and repeats the above processing (S202 to S228).
- the viewpoint control unit 181 ends the camera work when the camera moves to the end point.
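A minimal sketch of such a camera path follows, assuming piecewise-linear motion through object waypoints from the start point to an end point near the vanishing point; the spiral motion and speed changes mentioned above would replace the straight-line interpolation.

```python
import numpy as np

def camera_path(start, end, waypoints, steps: int = 120):
    # Move from the start point through each object waypoint to the end
    # point near the vanishing point, yielding one camera position per
    # step (piecewise-linear; an illustrative assumption).
    knots = [np.asarray(start)] + [np.asarray(w) for w in waypoints] + [np.asarray(end)]
    per_leg = max(steps // (len(knots) - 1), 1)
    for a, b in zip(knots[:-1], knots[1:]):
        for i in range(per_leg):
            t = i / per_leg
            yield (1 - t) * a + t * b
```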
- for the camera work related to these image generations, a predetermined viewpoint change template prepared in a database, such as the viewpoint change template storage unit 180, can be used instead of repeating the above procedure. A new viewpoint change template may be added to the viewpoint change template storage unit 180, and viewpoint change templates may be edited and used. The viewpoint position can also be determined by user instructions via the viewpoint control user IF unit 182, and viewpoint change templates can be created, edited, added, and deleted.
- similarly, for effects, a predetermined effect/style template prepared in a database can be used, as in the style/effect template storage unit 160. Effect/style templates may be added to the style/effect template storage unit 160, and existing effect/style templates may be edited. Effects may also be determined by user instructions via the effect user IF unit 162, and effect/style templates can be created, edited, added, and deleted.
- in addition, camera work that depends on the object, such as approaching the object, closing up on the object, or circling around the object, can be set taking the object into consideration. Needless to say, the same applies to creating images that depend on the object with effects other than camera work.
- spatial composition can be taken into account when setting camera work.
- the same applies to effects.
- the processing that takes the common parts described above into account is an example of camera work or effects using both the spatial composition and the objects. Whether the generated image is a moving image or a still image, any existing camera work, camera angle, camera parameter, image transformation, transition, and the like using the spatial composition and objects can be used.
- FIGS. 18 (a) and 18 (b) are diagrams showing an example of camera work.
- the camera movement example 1700 showing a camera work trajectory in Fig. 18 (a) shows a case where the virtual camera starts imaging from the start viewpoint position 1701 and moves along the camera movement line 1708. It passes in order through the viewpoint position 1702, the viewpoint position 1703, the viewpoint position 1704, the viewpoint position 1705, and the viewpoint position 1706, and the camera work ends at the end viewpoint position 1707. At the start viewpoint position 1701, the start viewpoint area 1710 is photographed, and at the end viewpoint position 1707, the end viewpoint area 1711 is photographed. The camera movement ground projection line 1709 is obtained by projecting the camera movement during this time onto the plane corresponding to the ground.
- in the camera movement example 1750 in Fig. 18 (b), the camera moves from the start viewpoint position 1751 to the end viewpoint position 1752, imaging the start viewpoint area 1760 and the end viewpoint area 1761, respectively.
- the movement of the camera during this time is shown schematically by the camera movement line 1753.
- the locus of the camera movement line 1753 projected on the ground and the wall surface is indicated by a camera movement ground projection line 1754 and a camera movement wall projection line 1755, respectively.
- in image generation, an image can be generated at any timing while moving along the camera movement line 1708 or the camera movement line 1753 (needless to say, the result can be a moving image, still images, or a mixture of both).
- the camera work setting image generation unit 190 generates an image viewed from the current camera position and presents it to the user as a reference when the user decides the camera work.
- an example of this is shown in the camera image generation example 1810 in FIG. 19, where the image obtained when the shooting range 1805 is shot from the current camera position 1803 is displayed as the current camera image 1804.
- FIGS. 14 (a) and 14 (b) are diagrams showing an example in the case of combining a plurality of three-dimensional information.
- for example, FIG. 14 (a) shows a case where the current image data object A 1311 and the current image data object B 1312 appear in the current image data 1301, and the past image data object A 1313 and the past image data object B 1314 appear in the past image data 1302. In this case, the two sets of image data can be combined in the same three-dimensional space.
- a synthesis example in this case is the synthetic three-dimensional information example 1320 shown in FIG. 14 (b).
- composition may be performed from common elements between a plurality of original images. Also, completely different original image data may be synthesized, and the spatial composition may be changed as necessary.
- here, “effect” refers to overall effects applied to images (still images and moving images).
- effects include general nonlinear image processing methods, as well as effects that can be given at the time of shooting by changing the camera work, camera angle, and camera parameters.
- processing that can be performed with general digital image processing software is also included.
- placing music and onomatopoeia according to the image scene also falls within the category of effects.
- when effects are written together with other terms that express effects included in the definition of effects, such as camera angles, this is not intended to emphasize the written effects or to narrow the category of effects.
- the thickness information about the extracted object may be missing.
- in that case, it is also possible to set an appropriate value as the thickness based on the depth information, by an arbitrary method such as calculating the relative depth from the depth information and setting the thickness appropriately from the size.
- alternatively, a template or the like may be prepared in advance, the object may be recognized, and the recognition result may be used for setting the thickness. For example, if the object is recognized as an apple, the thickness may be set to a size corresponding to an apple, and if it is recognized as a car, the thickness may be set to a size corresponding to a car.
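A recognition-driven thickness assignment of this kind reduces to a lookup; the labels, values, and units below are purely illustrative assumptions.

```python
# Hypothetical mapping from a recognition label to an object thickness
# (values in metres, an assumed unit; all entries are illustrative).
THICKNESS_BY_TYPE = {
    "person": 0.3,
    "apple": 0.08,
    "car": 1.8,
}

def thickness_for(label: str, default: float = 0.5) -> float:
    return THICKNESS_BY_TYPE.get(label, default)
```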
- the vanishing point may be set in the object. Even objects that are not actually at infinity can be treated as being at infinity.
- when mapping the extracted object to the three-dimensional information, it may be rearranged at an appropriate position in the depth information; it is not always necessary to map it to a position faithful to the original image data, for example, in order to make effects easier to apply.
- information corresponding to the back side of the object may be appropriately given. Information on the back side of the object may not be obtainable from the original image, but it may be set based on the information on the front side; for example, the image information corresponding to the front of the object (in terms of three-dimensional information, the information corresponding to textures, polygons, etc.) may be copied to the back of the object. Of course, the information on the back side may also be set with reference to other objects and other spatial information.
- any smoothing process may be performed to make the object and background appear smoother.
- the camera parameters may be changed based on the positions of objects arranged three-dimensionally as spatial information.
- when generating an image, focus information (out-of-focus information) may be generated based on the camera position and the depth of the object position or the spatial composition, and an image with a sense of perspective may be generated.
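One plausible realization, assuming a per-pixel depth map is available from the generated three-dimensional information: blur strength grows with distance from an assumed focal plane, quantized here into a few Gaussian passes for simplicity. A real lens model is not specified by the patent.

```python
import cv2
import numpy as np

def depth_blur(image: np.ndarray, depth: np.ndarray, focus_z: float,
               max_kernel: int = 21) -> np.ndarray:
    # Blur each pixel more the farther its depth is from the focal
    # plane focus_z, using three quantized Gaussian blur levels.
    out = image.copy()
    spread = np.abs(depth - focus_z)
    levels = np.minimum((spread / (spread.max() + 1e-6) * 3).astype(int), 3)
    for lvl in range(1, 4):
        k = 2 * lvl * (max_kernel // 6) + 1          # odd kernel size
        blurred = cv2.GaussianBlur(image, (k, k), 0)
        out[levels == lvl] = blurred[levels == lvl]
    return out
```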
- at that time, only the object may be blurred, or the object and its surroundings may be blurred.
- although the functional configuration has been described with separate IF units, such as the viewpoint control user IF unit 182, a single IF unit having the functions of each of the IF units described above may be provided.
- the present invention can be applied to devices such as a microcomputer, a digital camera, and a camera-equipped mobile phone, and can be used for an image processing apparatus that generates a stereoscopic image from a still image.
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/629,618 US20080018668A1 (en) | 2004-07-23 | 2005-07-22 | Image Processing Device and Image Processing Method |
JP2006519641A JP4642757B2 (ja) | 2004-07-23 | 2005-07-22 | 画像処理装置および画像処理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-215233 | 2004-07-23 | ||
JP2004215233 | 2004-07-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006009257A1 true WO2006009257A1 (ja) | 2006-01-26 |
Family
ID=35785364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/013505 WO2006009257A1 (ja) | 2004-07-23 | 2005-07-22 | 画像処理装置および画像処理方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20080018668A1 (ja) |
JP (1) | JP4642757B2 (ja) |
CN (1) | CN101019151A (ja) |
WO (1) | WO2006009257A1 (ja) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009015583A (ja) * | 2007-07-04 | 2009-01-22 | Nagasaki Univ | 画像処理装置及び画像処理方法 |
JP2013506198A (ja) * | 2009-09-25 | 2013-02-21 | イーストマン コダック カンパニー | デジタル画像の美的品質の推定方法 |
JP2013037510A (ja) * | 2011-08-08 | 2013-02-21 | Juki Corp | 画像処理装置 |
CN103063314A (zh) * | 2012-01-12 | 2013-04-24 | 杭州美盛红外光电技术有限公司 | 热像装置和热像拍摄方法 |
CN103105234A (zh) * | 2012-01-12 | 2013-05-15 | 杭州美盛红外光电技术有限公司 | 热像装置和热像规范拍摄方法 |
JP2015039490A (ja) * | 2013-08-21 | 2015-03-02 | 株式会社三共 | 遊技機 |
WO2018051688A1 (ja) * | 2016-09-15 | 2018-03-22 | キヤノン株式会社 | 仮想視点画像の生成に関する情報処理装置、方法及びプログラム |
US9948913B2 (en) | 2014-12-24 | 2018-04-17 | Samsung Electronics Co., Ltd. | Image processing method and apparatus for processing an image pair |
CN108171649A (zh) * | 2017-12-08 | 2018-06-15 | 广东工业大学 | 一种保持焦点信息的图像风格化方法 |
JP2019096996A (ja) * | 2017-11-21 | 2019-06-20 | キヤノン株式会社 | 情報処理装置、情報処理方法、及びプログラム |
JP2022069007A (ja) * | 2020-10-23 | 2022-05-11 | 株式会社アフェクション | 情報処理システム、情報処理方法および情報処理プログラム |
Families Citing this family (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8559705B2 (en) | 2006-12-01 | 2013-10-15 | Lytro, Inc. | Interactive refocusing of electronic images |
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US20100265385A1 (en) * | 2009-04-18 | 2010-10-21 | Knight Timothy J | Light Field Camera Image, File and Configuration Data, and Methods of Using, Storing and Communicating Same |
US8117137B2 (en) | 2007-04-19 | 2012-02-14 | Microsoft Corporation | Field-programmable gate array based accelerator system |
US20080310707A1 (en) * | 2007-06-15 | 2008-12-18 | Microsoft Corporation | Virtual reality enhancement using real world data |
US8264505B2 (en) * | 2007-12-28 | 2012-09-11 | Microsoft Corporation | Augmented reality and filtering |
TW200948043A (en) * | 2008-01-24 | 2009-11-16 | Koninkl Philips Electronics Nv | Method and image-processing device for hole filling |
KR20090092153A (ko) * | 2008-02-26 | 2009-08-31 | 삼성전자주식회사 | 이미지 처리 장치 및 방법 |
US8301638B2 (en) | 2008-09-25 | 2012-10-30 | Microsoft Corporation | Automated feature selection based on rankboost for ranking |
US8131659B2 (en) | 2008-09-25 | 2012-03-06 | Microsoft Corporation | Field-programmable gate array based accelerator system |
US8279325B2 (en) | 2008-11-25 | 2012-10-02 | Lytro, Inc. | System and method for acquiring, editing, generating and outputting video data |
WO2010077625A1 (en) | 2008-12-08 | 2010-07-08 | Refocus Imaging, Inc. | Light field data acquisition devices, and methods of using and manufacturing same |
US8624962B2 (en) * | 2009-02-02 | 2014-01-07 | Ydreams—Informatica, S.A. Ydreams | Systems and methods for simulating three-dimensional virtual interactions from two-dimensional camera images |
JP5257157B2 (ja) * | 2009-03-11 | 2013-08-07 | ソニー株式会社 | 撮像装置、撮像装置の制御方法およびプログラム |
US8908058B2 (en) * | 2009-04-18 | 2014-12-09 | Lytro, Inc. | Storage and transmission of pictures including multiple frames |
US8310523B2 (en) * | 2009-08-27 | 2012-11-13 | Sony Corporation | Plug-in to enable CAD software not having greater than 180 degree capability to present image from camera of more than 180 degrees |
EP2513868A4 (en) | 2009-12-16 | 2014-01-22 | Hewlett Packard Development Co | ESTIMATING A 3D STRUCTURE FROM A 2D IMAGE |
JP5424926B2 (ja) * | 2010-02-15 | 2014-02-26 | パナソニック株式会社 | 映像処理装置、映像処理方法 |
US8749620B1 (en) | 2010-02-20 | 2014-06-10 | Lytro, Inc. | 3D light field cameras, images and files, and methods of using, operating, processing and viewing same |
US8666978B2 (en) * | 2010-09-16 | 2014-03-04 | Alcatel Lucent | Method and apparatus for managing content tagging and tagged content |
US8533192B2 (en) | 2010-09-16 | 2013-09-10 | Alcatel Lucent | Content capture device and methods for automatically tagging content |
US8655881B2 (en) | 2010-09-16 | 2014-02-18 | Alcatel Lucent | Method and apparatus for automatically tagging content |
US8768102B1 (en) | 2011-02-09 | 2014-07-01 | Lytro, Inc. | Downsampling light field images |
US9184199B2 (en) | 2011-08-01 | 2015-11-10 | Lytro, Inc. | Optical assembly including plenoptic microlens array |
JP5724057B2 (ja) * | 2011-08-30 | 2015-05-27 | パナソニックIpマネジメント株式会社 | 撮像装置 |
JP5269972B2 (ja) * | 2011-11-29 | 2013-08-21 | 株式会社東芝 | 電子機器及び三次元モデル生成支援方法 |
US8811769B1 (en) | 2012-02-28 | 2014-08-19 | Lytro, Inc. | Extended depth of field and variable center of perspective in light-field processing |
US8948545B2 (en) | 2012-02-28 | 2015-02-03 | Lytro, Inc. | Compensating for sensor saturation and microlens modulation during light-field image processing |
US8995785B2 (en) | 2012-02-28 | 2015-03-31 | Lytro, Inc. | Light-field processing and analysis, camera control, and user interfaces and interaction on light-field capture devices |
US8831377B2 (en) | 2012-02-28 | 2014-09-09 | Lytro, Inc. | Compensating for variation in microlens position during light-field image processing |
US9330466B2 (en) * | 2012-03-19 | 2016-05-03 | Adobe Systems Incorporated | Methods and apparatus for 3D camera positioning using a 2D vanishing point grid |
US9754357B2 (en) * | 2012-03-23 | 2017-09-05 | Panasonic Intellectual Property Corporation Of America | Image processing device, stereoscopic device, integrated circuit, and program for determining depth of object in real space by generating histogram from image obtained by filming real space and performing smoothing of histogram |
CN102752616A (zh) * | 2012-06-20 | 2012-10-24 | Sichuan Changhong Electric Co., Ltd. | Method for converting binocular stereoscopic video into multi-view stereoscopic video |
US9607424B2 (en) | 2012-06-26 | 2017-03-28 | Lytro, Inc. | Depth-assigned content for depth-enhanced pictures |
US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
US10129524B2 (en) | 2012-06-26 | 2018-11-13 | Google Llc | Depth-assigned content for depth-enhanced virtual reality images |
US8997021B2 (en) | 2012-11-06 | 2015-03-31 | Lytro, Inc. | Parallax and/or three-dimensional effects for thumbnail image displays |
US9001226B1 (en) | 2012-12-04 | 2015-04-07 | Lytro, Inc. | Capturing and relighting images using multiple devices |
US8983176B2 (en) * | 2013-01-02 | 2015-03-17 | International Business Machines Corporation | Image selection and masking using imported depth information |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
JP6027705B2 (ja) * | 2014-03-20 | 2016-11-16 | Fujifilm Corporation | Image processing apparatus, method, and program therefor |
US9414087B2 (en) | 2014-04-24 | 2016-08-09 | Lytro, Inc. | Compression of light field images |
US9712820B2 (en) | 2014-04-24 | 2017-07-18 | Lytro, Inc. | Predictive light field compression |
US9336432B2 (en) * | 2014-06-05 | 2016-05-10 | Adobe Systems Incorporated | Adaptation of a vector drawing based on a modified perspective |
US8988317B1 (en) | 2014-06-12 | 2015-03-24 | Lytro, Inc. | Depth determination for light field images |
EP3185747A1 (en) | 2014-08-31 | 2017-07-05 | Berestka, John | Systems and methods for analyzing the eye |
US9635332B2 (en) | 2014-09-08 | 2017-04-25 | Lytro, Inc. | Saturated pixel recovery in light-field images |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc | Spatial random access enabled video system with a three-dimensional viewing volume |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
JP6256509B2 (ja) * | 2016-03-30 | 2018-01-10 | Mazda Motor Corporation | Electronic mirror control device |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
CN110110718B (zh) * | 2019-03-20 | 2022-11-22 | Anhui Mingde Intelligent Technology Co., Ltd. | Artificial intelligence image processing device |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5625408A (en) * | 1993-06-24 | 1997-04-29 | Canon Kabushiki Kaisha | Three-dimensional image recording/reconstructing method and apparatus therefor |
EP0637815B1 (en) * | 1993-08-04 | 2006-04-05 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
US5687249A (en) * | 1993-09-06 | 1997-11-11 | Nippon Telegraph and Telephone Corporation | Method and apparatus for extracting features of moving objects |
US6839081B1 (en) * | 1994-09-09 | 2005-01-04 | Canon Kabushiki Kaisha | Virtual image sensing and generating method and apparatus |
US6640004B2 (en) * | 1995-07-28 | 2003-10-28 | Canon Kabushiki Kaisha | Image sensing and image processing apparatuses |
US6057847A (en) * | 1996-12-20 | 2000-05-02 | Jenkins; Barry | System and method of image generation and encoding using primitive reprojection |
US6229548B1 (en) * | 1998-06-30 | 2001-05-08 | Lucent Technologies, Inc. | Distorting a two-dimensional image to represent a realistic three-dimensional virtual reality |
US6417850B1 (en) * | 1999-01-27 | 2002-07-09 | Compaq Information Technologies Group, L.P. | Depth painting for 3-D rendering applications |
CN1160210C (zh) * | 1999-09-20 | 2004-08-04 | Matsushita Electric Industrial Co., Ltd. | Driving alert device |
JP2001111804A (ja) * | 1999-10-04 | 2001-04-20 | Nippon Columbia Co Ltd | Image conversion apparatus and image conversion method |
KR100443552B1 (ko) * | 2002-11-18 | 2004-08-09 | Electronics and Telecommunications Research Institute | Virtual reality implementation system and method |
2005
- 2005-07-22 US US11/629,618 patent/US20080018668A1/en not_active Abandoned
- 2005-07-22 WO PCT/JP2005/013505 patent/WO2006009257A1/ja active Application Filing
- 2005-07-22 JP JP2006519641A patent/JP4642757B2/ja active Active
- 2005-07-22 CN CNA2005800247535A patent/CN101019151A/zh active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10271535A (ja) * | 1997-03-19 | 1998-10-09 | Hitachi Ltd | Image conversion method and image conversion apparatus |
JP2000030084A (ja) * | 1998-07-13 | 2000-01-28 | Dainippon Printing Co Ltd | Image composition apparatus |
JP2000123196A (ja) * | 1998-09-25 | 2000-04-28 | Lucent Technol Inc | Display technique for three-dimensional virtual reality |
Non-Patent Citations (1)
Title |
---|
Guillou, E., Meneveaux, D., et al.: "Using vanishing points for camera calibration and coarse 3D reconstruction from a single image", The Visual Computer, vol. 16, no. 7, 2000, pp. 396-410, XP002991719 *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009015583A (ja) * | 2007-07-04 | 2009-01-22 | Nagasaki Univ | Image processing apparatus and image processing method |
JP2013506198A (ja) * | 2009-09-25 | 2013-02-21 | Eastman Kodak Company | Method for estimating the aesthetic quality of digital images |
JP2013037510A (ja) * | 2011-08-08 | 2013-02-21 | Juki Corp | Image processing apparatus |
CN103063314A (zh) * | 2012-01-12 | 2013-04-24 | Hangzhou Meisheng Infrared Optoelectronic Technology Co., Ltd. | Thermal imaging device and thermal image capturing method |
CN103105234A (zh) * | 2012-01-12 | 2013-05-15 | Hangzhou Meisheng Infrared Optoelectronic Technology Co., Ltd. | Thermal imaging device and standardized thermal image capturing method |
JP2015039490A (ja) * | 2013-08-21 | 2015-03-02 | Sankyo Co., Ltd. | Gaming machine |
US9948913B2 (en) | 2014-12-24 | 2018-04-17 | Samsung Electronics Co., Ltd. | Image processing method and apparatus for processing an image pair |
WO2018051688A1 (ja) * | 2016-09-15 | 2018-03-22 | Canon Inc. | Information processing apparatus, method, and program relating to generation of virtual viewpoint images |
JP2018046448A (ja) * | 2016-09-15 | 2018-03-22 | Canon Inc. | Image processing apparatus and image processing method |
JP2019096996A (ja) * | 2017-11-21 | 2019-06-20 | Canon Inc. | Information processing apparatus, information processing method, and program |
CN108171649A (zh) * | 2017-12-08 | 2018-06-15 | Guangdong University of Technology | Image stylization method that preserves focus information |
CN108171649B (zh) * | 2017-12-08 | 2021-08-17 | Guangdong University of Technology | Image stylization method that preserves focus information |
JP2022069007A (ja) * | 2020-10-23 | 2022-05-11 | Affection Co., Ltd. | Information processing system, information processing method, and information processing program |
Also Published As
Publication number | Publication date |
---|---|
CN101019151A (zh) | 2007-08-15 |
JPWO2006009257A1 (ja) | 2008-05-01 |
US20080018668A1 (en) | 2008-01-24 |
JP4642757B2 (ja) | 2011-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4642757B2 (ja) | Image processing apparatus and image processing method | |
CN110738595B (zh) | Picture processing method, apparatus, device, and computer storage medium | |
US9489765B2 (en) | Silhouette-based object and texture alignment, systems and methods | |
JP6220486B1 (ja) | Three-dimensional model generation system, three-dimensional model generation method, and program | |
US7903111B2 (en) | Depth image-based modeling method and apparatus | |
US8947422B2 (en) | Gradient modeling toolkit for sculpting stereoscopic depth models for converting 2-D images into stereoscopic 3-D images | |
JP6196416B1 (ja) | Three-dimensional model generation system, three-dimensional model generation method, and program | |
EP3668093B1 (en) | Method, system and apparatus for capture of image data for free viewpoint video | |
US9374535B2 (en) | Moving-image processing device, moving-image processing method, and information recording medium | |
JP2019053732A (ja) | Dynamic generation of scene images based on removal of unwanted objects present in the scene | |
WO1998009253A1 (fr) | Method for providing texture information, object extraction method, three-dimensional model production method, and apparatus associated therewith | |
JP2014178957A (ja) | Learning data generation device, learning data creation system, method, and program | |
CN104735435B (zh) | Image processing method and electronic device | |
US8436852B2 (en) | Image editing consistent with scene geometry | |
CN111008927B (zh) | Face replacement method, storage medium, and terminal device | |
JP2011048586A (ja) | Image processing apparatus, image processing method, and program | |
JP2010237804A (ja) | Image retrieval system and image retrieval method | |
JP2010287174A (ja) | Furniture simulation method, apparatus, program, and recording medium | |
CN111724470B (zh) | Processing method and electronic device | |
JP6272071B2 (ja) | Image processing apparatus, image processing method, and program | |
Park | Interactive 3D reconstruction from multiple images: A primitive-based approach | |
KR101566459B1 (ko) | Concave surface modeling in image-based visual hull | |
Zhong et al. | Slippage-free background replacement for hand-held video | |
JP2020173726A (ja) | Virtual viewpoint conversion device and program | |
CN115359169A (zh) | Image processing method, apparatus, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents |
Kind code of ref document: A1
Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
WWE | Wipo information: entry into national phase |
Ref document number: 2006519641
Country of ref document: JP |
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application |
WWE | Wipo information: entry into national phase |
Ref document number: 11629618
Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 200580024753.5
Country of ref document: CN |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: PCT application non-entry in European phase |
WWP | Wipo information: published in national office |
Ref document number: 11629618
Country of ref document: US |