WO2014070120A2 - Method of interaction using augmented reality - Google Patents

Method of interaction using augmented reality

Info

Publication number
WO2014070120A2
WO2014070120A2 (PCT/SK2013/050009)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual objects
optical information
imaging
augmented reality
camera
Prior art date
Application number
PCT/SK2013/050009
Other languages
French (fr)
Other versions
WO2014070120A3 (en)
Inventor
Andrej GRÉK
Original Assignee
Grék Andrej
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Grék Andrej filed Critical Grék Andrej
Publication of WO2014070120A2 publication Critical patent/WO2014070120A2/en
Publication of WO2014070120A3 publication Critical patent/WO2014070120A3/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002 Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005 Input arrangements through a video camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A63F2300/1093 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera using visible light
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/20 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform
    • A63F2300/204 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform the platform being a handheld device
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/69 Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F2300/695 Imported photos, e.g. of the player
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Definitions

  • The present invention relates to an application of augmented reality, and more precisely to a method of interaction using augmented reality and a corresponding augmented reality system, in which the user is provided an augmented reality experience with high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
  • This method of interaction enables the user of the augmented reality software to view a physical object of a specific planar spatial configuration, which carries optical information, using devices that include cameras in such a way that the user positions a camera in close proximity to the physical object, while maintaining the ability of the augmented reality software to recognize the physical object by means of a software module for recognizing objects carrying optical information.
  • The ratio of the surface area of the physical object positioned in the camera's field of view to the total area of the physical object is significantly reduced when viewing the physical object this way and can be less than 0.01; for example, a visible patch of 20 cm × 30 cm on a 2 m × 3 m surface gives 0.06 m² / 6 m² = 0.01.
  • As a result, the physical object can be enlarged, the number of augmented reality software users viewing a single physical object simultaneously becomes highly scalable, and a high level of detail of virtual objects or small parts of large-scale virtual objects can be displayed to the user.
  • Augmented reality is a virtual reality technology in which physical objects in a camera's field of view are transmitted to a computing device, where the incoming video stream from the camera is enriched by the addition of virtual objects in the form of computer graphics, computer-generated imagery or other optical information, after which a composite video stream is transmitted to a displaying device.
  • Some augmented reality applications utilize physical objects as triggers for displaying virtual objects, and these triggers must be largely or completely visible in the camera's field of view in order to be considered recognized by the object recognition software module. When recognized, the trigger is overlapped with virtual objects in such a way that the user sees, on a displaying device, the virtual objects together with the physical space and physical objects from the camera's video stream that were not overlapped by any virtual objects.
  • Some augmented reality applications include physical objects in planar spatial configuration of various sizes to be used as triggers. In several cases, these objects have no other general role than that of triggering physical objects for an augmented reality system and the consequent function of carrying optical information, regardless of whether a single physical object or multiple objects are concerned. These objects are always regarded as separate objects, with each object corresponding to a single data file of optical characteristics. They are not fragmented into multiple parts of optical information and are therefore recognizable only as whole objects, without any capability to individually recognize certain parts of a physical object. In multiple augmented reality applications, these physical objects are considered objects intended to be largely or completely visible in a camera's field of view during interaction using augmented reality, with the camera positioned at a corresponding distance.
  • These augmented reality applications lack the capability of using a camera to view physical objects that are not intended to be largely or completely visible in a camera's field of view, and that are therefore not intended to be viewed using a camera positioned in close proximity to the physical object, where the ratio of the surface area of the physical object positioned in the camera's field of view to the total area of the physical object is less than 0.01.
  • These augmented reality applications therefore lack the capability of delivering high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects by having the user position a camera in close proximity to the viewed physical object.
  • Since the positioning of a camera during viewing of a physical object is determined by the user, proper functioning of augmented reality software depends on whether the user is able to position the camera so that the physical object is largely or completely in its field of view. If the physical object is larger than a size allowing the user, in a given situation, to position it largely or completely into the field of view of the camera from a comfortable position or a position natural for a given activity, the triggering physical object will not be recognized.
  • If devices that include cameras are positioned on or near the user's body and a camera is used to search for physical objects in its field of view, as is common practice, the user must be at a certain distance from physical objects for them to be recognizable.
  • In order for the augmented reality application to allow direct control over the view of virtual objects, the user needs the ability to view virtual objects by changing the spatial relationship between the camera and the object. The user carries this out by changing the position and rotation, in physical space, of either himself or the physical object.
  • To allow this, the physical object must be positioned at a comfortable distance from the user's body when it is to be manipulated by the user, or it must be below a certain maximum size when the user needs to move around it in order to manipulate the virtual object that overlaps it.
  • When a virtual object contains a high level of detail or is large-scale, and only a low level of detail or large parts of virtual objects are visible from a range at which the surface area of the triggering physical object is largely or completely visible in a camera's field of view, achieving a greater level of detail of virtual objects by simply moving a camera closer to a physical object is not possible with current augmented reality applications.
  • In this case the physical object recognition software module is also not capable of recognizing the physical object, because not enough of it is present in the camera's field of view.
  • Some augmented reality applications which allow direct control over the view of virtual objects by manipulating the spatial relationship between a physical object carrying optical information and a user-operated camera are also capable of simultaneous operation by multiple users. At least one physical object is then shared by multiple users.
  • The mentioned problems can be avoided by using the present invention.
  • The present invention provides a method of interaction using augmented reality and a corresponding system, which allows users to view a physical object carrying optical information in a specific spatial configuration of a planar shape, using devices that include cameras in such a way that the user can position a surface area of the physical object in the field of view of a camera so that the ratio of the surface area in the field of view of the camera to the total area of the physical object can be less than 0.01, while the ability of the software module for recognizing objects to recognize the physical object is maintained. Consequently, a computing device is able to add virtual objects in the form of computer graphics, computer-generated imagery or other optical information by overlapping the incoming video stream, and to transmit the composite video stream to a displaying device.
  • the present invention provides an augmented reality experience with a high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
  • the present invention is implemented as a method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
  • A triggering physical object in the form of a device carrying optical information is positioned by the user into the field of view of a camera of a device that includes cameras, which is used to view its surface area.
  • the device that includes cameras transmits the video stream from the camera used to view the device carrying optical information into the computing device that is running augmented reality software, which includes a software module for recognizing objects.
  • Optical characteristics of the surface area of the device carrying optical information, which is in the field of view of the camera, are compared with optical characteristics stored in data to which the augmented reality software has been provided access.
  • Optical characteristics of the surface area of the device carrying optical information are recognized by the software module for recognizing objects of the augmented reality software running on the computing device. Based on the recognition of the triggering physical object in the form of a device carrying optical information, the augmented reality software can determine the spatial relationship between this device and the camera of the device that includes cameras, and use it to calculate and determine a correct placement of virtual objects in the video stream. Virtual objects are placed into the video stream that is transmitted to the computing device from the device that includes cameras, and the composite video stream is transmitted to the displaying device, which displays it. Physical and virtual objects may or may not be visible in this imaging.
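The compare-recognize-place steps just described can be pictured with a brief sketch. This is a hypothetical illustration, not the patent's implementation: it assumes OpenCV's ORB features, a precomputed file of reference keypoints and descriptors (the file name `marker_features.npz` is invented), and a flat overlay image standing in for the virtual objects.

```python
import cv2
import numpy as np

# Hypothetical stored data: keypoint pixel positions in a reference image
# of the device carrying optical information, plus their ORB descriptors.
ref = np.load("marker_features.npz")
ref_pts, ref_desc = ref["points"], ref["descriptors"]

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def augment(frame, overlay):
    """Recognize the marker in `frame` and composite `overlay` onto it.

    `overlay` is rendered at the reference image's resolution, so the
    homography that maps reference pixels to frame pixels also places it."""
    kp, desc = orb.detectAndCompute(frame, None)
    if desc is None:
        return frame                          # nothing recognizable in view
    matches = matcher.match(ref_desc, desc)   # compare with the stored data
    if len(matches) < 10:
        return frame                          # marker not recognized
    src = np.float32([ref_pts[m.queryIdx] for m in matches])
    dst = np.float32([kp[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return frame
    # The homography encodes the spatial relationship between the camera
    # and the marker; use it to place virtual content into the video stream.
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(overlay, H, (w, h))
    mask = cv2.threshold(cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY),
                         1, 255, cv2.THRESH_BINARY)[1]
    background = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
    return cv2.add(background, warped)
```

A full implementation would estimate a six-degree-of-freedom pose (for example with solvePnP, as sketched further below) rather than a flat homography; the sketch keeps to a planar overlay for brevity.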
  • The user performs further input signals for controlling functions of the augmented reality software on a controlling device, without any manipulation of the device carrying optical information or other triggering physical objects; as a result, a single device carrying optical information can be used simultaneously by multiple users.
  • The user changes the spatial relationship between the camera and the device carrying optical information in order to change the view of the virtual objects displayed on the displaying device, while the parts of virtual objects included in the imaging on one user's displaying device are completely independent of other simultaneous users.
  • The camera of the device that includes cameras is further positioned in close proximity to the device carrying optical information, so that the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area is less than 0.01.
  • With such a spatial relationship in effect, the augmented reality software is able to recognize at least one part of the device carrying optical information. Based on the recognition of optical characteristics of at least one part of the device carrying optical information, with concurrent placement of the camera of the device that includes cameras into such a spatial relation with this device that the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area is less than 0.01, high level of detail imaging of virtual objects or imaging of small parts of large-scale virtual objects can be displayed.
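The 0.01 ratio in this step can be checked at run time from the same homography used for recognition. A minimal sketch, assuming the close-proximity case where the camera's entire view falls inside the marker, so that the frame corners back-projected onto the marker plane bound the visible region:

```python
import numpy as np
import cv2

def visible_area_ratio(H, frame_w, frame_h, marker_w, marker_h):
    """Ratio of the marker area visible in the frame to its total area.

    `H` maps marker-plane coordinates to frame pixels (e.g. from
    cv2.findHomography); its inverse sends the frame corners back onto
    the marker plane, where the shoelace formula gives the visible area."""
    corners = np.float32([[0, 0], [frame_w, 0],
                          [frame_w, frame_h], [0, frame_h]]).reshape(-1, 1, 2)
    on_marker = cv2.perspectiveTransform(corners, np.linalg.inv(H)).reshape(-1, 2)
    x, y = on_marker[:, 0], on_marker[:, 1]
    visible = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return visible / (marker_w * marker_h)

# Close proximity in the sense of this method, e.g. a 1920x1080 frame
# against a surface measured as 3000 mm x 2000 mm:
# visible_area_ratio(H, 1920, 1080, 3000, 2000) < 0.01
```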
  • The present invention is implemented as a method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects, where the method for providing access to data for augmented reality software running on a computing device is characterised by division of the total surface area of the device carrying optical information, which is in a specific spatial configuration of a planar shape, into multiple separate segments of optical information.
  • Each segment of optical information represents a quadrilateral shape of a variable size, and the number of divisions carried out is arbitrary so long as at least one division is made.
  • Each segment of optical information which results from the performed divisions must contain a sufficient amount of optical information in order to be recognizable by the augmented reality software and the software module for recognizing objects independently of the other segments.
  • Each segment of optical information of the device carrying optical information is analysed by the software for analysis of optical characteristics, based on which a separate data file of optical characteristics for each segment of optical information is generated.
  • Each data file contains optical characteristics of a corresponding segment of optical information, which is a result of the performed divisions of the surface area of the device carrying optical information.
  • This arrangement is performed in such a way that the spatial configuration of segments of optical information in a data file or a database corresponds to the spatial configuration of the segments of optical information on the surface of the device carrying optical information.
  • These data files or databases are used for comparing optical characteristics of the device carrying optical information with the data and for recognizing the optical characteristics of this device in the data.
  • As a result, the augmented reality software retains the ability to recognize at least one segment of the device carrying optical information even with concurrent positioning of a camera of the device that includes cameras in close proximity to the device carrying optical information.
  • The ratio of the surface area of the device carrying optical information in the field of view of a camera to its total area can then be less than 0.01, allowing a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects to be displayed to the user.
  • The created data files or databases with these properties are provided to the augmented reality software. This provision can be performed over a network or an Internet transmission, and can also be carried out by transmission to a storage medium physically attached to the computing device running the augmented reality software.
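The data-provision steps above can be pictured with the following sketch. The grid dimensions, the ORB detector, and the minimum-keypoint threshold are illustrative assumptions; the threshold stands in for the requirement that every segment carry enough optical information to be recognized on its own.

```python
import cv2
import numpy as np

MIN_KEYPOINTS = 50   # assumed proxy for "sufficient optical information"

def build_segment_database(reference_image, rows, cols):
    """Divide the marker's reference image into rows x cols segments and
    produce one record of optical characteristics per segment.

    The (row, col) grid position and pixel offset are stored with each
    record, so the database layout mirrors the spatial configuration of
    the segments on the physical surface."""
    orb = cv2.ORB_create(nfeatures=500)
    h, w = reference_image.shape[:2]
    sh, sw = h // rows, w // cols
    database = []
    for r in range(rows):
        for c in range(cols):
            tile = reference_image[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            kp, desc = orb.detectAndCompute(tile, None)
            if desc is None or len(kp) < MIN_KEYPOINTS:
                raise ValueError(f"segment ({r},{c}) lacks optical information")
            database.append({
                "grid": (r, c),                # spatial configuration
                "offset": (c * sw, r * sh),    # placement on the surface
                "points": np.float32([k.pt for k in kp]),
                "descriptors": desc,
            })
    return database
```

Each record plays the role of one separate data file of optical characteristics; persisting each record (for instance with np.savez) and shipping it over a network or on a storage medium would match the provision step described above.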
  • The present invention is implemented in such a way that the method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects is characterised in that the device carrying optical information comprises a physical object, which serves as a triggering physical object for the augmented reality software.
  • This device is in a specific spatial configuration of a planar shape and is composed of multiple segments of optical information. These segments of optical information correspond to segments of optical information created with the purpose of providing access to data for the augmented reality software and therefore correspond to separate data files of optical characteristics.
  • These segments of optical information are characterised by containing a sufficient amount of unique optical characteristics to make separate recognition of each segment possible, even with concurrent positioning of the camera of a device that includes cameras in close proximity to the device carrying optical information.
  • The augmented reality software then recognizes at least one segment of the device carrying optical information, while the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area can be less than 0.01.
  • This property of the optical segments of the device carrying optical information allows displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects.
  • The device carrying optical information is intended to carry optical information which can, at any given moment, be viewed by a single camera of a device that includes cameras, or be viewed simultaneously by multiple cameras of one or more devices that include cameras.
  • The size of the device carrying optical information can be arbitrary and depends on the particular application of the method of interaction using augmented reality for displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects, and on the number of users who are meant to simultaneously view this device using devices that include cameras in the particular application.
  • The device carrying optical information can be constructed from a single physical object or from multiple physical objects, in which case it may or may not have mechanically secured joints.
  • This device can be further constructed from any material of a group of materials consisting of paper, cardboard, carton, plastic, rubber, metal, glass, wood, and cork.
  • the device carrying optical information can also be displayed using a displaying or a projecting device, where the spatial configuration of the surface on which the device is displayed corresponds to the spatial configuration of the device carrying optical information, i.e. is in a spatial configuration of a planar shape.
  • This spatial configuration may be a configuration of a planar shape, or may only appear to be of a planar shape when viewed by the naked eye.
  • the device carrying optical information in this spatial configuration can be, as a separate unit, included as a component of a physical object which itself is in another spatial configuration.
  • The surface area of the device carrying optical information which is viewed by devices that include cameras contains optical information, which is created on the basis of designs of composition of optical information and is created so that it contains a sufficient amount of optical information in each segment of the device, as a result of which the augmented reality software maintains the capability to recognize each segment individually.
  • the number of such designs is not limited in any way.
  • The present invention is further implemented as an augmented reality system for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
  • This system comprises a device that includes cameras, a computing device running augmented reality software, a displaying device, a controlling device and the device carrying optical information.
  • The device that includes cameras is intended to receive a video stream from a camera and transmit it to the computing device, and is configured to allow the user of the device to modify the spatial relationship between the camera and the device carrying optical information. The user can modify this spatial relationship in order to manipulate the imaging of virtual objects displayed on the displaying device.
  • The device that includes cameras may be a device selected from, or be a part of a device from, a group of devices comprising a tablet, mobile phone, computer, personal digital assistant (PDA), portable music or video player, a displaying device that includes cameras intended to be worn on the head, gaming console, portable gaming console and a camera.
  • The computing device running augmented reality software is configured so that, using the software module for recognizing objects, it recognizes optical characteristics in the video stream from a camera in order to determine correct placement of virtual objects, and is further configured to place these virtual objects into the video stream.
  • the augmented reality software running on the computing device is configured so that it has access to data files of optical characteristics, which it can use to compare and recognize optical characteristics present in the video stream from a camera. By recognizing optical characteristics, the augmented reality software running on the computing device recognizes individual segments of the device carrying optical information.
  • the computing device running the augmented reality software is intended to transmit the video stream from a camera combined with placed virtual objects to the displaying device and is intended to receive and process any input signals from the controlling device, for the purpose of manipulating virtual objects.
  • The computing device may be a device selected from, or be a part of a device from, a group of devices comprising a tablet, mobile phone, computer, personal digital assistant (PDA), portable music or video player, gaming console and a portable gaming console.
  • the displaying device is intended to display the composite video stream with placed virtual objects, which is transmitted to it from a computing device running augmented reality software.
  • The displaying device may be a device selected from, or be a part of a device from, a group of devices comprising a tablet, mobile phone, computer, personal digital assistant (PDA), portable music or video player, a displaying device intended to be worn on the head, gaming console, portable gaming console, projector and a display.
  • The controlling device is intended for the user to perform input signals for the augmented reality software, which are transmitted to the computing device running the augmented reality software, as a result granting the user access to the functions of the augmented reality software that manipulate the virtual objects displayed to the user, and to the other functions of the augmented reality software.
  • The controlling device may be a device selected from, or be a part of a device from, a group of devices comprising a tablet, mobile phone, computer, personal digital assistant (PDA), portable music or video player, gaming console, portable gaming console, controller, keyboard, mouse, touchpad and a spatial sensor for tracking hand or finger movement in space.
  • A controlling device can also be a microphone, or a device that includes a microphone, for capturing sound for the purpose of issuing voice commands for voice-operated control of the functions of the augmented reality software.
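The controlling device's role reduces to routing input signals to the software's functions while leaving the marker untouched, which is what lets several users share one marker. A toy dispatcher sketch, with invented signal names and object fields:

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    position: tuple = (0.0, 0.0, 0.0)   # in marker coordinates
    rotation_deg: float = 0.0
    scale: float = 1.0

def handle_input(obj: VirtualObject, signal: str) -> None:
    """Map controlling-device input signals to virtual-object manipulation.

    Nothing here touches the device carrying optical information, which is
    why one marker can serve several simultaneous users."""
    if signal == "rotate_left":
        obj.rotation_deg -= 15.0
    elif signal == "rotate_right":
        obj.rotation_deg += 15.0
    elif signal == "zoom_in":     # could equally arrive as a voice command
        obj.scale *= 1.25
    elif signal == "zoom_out":
        obj.scale *= 0.8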
  • the device carrying optical information is intended to be viewed by a camera of the device that includes cameras and is in a specific spatial configuration of a planar shape consisting of multiple segments of optical information.
  • These segments of optical information correspond to the segments of optical information created to provide access to data for the augmented reality software and therefore they correspond to separate data files of optical characteristics.
  • These segments contain a sufficient amount of unique optical information to allow recognition of each segment individually, even with concurrent positioning of a camera of a device that includes cameras into close proximity to the device carrying optical information. With such positioning of a camera of a device that includes cameras, this configuration of the device carrying optical information allows the augmented reality software to recognize at least one segment of the device carrying optical information.
  • The ratio of the surface area of the device carrying optical information in the field of view of a camera to its total surface area can then be less than 0.01.
  • This configuration of the device carrying optical information is necessary to fulfil the purpose of displaying a high level of detail imaging of virtual objects or an imaging of small parts of large-scale virtual objects.
  • All devices that comprise this augmented reality system for displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects, except for the device carrying optical information, can be combined in any configuration into a single combined device or into several individual or combined devices.
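The free grouping of the system's components can be summarized in a small configuration sketch (the class and field names are invented for illustration); the tablet embodiment described later fuses the first four roles into one device:

```python
from dataclasses import dataclass

@dataclass
class AugmentedRealitySystem:
    """The five roles of the system; all but the marker may be fused
    into one physical device or split across several."""
    camera_device: str       # supplies the video stream
    computing_device: str    # runs the AR software and object recognition
    displaying_device: str   # shows the composite video stream
    controlling_device: str  # carries the user's input signals
    optical_marker: str      # the device carrying optical information

# The tablet embodiment: four roles in one device, plus a paper marker.
tablet_setup = AugmentedRealitySystem(
    "tablet", "tablet", "tablet", "tablet", "paper marker")
```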
  • The presented method of interaction using augmented reality and the corresponding system for displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects provide the capability to view a device carrying optical information of such a size that it cannot be placed largely or completely into the field of view of a camera of a device that includes cameras from a comfortable position or a position natural for a certain activity.
  • the ratio of the surface area of the device in the field of view of the camera to the total surface area of the device may become less than 0.01.
  • Viewing, which also means changing the displayed imaging of virtual objects, is performed by manipulating the spatial relationship between the camera and the device carrying optical information, without any manipulation of any triggering physical objects by the users.
  • Functions of augmented reality software are made accessible using the controlling device, also without any manipulation of triggering physical objects.
  • the same triggering physical object can be used by multiple users simultaneously.
  • With a camera of a device that includes cameras positioned in close proximity to the device carrying optical information, so that the ratio of the surface area in the field of view of the camera to the total surface area of the device carrying optical information is less than 0.01, a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects can be displayed to the user.
  • Neither such a great size of virtual objects and such a high level of displayed detail, nor the ability to control all functions of augmented reality software concurrently with displaying such a degree of detail, is achievable using the methods, systems and devices of current augmented reality applications.
  • FIG. 1 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of positioning the device carrying optical information into the field of view of a camera of a device that includes cameras;
  • FIG. 2 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of placing virtual objects into the video stream transmitted to a displaying device;
  • FIG. 3 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of performing input signals on a controlling device for controlling functions of augmented reality software;
  • FIG. 4 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of performing a change in the spatial relationship between a camera and a device carrying optical information in order to change the displayed imaging of virtual objects on a displaying device;
  • FIG. 5 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of positioning a camera of a device that includes cameras into close proximity to the device carrying optical information in order to display a high level of detail imaging of virtual objects or an imaging of small parts of large-scale virtual objects;
  • FIG. 6 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a relationship between the positioning of a camera of a device that includes cameras against the device carrying optical information and the displayed size or level of detail of virtual objects displayed on a displaying device;
  • FIG. 7 is a table illustrating scalability of a device carrying optical information of an embodiment of a method of interaction using augmented reality, in terms of a relationship between size of this device and the number of users who can simultaneously view this device using cameras of devices that include cameras;
  • FIG. 8 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of performing a change in the spatial relationship between a camera and a device carrying optical information, in order to change the displayed imaging of the virtual objects on a displaying device by multiple users simultaneously, where the displayed imaging of individual users are independent of each other;
  • FIG. 9 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a spatial relationship between a device carrying optical information and virtual objects displayed on a displaying device;
  • FIG. 10 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of a method of providing access to data for the augmented reality software running on a computing device, during which the division of the total surface area of a device carrying optical information is performed;
  • FIG. 11 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a device carrying optical information with marked segments of optical information, where each segment of optical information corresponds to a separate data file of optical characteristics;
  • FIG. 12 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a configuration of the device carrying optical information consisting of a single physical object without mechanically secured joints with indication of uniqueness of individual segments of optical information;
  • FIG. 13 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of positioning a camera of a device that includes cameras into close proximity to a device carrying optical information, where during such positioning of a camera of a device that includes cameras an augmented reality software recognizes at least one segment of a device carrying optical information;
  • FIG. 14 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an example configuration of devices of an augmented reality system for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
  • FIG. 1 to FIG. 14 describe an embodiment of this method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
  • This embodiment is at the same time an example of carrying out the invention.
  • This method of interaction is defined by several steps, out of which FIG. 1 illustrates an operation of positioning the device carrying optical information 500 into the field of view 120 of a camera 110 of a device that includes cameras 100 performed by the user 600.
  • The result of this operation is that an imaging of a physical object 501, which is a part of the device carrying optical information 500, enters the video stream from the camera 121.
  • the device carrying optical information 500 is in this embodiment, for the purposes of this example, constructed out of paper.
  • A device that includes cameras 100 is configured in this embodiment so that it, together with a computing device 200, a displaying device 300 and a controlling device 400, forms a single combined device.
  • this device is a tablet.
  • FIG. 2 illustrates the next step of this interaction method, which is placement of virtual objects 310 into a video stream from a camera, which creates a composite video stream.
  • The device carrying optical information 500, which is positioned by a user 600 into the field of view of a camera, is then present in the video stream from the camera, and the optical information 510 on the surface of the device carrying optical information 500 is simultaneously displayed in this composite video stream wherever it is not overlapped by an imaging of virtual objects 310.
  • This composite video stream with an imaging of virtual objects 310 and with an imaging of the optical information 520 is displayed on a displaying device 300.
  • FIG. 3 illustrates the next step of this interaction method, where a user 600 performs input signals 401 on a controlling device 400, which control functions of the augmented reality software that manipulate the displayed virtual objects 310.
  • The device carrying optical information 500 is in the field of view of a camera during this operation. This step is characterised mainly in that the user 600 does not perform any changes on the device carrying optical information 500 in order to manipulate the displayed virtual objects 310.
  • FIG. 4 shows how a user 600 can modify an imaging of virtual objects displayed on a displaying device 300 when viewing a device carrying optical information 500 using a device that includes cameras.
  • the user 600 positioned a camera of a device that includes cameras into position 111.
  • optical information 510 of the device carrying optical information 500 displayed on the displaying device 300 is overlapped by an imaging of virtual objects 311.
  • the user 600 performs a change in the positioning of the camera of a device that includes cameras into position 112.
  • A new imaging of virtual objects 312 is displayed to the user on the displaying device 300. In this way a user can modify the displayed imaging of virtual objects.
  • FIG. 5 illustrates the next step of this interaction, during which a device carrying optical information 500 is positioned into the field of view of a camera 120 of a device that includes cameras 100.
  • the camera 110 is positioned into close proximity to the device carrying optical information 500 so that the ratio of the area of this device in the field of view of the camera to the total surface area of the device is less than 0.01.
  • the video stream from the camera only contains imaging of a physical object 501, which is a part of the device carrying optical information 500.
  • FIG. 6 illustrates a relationship between the positioning of a camera 110 of a device that includes cameras 100 and the properties of displayed views of virtual objects in a composite video stream 122 displayed on a displaying device.
  • When the camera 110 of the device that includes cameras 100 is placed into position 113, in which a great amount of the surface area or the whole surface area of the device carrying optical information 500 is in the field of view of the camera 120, an imaging of virtual objects 313 is displayed in which only large parts of large-scale virtual objects and a low level of detail of virtual objects are visible.
  • When the camera 110 of the device that includes cameras 100 is placed into position 114, in which the camera 110 is in close proximity to the device carrying optical information 500, such an area of this device is in the field of view of the camera 120 that the ratio of the area of this device in the field of view of the camera to the total surface area of the device is less than 0.01.
  • The capability of the augmented reality software to recognize the device carrying optical information 500 is then maintained, and an imaging 314 is displayed in the composite video stream 122 in which small parts of large-scale virtual objects and a high level of detail of virtual objects are displayed.
  • FIG. 7 shows a table, where a single device carrying optical information 500 is configured into different sizes, which are multiples of a module of the device carrying optical information 503.
  • This device is scalable in this manner into various sizes, in order to make it possible for multiple users 600 to view the device simultaneously using cameras of devices that include cameras 100.
  • When a device carrying optical information 500 is viewed at a given time by several users, as illustrated in FIG. 8, the views of virtual objects displayed on the displaying devices 300 of individual users are independent of each other.
  • A change of the imaging of virtual objects 313 is carried out by a user 601 by changing the camera position 113 into a new camera position 114, by which he also changes the spatial relationship between the camera and the device carrying optical information 500 without manipulating this device, and acquires a new imaging of virtual objects 314.
  • another user 602 can modify the spatial relationship between the camera and the device carrying optical information 500 independently of this user 601.
  • the user 602 modifies imaging of virtual objects 315 by changing the camera position 115 into a new camera position 116, resulting in a change of its spatial relationship to the device carrying optical information 500, thereby acquiring a new imaging of the virtual objects 316.
  • FIG. 9 illustrates a spatial relationship between a device carrying optical information 500 and virtual objects 320 displayed on a displaying device.
  • The origin of the coordinate system for virtual objects 530 and its orientation determine the placement of the virtual objects; this coordinate system is positioned and oriented so that it coincides with the position and orientation of the coordinate system of the device carrying optical information 540 and its origin.
  • This coordinate system of the device carrying optical information 540 is evaluated by the augmented reality software on the basis of the spatial relationship between the camera and the device carrying optical information.
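A hypothetical computation of this evaluation, continuing the earlier sketches: solvePnP recovers the camera pose from matches inside one recognized segment, and the segment's stored offset re-expresses the correspondences in the whole marker's coordinate system, so that virtual objects anchored at the marker origin render consistently whichever segment is in view.

```python
import cv2
import numpy as np

def marker_pose_from_segment(segment, matched_2d, matched_idx, K, dist):
    """Camera pose in the coordinate system of the whole marker (540),
    estimated from matches inside a single recognized segment.

    segment     -- one record from build_segment_database() above
    matched_2d  -- Nx2 pixel positions of the matched keypoints in the frame
    matched_idx -- indices of those keypoints within the segment's record
    K, dist     -- camera intrinsic matrix and distortion coefficients

    Needs at least four matches; assumes the reference image is scaled so
    that its pixel units correspond to physical units on the surface."""
    ox, oy = segment["offset"]
    # Shift segment-local keypoints by the segment's stored offset, which
    # re-expresses them in the coordinate system of the entire surface.
    planar = segment["points"][matched_idx] + np.float32([ox, oy])
    object_pts = np.hstack([planar, np.zeros((len(planar), 1), np.float32)])
    ok, rvec, tvec = cv2.solvePnP(object_pts, matched_2d.astype(np.float32),
                                  K, dist)
    if not ok:
        return None
    # rvec/tvec now locate the marker origin in camera space, so virtual
    # objects anchored at that origin (530) can be rendered from any
    # recognized segment.
    return rvec, tvec
```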
  • FIG. 10 illustrates an operation of a method of providing access to data for the augmented reality software running on a computing device, of this method of interaction.
  • Data is provided in such a way that the entire surface area of the device carrying optical information 500, containing optical information 510, is divided into separate segments of optical information 550. The number of divisions is arbitrary, as long as at least one division is made.
  • the surface area of this device is divided into several separate segments, where each individual segment of this device 560 contains individual optical information 570.
  • FIG. 11 further illustrates a device carrying optical information 500 divided into individual segments of optical information 580, each of which corresponds to a separate data file of optical characteristics 590. These data files are generated by an analysis of each segment of optical information separately and facilitate the separate recognition of each segment of the device carrying optical information 500.
  • FIG. 12 clearly shows that the device carrying optical information 500, which for the purposes of this example in this embodiment is constructed out of a single physical object without mechanically secured joints 502, is composed of such separate segments of optical information that each segment contains unique optical information 581.
  • FIG. 13 illustrates how, during an operation of this method of interaction in which a user 600 positions a camera 110 of a device that includes cameras 100 in close proximity to a device carrying optical information 500, so that the ratio of the surface area of this device in the field of view of the camera 120 to the total surface area of this device is less than 0.01, the augmented reality software is capable of recognizing at least one segment of optical information. In this particular case, the augmented reality software recognizes an individual segment of optical information 582.
  • FIG. 14 illustrates a configuration of individual devices of the augmented reality system corresponding to this method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
  • a device that includes cameras 100, a computing device 200, a displaying device 300 and a controlling device 400 are configured so that they form a single combined device, a tablet, which a user 600 holds in his hands and independently determines its position in relation to the device carrying optical information 500.
  • The present invention can be applied in several industrial fields, such as education, the entertainment business, cartography or design, because it provides a method of interaction using augmented reality which allows multiple simultaneous users to view a single triggering physical object in the form of a device carrying optical information.
  • Next, it allows positioning of a camera in close proximity to a device carrying optical information, where the ratio of the surface area in the field of view of a camera to the total surface area of the device carrying optical information can be less than 0.01, while at the same time the augmented reality software maintains its ability to recognize the device carrying optical information, as a result of which high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects can be displayed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a method of interaction using augmented reality for displaying a high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects, and a corresponding system. Utilizing this method allows a user to view a device carrying optical information (500) in the form of a physical object of a specific spatial configuration of a planar shape, using a camera (110) of a device that includes cameras (100), in such a way that the user positions the camera in close proximity to the device carrying optical information (114). The ability of the augmented reality software to recognize optical characteristics of this device is maintained, on the basis of which a high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects can be displayed to the user (314).

Description

Method of interaction using augmented reality
Technical Field
The present invention relates to an application of augmented reality, and more precisely to a method of interaction using augmented reality and a corresponding augmented reality system, in which the user is provided an augmented reality experience with high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects. This method of interaction enables the user of the augmented reality software to view a physical object of a specific planar spatial configuration, which carries optical information, using devices that include cameras in such a way that the user positions a camera in close proximity to the physical object, while maintaining the ability of the augmented reality software to recognize the physical object by means of a software module for recognizing objects carrying optical information. The ratio of the surface area of the physical object positioned in the camera's field of view to the total area of the physical object is significantly reduced when viewing the physical object this way and can be less than 0.01. As a result, the physical object can be enlarged, the number of augmented reality software users viewing a single physical object simultaneously becomes highly scalable, and a high level of detail of virtual objects or small parts of large-scale virtual objects can be displayed to the user.
Background Art
Augmented reality is a virtual reality technology in which physical objects in a camera's field of view are transmitted to a computing device, where the incoming video stream from the camera is enriched by the addition of virtual objects in the form of computer graphics, computer-generated imagery or other optical information, after which a composite video stream is transmitted to a displaying device. Some augmented reality applications utilize physical objects as triggers for displaying virtual objects, and these triggers must be largely or completely visible in the camera's field of view in order to be considered recognized by the object recognition software module. When recognized, the trigger is overlapped with virtual objects in such a way that the user sees, on a displaying device, the virtual objects together with the physical space and physical objects from the camera's video stream that were not overlapped by any virtual objects.
The spatial characteristics of virtual objects are therefore dependent on the spatial relationship between the triggering physical object and the camera used for viewing it. Some examples of augmented reality applications include physical objects in planar spatial configuration of various sizes to be used as triggers. In several cases, these objects have no other general role than that of triggering physical objects for an augmented reality system and the consequent function of carrying optical information, regardless of whether a single physical object or multiple objects are concerned. These objects are always regarded as separate objects, with each object corresponding to a single data file of optical characteristics. They are not fragmented into multiple parts of optical information and are therefore recognizable only as whole objects, without any capability to individually recognize certain parts of a physical object. In multiple augmented reality applications, these physical objects are considered objects intended to be largely or completely visible in a camera's field of view during interaction using augmented reality, with the camera positioned at a corresponding distance.
These facts result in the issues, described further below, of associated implementations of augmented reality applications, including implementations such as those described in the international patent publications WO 2006/023268 A2, WO 2011/103272 A2, WO 2011/123192 A1, WO 2009/084782 A1, WO 2012/054063 A1, WO 2012/094605 A1, in the European patent publications EP 2 433 683 A2, EP 2 267 595 A2, EP 2 490 182 A1, and in the United States patent publication US 2007/0242886 A1. The issues are based on the fact that these augmented reality applications lack the capability of using a camera to view physical objects that are not intended to be largely or completely visible in a camera's field of view, and that are therefore not intended to be viewed using a camera positioned in close proximity to the physical object, where the ratio of the surface area of the physical object positioned in the camera's field of view to the total area of the physical object is less than 0.01. These augmented reality applications therefore lack the capability of delivering high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects by having the user position a camera in close proximity to the viewed physical object.
Current augmented reality applications utilizing physical objects carrying optical information for the augmented reality software and the object recognition software module to determine correct displaying of virtual objects are also limited by the maximum size of a physical object. Because physical objects used in current augmented reality applications are intended to be largely or completely in the field of view of a camera, there is a certain minimal value of the ratio of the surface area of the physical object positioned in the camera's field of view to the total area of the physical object attainable in practice. The value of this ratio is never lower than 0.01 if the object is to be recognized, and in most cases a great amount of the surface area or the whole surface area of the physical object must be visible in the field of view of a camera in order for the augmented reality software to work properly. Since the positioning of a camera during viewing of a physical object is determined by the user, proper functioning of augmented reality software depends on whether the user is able to position the camera so that the physical object is largely or completely in its field of view. If the physical object is larger than a size allowing the user, in a given situation, to position it largely or completely into the field of view of the camera from a comfortable position or a position natural for a given activity, the triggering physical object will not be recognized.
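The distance dependence behind this limit can be made concrete with a pinhole-camera model: the side of the surface patch a camera sees grows linearly with distance, so the visible-area ratio falls with the square of how close the camera gets. A back-of-the-envelope helper, assuming a simple rectilinear lens and a face-on view; the field-of-view figures in the example are typical phone-camera values, not values from the patent:

```python
import math

def visible_ratio(distance_m, hfov_deg, vfov_deg, marker_w_m, marker_h_m):
    """Fraction of a large planar marker seen by a pinhole camera held
    face-on at distance_m (valid while the view stays inside the marker)."""
    w = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    h = 2 * distance_m * math.tan(math.radians(vfov_deg) / 2)
    return (w * h) / (marker_w_m * marker_h_m)

# A 3 m x 2 m marker viewed from 15 cm with a 66x52 degree field of view:
# visible_ratio(0.15, 66, 52, 3.0, 2.0) is roughly 0.005, below 0.01.
```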
If one considers that devices that include cameras are positioned somewhere on or near the user's body and a camera is used for searching for physical objects in its field of view, as is common practice, the user must be at a certain distance from physical objects for them to be recognizable. In order for the augmented reality application to allow direct control over the view of virtual objects, the user needs the ability to view virtual objects by changing the spatial relationship between the camera and the object. The user carries this out by changing the position and rotation, in physical space, of either himself or the physical object. To allow the user to perform this, the physical object must be positioned at a comfortable distance from the user's body when it is to be manipulated by the user, or it must be below a certain maximum size when the user needs to move around it in order to manipulate the virtual object that overlaps it. When a virtual object contains a high level of detail or is large-scale, and only a low level of detail or large parts of virtual objects are visible from a range at which the surface area of the triggering physical object is largely or completely visible in a camera's field of view, achieving a greater level of detail of virtual objects by simply moving a camera closer to a physical object is not possible with current augmented reality applications. The physical object recognition software module is in this case also not capable of recognizing the physical object, because not enough of it is present in the camera's field of view. Some augmented reality applications which allow direct control over the view of virtual objects by manipulating the spatial relationship between a physical object carrying optical information and a user-operated camera are also capable of simultaneous operation by multiple users. At least one physical object is then shared by multiple users. These users need to be at a certain distance from the physical object, and the physical object needs to be below a certain maximum size; in situations when each user views a virtual object with a high level of detail or of a large scale, when a higher level of detail can only be achieved by placing a camera closer to the physical object, and when the physical object is of such a size that multiple users approaching it block each other's cameras' fields of view, the virtual object will not be displayed for at least one of the users. In a similar situation, when a single physical object carrying optical information is shared by multiple users and a single user moves the physical object to modify his view of the virtual object, a modification of the spatial relationship occurs between the physical object and the cameras of all users. This way the user modifies not only his own view of the virtual object but the views of all users, thereby preventing simultaneous usage of a single physical object carrying optical information by multiple users. As a result, these applications are not scalable above a certain number of simultaneous users and do not allow all users to have maximum control over the fields of view of the cameras they are operating.
For these reasons, it was necessary to develop a solution that allows a user, or several users simultaneously, to use a camera to view a single physical object carrying optical information that is not intended to be in great amount or completely visible in the field of view of a camera during interaction using augmented reality, and that at the same time can be viewed from a shorter distance, with a camera placed in close proximity to the physical object, where the ratio of the surface area of the physical object carrying optical information in the field of view of a camera to the total surface area of the physical object can be less than 0.01, in order to provide an augmented reality experience with high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
Disclosure of Invention
Technical problem
The mentioned problems can be avoided by using the present invention. The present invention provides a method of interaction using augmented reality, and a corresponding system, which allows users to view a physical object carrying optical information in a specific spatial configuration of a planar shape, using devices that include cameras, in such a way that the user can position a portion of the surface area of the physical object in the field of view of a camera so that the ratio of the surface area in the field of view of the camera to the total surface area of the physical object can be less than 0.01, while the ability of the software module for recognizing objects to recognize the physical object is maintained. Consequently, a computing device is able to add virtual objects in the form of computer graphics, computer-generated imagery or other optical information by overlaying them onto the incoming video stream, and to transmit the resulting composite video stream to a displaying device. In this way, the present invention provides an augmented reality experience with a high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
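By way of illustration only (this is not part of the claimed subject matter), the visibility-ratio condition described above can be sketched in a few lines of Python; the function name and the example dimensions are hypothetical:

```python
def visible_area_ratio(visible_area_cm2: float, total_area_cm2: float) -> float:
    """Ratio of the marker surface currently inside the camera's field of
    view to the marker's total surface area."""
    if total_area_cm2 <= 0:
        raise ValueError("total area must be positive")
    return visible_area_cm2 / total_area_cm2

# Example: a 10 cm x 10 cm patch of a 4 m x 3 m wall-sized marker is in view.
ratio = visible_area_ratio(10 * 10, 400 * 300)
print(round(ratio, 5))   # 0.00083 -- well below the 0.01 threshold
print(ratio < 0.01)      # True: whole-marker trackers would fail here
```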
Other interactions with virtual objects are carried out on a controlling device, without any manipulation of triggering physical objects, so that the views of the triggering physical objects present in the fields of view of the cameras of any participating user's devices are in no way altered or interrupted. In this way, the parts of virtual objects included in an imaging on one user's displaying device are independent of other users. This is because a single sample of optical information from a triggering physical object in a specific spatial configuration of a planar shape can come from any part of the physical object carrying optical information, and may or may not include parts that are in the fields of view of the cameras of other users' devices. The present invention therefore provides a method of interaction using augmented reality that allows multiple users to simultaneously manipulate virtual objects.
Technical solution
In one embodiment, the present invention is implemented as a method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects. A triggering physical object in the form of a device carrying optical information is positioned by the user into the field of view of a camera of a device that includes cameras, which is used to view its surface area. The device that includes cameras transmits the video stream from the camera used to view the device carrying optical information to the computing device running augmented reality software, which includes a software module for recognizing objects. Optical characteristics of the surface area of the device carrying optical information that is in the field of view of the camera are compared with optical characteristics stored in data to which the augmented reality software has been provided access. Optical characteristics of the surface area of the device carrying optical information are recognized by the software module for recognizing objects of the augmented reality software running on the computing device. Based on the recognition of the triggering physical object in the form of a device carrying optical information, the augmented reality software can determine the spatial relationship between this device and the camera of the device that includes cameras, and use it to calculate and determine the correct placement of virtual objects into the video stream. Virtual objects are placed into the video stream transmitted to the computing device from the device that includes cameras, and the resulting composite video stream is transmitted to the displaying device, which displays it. Physical and virtual objects may or may not be visible in this imaging. The user performs further input signals for controlling functions of the augmented reality software on a controlling device, without any manipulation of the device carrying optical information or other triggering physical objects; as a result, a single device carrying optical information can be used simultaneously by multiple users. Next, the user changes the spatial relationship between the camera and the device carrying optical information in order to change the view of the virtual objects displayed to him on a displaying device, while the parts of virtual objects included in an imaging on one user's displaying device are completely independent of other simultaneous users. The camera of the device that includes cameras is further positioned into close proximity to the device carrying optical information, so that the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area is less than 0.01. With such a spatial relationship between the camera and the device carrying optical information in effect, as a result of the way in which access to data is provided to the augmented reality software and of the specific spatial configuration of the device carrying optical information, the augmented reality software is able to recognize at least one part of the device carrying optical information.
Based on the recognition of the optical characteristics of at least one part of the device carrying optical information, with concurrent placement of the camera of the device that includes cameras in such a spatial relation to this device that the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area is less than 0.01, high level of detail imaging of virtual objects or imaging of small parts of large-scale virtual objects can be displayed.
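The recognition-and-overlay step described above can be sketched as follows, as a minimal single-frame example assuming OpenCV ORB features, brute-force matching and a 2D homography overlay; the embodiment itself does not prescribe any particular feature library or rendering backend, and all names here are illustrative:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize_and_overlay(frame, ref_kp, ref_des, overlay_img):
    """Recognize the marker surface in one camera frame and composite the
    virtual content over it; returns the frame unchanged if not recognized.

    ref_kp, ref_des: keypoints/descriptors precomputed from the marker image;
    overlay_img: virtual content rendered at the marker image's resolution.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if des is None:
        return frame
    matches = matcher.match(ref_des, des)
    if len(matches) < 15:                      # too little optical evidence
        return frame
    src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # The homography encodes the spatial relationship between the marker
    # surface and the camera, and determines where virtual content belongs.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return frame
    warped = cv2.warpPerspective(overlay_img, H, (frame.shape[1], frame.shape[0]))
    mask = cv2.threshold(cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY),
                         1, 255, cv2.THRESH_BINARY)[1]
    background = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
    return cv2.add(background, warped)         # the composite video frame
```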
In another embodiment, the present invention is implemented as a method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects, where the method for providing access to data for the augmented reality software running on a computing device is characterised by division of the total surface area of the device carrying optical information, which is in a specific spatial configuration of a planar shape, into multiple separate segments of optical information. Each segment of optical information represents a quadrilateral shape of a variable size, and the number of divisions carried out is arbitrary so long as at least one division is made. Each segment of optical information resulting from the performed divisions must contain a sufficient amount of optical information to be recognizable by the augmented reality software and the software module for recognizing objects independently of the other segments. The positioning of these divisions may or may not correspond to any divisions present in the composition of optical information on the surface of the device. In the next step, each segment of optical information of the device carrying optical information is analysed by the software for analysis of optical characteristics, based on which a separate data file of optical characteristics is generated for each segment of optical information. Each data file contains the optical characteristics of the corresponding segment of optical information resulting from the performed divisions of the surface area of the device carrying optical information. After the analysis of each segment of optical information by the software for analysis of optical characteristics, and after the generation of a separate data file of optical characteristics for each segment of optical information, these data files are arranged into data files or databases. This arrangement is performed in such a way that the spatial configuration of the segments of optical information in a data file or a database corresponds to the spatial configuration of the segments of optical information on the surface of the device carrying optical information. These data files or databases are used for comparing the optical characteristics of the device carrying optical information with the data and for the recognition of the optical characteristics of this device in the data. As a result of such an arrangement of these data files or databases out of separate data files for each segment of optical information of the device carrying optical information, the augmented reality software retains the ability to recognize at least one segment of the device carrying optical information even with concurrent positioning of a camera of the device that includes cameras in close proximity to the device carrying optical information. With such camera positioning, the ratio of the surface area of the device carrying optical information in the field of view of a camera to its total surface area can be less than 0.01, allowing a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects to be displayed to the user. The created data files or databases with these properties are provided to the augmented reality software.
This provision can be performed using a network or an Internet transmission, and can also be carried out by a transmission to a storage medium that is physically attached to the computing device running the augmented reality software.
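As a sketch of this data-preparation embodiment, the following Python fragment divides a marker image into a grid of quadrilateral segments and generates a separate feature record for each, preserving the grid position so that the database mirrors the spatial arrangement of the segments on the physical surface. OpenCV ORB is assumed as the software for analysis of optical characteristics; the grid geometry, the minimum-feature check and all names are illustrative, not prescribed by the invention:

```python
import cv2

def build_segment_database(marker_img, rows: int, cols: int, min_features: int = 50):
    """Divide the marker image into rows x cols segments and extract an
    independent feature record per segment; each record plays the role of
    the separate data file of optical characteristics described above."""
    orb = cv2.ORB_create(nfeatures=500)
    h, w = marker_img.shape[:2]
    database = []
    for r in range(rows):
        for c in range(cols):
            tile = marker_img[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols]
            gray = cv2.cvtColor(tile, cv2.COLOR_BGR2GRAY)
            kp, des = orb.detectAndCompute(gray, None)
            # Each segment must carry enough optical information to be
            # recognizable on its own, independently of the other segments.
            if des is None or len(kp) < min_features:
                raise ValueError(f"segment ({r},{c}) lacks optical detail")
            database.append({
                "grid_pos": (r, c),               # spatial arrangement kept
                "keypoints": [k.pt for k in kp],  # serializable coordinates
                "descriptors": des,
            })
    return database

# db = build_segment_database(cv2.imread("marker.png"), rows=4, cols=6)
```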
In another embodiment, the present invention is implemented in such a way that the method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects is characterised in that the device carrying optical information comprises a physical object which serves as a triggering physical object for augmented reality software. This device is in a specific spatial configuration of a planar shape and is composed of multiple segments of optical information. These segments of optical information correspond to the segments of optical information created for the purpose of providing access to data for the augmented reality software, and therefore correspond to separate data files of optical characteristics. These segments of optical information are characterised by containing a sufficient amount of unique optical characteristics to make the recognition of each segment separately possible, even with concurrent positioning of the camera of a device that includes cameras in close proximity to the device carrying optical information. When the camera of a device that includes cameras is positioned this way, the augmented reality software recognizes at least one segment of the device carrying optical information, and the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area can be less than 0.01. This property of the optical segments of the device carrying optical information allows displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects. The device carrying optical information is intended to carry optical information which can at any given moment be viewed by a single camera of a device that includes cameras, or be simultaneously viewed by multiple cameras of one or more devices that include cameras. The size of the device carrying optical information can be arbitrary and depends on the particular application of the method of interaction using augmented reality for displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects, and on the number of users who are meant to simultaneously view this device using devices that include cameras in the particular application. The device carrying optical information can be constructed from a single physical object or from multiple physical objects, which may or may not have mechanically secured joints. This device can further be constructed from any material of a group of materials consisting of paper, cardboard, carton, plastic, rubber, metal, glass, wood, and cork. The device carrying optical information can also be displayed using a displaying or a projecting device, where the spatial configuration of the surface on which the device is displayed corresponds to the spatial configuration of the device carrying optical information, i.e. is in a spatial configuration of a planar shape. However the device carrying optical information is constructed or displayed, this spatial configuration may be a configuration of a planar shape, or may only appear to be a configuration of a planar shape when viewed by the naked eye. The device carrying optical information in this spatial configuration can, as a separate unit, be included as a component of a physical object which itself is in another spatial configuration.
The surface area of the device carrying optical information that is viewed by devices that include cameras contains optical information created on the basis of designs of the composition of optical information, and is created so that it contains a sufficient amount of optical information in each segment of the device, thereby maintaining the capability of the augmented reality software to recognize each segment individually. The number of such designs is not limited in any way.
In yet another embodiment, an augmented reality system for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects is presented. This system comprises a device that includes cameras, a computing device running augmented reality software, a displaying device, a controlling device and the device carrying optical information. The device that includes cameras is intended to receive a video stream from a camera and transmit it to the computing device, and is configured to allow the user of the device to modify the spatial relationship between the camera and the device carrying optical information. The user can modify the spatial relationship between these devices in order to manipulate the imaging of virtual objects displayed on the displaying device. The device that includes cameras may be a device selected from a group of devices, or be a part of a device from a group of devices, comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, a displaying device that includes cameras intended to be worn on a head, gaming console, portable gaming console and a camera.
The computing device running augmented reality software is configured so that, using the software module for recognizing objects, it recognizes optical characteristics in the video stream from a camera in order to determine the correct placement of virtual objects, and it is further configured to place these virtual objects into the video stream. The augmented reality software running on the computing device is configured so that it has access to data files of optical characteristics, which it can use to compare and recognize optical characteristics present in the video stream from a camera. By recognizing optical characteristics, the augmented reality software running on the computing device recognizes individual segments of the device carrying optical information. Furthermore, the computing device running the augmented reality software is intended to transmit the video stream from a camera, combined with the placed virtual objects, to the displaying device, and is intended to receive and process any input signals from the controlling device for the purpose of manipulating virtual objects. The computing device may be a device selected from a group of devices, or be a part of a device from a group of devices, comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, gaming console and a portable gaming console.
The displaying device is intended to display the composite video stream with placed virtual objects, which is transmitted to it from a computing device running augmented reality software. The displaying device may be a device selected from a group of devices or be a part of a device from a group of devices comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, a displaying device intended to be worn on a head, gaming console, portable gaming console, projector and a display.
The controlling device is intended to be used by the user to perform input signals for the augmented reality software, which are transmitted to the computing device running the augmented reality software, thereby granting the user access to the functions of the augmented reality software that manipulate the virtual objects displayed to the user, and to other functions of the augmented reality software. The controlling device may be a device selected from a group of devices, or be a part of a device from a group of devices, comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, gaming console, portable gaming console, control, keyboard, mouse, touchpad and a spatial sensor for tracking hand or finger movement in space. A controlling device can also be a microphone, or a device that includes a microphone, for capturing sound for the purpose of issuing voice commands for voice-operated control of the functions of the augmented reality software.
The device carrying optical information is intended to be viewed by a camera of the device that includes cameras and is in a specific spatial configuration of a planar shape consisting of multiple segments of optical information. These segments of optical information correspond to the segments of optical information created to provide access to data for the augmented reality software, and therefore correspond to separate data files of optical characteristics. These segments contain a sufficient amount of unique optical information to allow the recognition of each segment individually, even with concurrent positioning of a camera of a device that includes cameras in close proximity to the device carrying optical information. With such positioning of a camera of a device that includes cameras, this configuration of the device carrying optical information allows the augmented reality software to recognize at least one segment of the device carrying optical information, and the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area can be less than 0.01. This configuration of the device carrying optical information is necessary for fulfilling the purpose of displaying a high level of detail imaging of virtual objects or an imaging of small parts of large-scale virtual objects.
All devices that comprise this augmented reality system for displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects, except for the device carrying optical information, can be configured in any combination into a single combined device or into several individual or combined devices.
Advantageous effects
According to the mentioned embodiments of the invention, it is possible to provide an augmented reality experience with displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects. Furthermore, the presented method of interaction using augmented reality, and the corresponding system for displaying a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects, provide the capability to view a device carrying optical information of such a size that this device cannot be placed in great amount or completely into the field of view of a camera of a device that includes cameras from a comfortable position or a position natural for a certain activity. During the viewing of this device using a camera of a device that includes cameras, the ratio of the surface area of the device in the field of view of the camera to the total surface area of the device may become less than 0.01. Viewing, which also means changing the displayed imaging of virtual objects, is performed by manipulating the spatial relationship between the camera and the device carrying optical information, without any manipulation of any triggering physical objects by the users. Functions of the augmented reality software are made accessible using the controlling device, also without any manipulation of triggering physical objects. As a result of the greater size of the triggering physical object and of the inability of users to manipulate physical objects, and therefore to change the views of virtual objects of other simultaneous users, the same triggering physical object can be used by multiple users simultaneously. With a camera of a device that includes cameras positioned in close proximity to the device carrying optical information, so that the ratio of the surface area in the field of view of the camera to the total surface area of the device carrying optical information is less than 0.01, a high level of detail imaging of virtual objects and an imaging of small parts of large-scale virtual objects can be displayed to the user using the present invention and its embodiments. Neither such a large size of virtual objects and high level of displayed detail, nor the ability to control all functions of the augmented reality software concurrently with displaying such a degree of detail, is achievable using the methods, systems and devices of current augmented reality applications.
Brief Description of Drawings
The present invention will become more apparent from the following figures in the drawings and from the best mode for carrying out the invention. Components of the invention in the drawings are not shown to scale, and the emphasis is primarily placed on illustrating the principles of the invention. Individual components are marked in the drawings using corresponding reference numerals.
FIG. 1 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of positioning the device carrying optical information into the field of view of a camera of a device that includes cameras;
FIG. 2 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of placing virtual objects into the video stream transmitted to a displaying device;
FIG. 3 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of performing input signals on a controlling device for controlling functions of augmented reality software;
FIG. 4 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of performing a change in the spatial relationship between a camera and a device carrying optical information in order to change the displayed imaging of virtual objects on a displaying device;
FIG. 5 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of positioning a camera of a device that includes cameras into close proximity to the device carrying optical information in order to display a high level of detail imaging of virtual objects or an imaging of small parts of large-scale virtual objects;
FIG. 6 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a relationship between the positioning of a camera of a device that includes cameras against the device carrying optical information and the displayed size or level of detail of virtual objects displayed on a displaying device;
FIG. 7 is a table illustrating scalability of a device carrying optical information of an embodiment of a method of interaction using augmented reality, in terms of a relationship between size of this device and the number of users who can simultaneously view this device using cameras of devices that include cameras;
FIG. 8 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of performing a change in the spatial relationship between a camera and a device carrying optical information, in order to change the displayed imaging of the virtual objects on a displaying device by multiple users simultaneously, where the displayed imagings of individual users are independent of each other;
FIG. 9 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a spatial relationship between a device carrying optical information and virtual objects displayed on a displaying device;
FIG. 10 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of a method of providing access to data for the augmented reality software running on a computing device, during which the division of the total surface area of a device carrying optical information is performed;
FIG. 11 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a device carrying optical information with marked segments of optical information, where each segment of optical information corresponds to a separate data file of optical characteristics;
FIG. 12 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating a configuration of the device carrying optical information consisting of a single physical object without mechanically secured joints with indication of uniqueness of individual segments of optical information;
FIG. 13 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an operation of positioning a camera of a device that includes cameras into close proximity to a device carrying optical information, where during such positioning of a camera of a device that includes cameras an augmented reality software recognizes at least one segment of a device carrying optical information;
FIG. 14 is a schematic view of an embodiment of a method of interaction using augmented reality illustrating an example configuration of devices of an augmented reality system for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.
Best Mode for Carrying Out the Invention
FIG. 1 to FIG. 14 describe an embodiment of this method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects. This embodiment is at the same time an example of carrying out the invention. This method of interaction is defined by several steps, of which FIG. 1 illustrates an operation of positioning the device carrying optical information 500 into the field of view 120 of a camera 110 of a device that includes cameras 100, performed by the user 600. The result of this operation is that an imaging of a physical object 501, which is a part of the device carrying optical information 500, gets into the video stream from the camera 121. The device carrying optical information 500 is, in this embodiment and for the purposes of this example, constructed out of paper.
As illustrated in FIG. 14, the device that includes cameras 100 is configured in this embodiment so that, together with a computing device 200, a displaying device 300 and a controlling device 400, it forms a single combined device. For the purposes of this example, in this embodiment this device is a tablet.
FIG. 2 illustrates the next step of this interaction method, which is the placement of virtual objects 310 into the video stream from a camera, creating a composite video stream. The device carrying optical information 500, which is positioned by a user 600 into the field of view of a camera, is then present in the video stream from the camera, and the optical information 510 on the surface of the device carrying optical information 500 is simultaneously displayed in this composite video stream where it is not overlapped by an imaging of virtual objects 310. This composite video stream, with an imaging of virtual objects 310 and with an imaging of the optical information 520, is displayed on a displaying device 300.
FIG. 3 illustrates the next step of this interaction method, where a user 600 performs input signals 401 on a controlling device 400, which control functions of the augmented reality software that manipulate the displayed virtual objects 310. The device carrying optical information 500 is in the field of view of a camera during this operation. This step is characterised mainly in that the user 600 does not perform any changes on the device carrying optical information 500 in order to manipulate the displayed virtual objects 310.
FIG. 4 shows how a user 600 can modify an imaging of virtual objects displayed on a displaying device 300 when viewing a device carrying optical information 500 using a device that includes cameras. At the beginning of this operation, the user 600 has positioned the camera of a device that includes cameras into position 111. In this position 111, the optical information 510 of the device carrying optical information 500 displayed on the displaying device 300 is overlapped by an imaging of virtual objects 311. During this operation, the user 600 changes the positioning of the camera of the device that includes cameras into position 112. In this camera position 112, a new imaging of virtual objects 312 is displayed to the user on the displaying device 300. In this way a user can modify the displayed imaging of virtual objects.
FIG. 5 illustrates the next step of this interaction, during which the device carrying optical information 500 is positioned into the field of view of a camera 120 of a device that includes cameras 100. Here, however, the camera 110 is positioned in close proximity to the device carrying optical information 500, so that the ratio of the area of this device in the field of view of the camera to the total surface area of the device is less than 0.01. At this moment, the video stream from the camera contains only an imaging of a physical object 501, which is a part of the device carrying optical information 500.
FIG. 6 illustrates the relationship between the positioning of a camera 110 of a device that includes cameras 100 and the properties of the displayed views of virtual objects in a composite video stream 122 displayed on a displaying device. When the camera 110 of the device that includes cameras 100 is placed into position 113, in which a great amount of the area or the whole surface area of the device carrying optical information 500 is in the field of view of the camera 120, an imaging of virtual objects 313 is displayed in which only large parts of large-scale virtual objects and a low level of detail of virtual objects are visible. When the camera 110 of the device that includes cameras 100 is placed into position 114, in which the camera 110 is in close proximity to the device carrying optical information 500, such an area of this device is in the field of view of the camera 120 that the ratio of the area of this device in the field of view of the camera to the total surface area of the device is less than 0.01. The capability of the augmented reality software to recognize the device carrying optical information 500 is then maintained, and an imaging 314 is displayed in the composite video stream 122 in which small parts of large-scale virtual objects and a high level of detail of virtual objects are displayed.
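Purely as an illustration of the relationship FIG. 6 depicts, the choice of displayed detail can be driven by the visibility ratio itself; the thresholds and mesh names below are assumptions, not values fixed by the embodiment:

```python
def select_level_of_detail(visible_ratio: float) -> str:
    """Map the share of the marker currently in the field of view to a
    level of detail: a close camera sees a small share, so it gets the
    high-detail imaging of a small part of the large-scale virtual object."""
    if visible_ratio < 0.01:        # close-up viewing, as in position 114
        return "high_detail_mesh"
    if visible_ratio < 0.25:        # intermediate distance (assumed band)
        return "medium_detail_mesh"
    return "low_detail_mesh"        # whole marker in view, as in position 113
```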
FIG. 7 shows a table in which a single device carrying optical information 500 is configured into different sizes, which are multiples of a module of the device carrying optical information 503. The device is scalable in this manner into various sizes, in order to make it possible for multiple users 600 to view the device simultaneously using cameras of devices that include cameras 100.
In the case that a device carrying optical information 500 is viewed at a given time by several users, as illustrated in FIG. 8, the views of virtual objects displayed on the displaying devices 300 of the individual users are independent of each other. This is achieved in such a way that a change of the imaging of virtual objects 313 is carried out by a user 601 by changing the camera position 113 into a new camera position 114, by which he also changes the spatial relationship between the camera and the device carrying optical information 500 without manipulating this device, and acquires a new imaging of virtual objects 314. Because the user 601 does not manipulate the device carrying optical information 500 in order to change the imaging of virtual objects, another user 602 can modify the spatial relationship between his camera and the device carrying optical information 500 independently of the user 601. The user 602 modifies the imaging of virtual objects 315 by changing the camera position 115 into a new camera position 116, resulting in a change of its spatial relationship to the device carrying optical information 500, thereby acquiring a new imaging of the virtual objects 316.
Virtual objects are displayed on the basis of determining the spatial relationship between a device carrying optical information 500 and a camera. FIG. 9 illustrates the spatial relationship between a device carrying optical information 500 and virtual objects 320 displayed on a displaying device. The origin of the coordinate system for virtual objects 530, and its orientation, determine the placement of the virtual objects, and it is positioned and oriented so that it coincides with the position and orientation of the coordinate system of the device carrying optical information 540 and its origin. This coordinate system of the device carrying optical information 540 is evaluated by the augmented reality software on the basis of the spatial relationship between the camera and the device carrying optical information.
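A minimal numpy sketch of this frame alignment follows: because the virtual objects' coordinate system 530 is made to coincide with the marker's coordinate system 540, placing the objects reduces to applying the marker-to-camera pose to their vertices. The pose inputs here are assumed to come from an external estimator such as cv2.solvePnP; all names are illustrative:

```python
import numpy as np

def place_virtual_objects(vertices_obj: np.ndarray,
                          R_marker: np.ndarray,
                          t_marker: np.ndarray) -> np.ndarray:
    """vertices_obj: (N, 3) vertices expressed in the virtual objects'
    coordinate system, which by construction equals the marker's system.
    R_marker (3, 3) and t_marker (3,) give the marker pose in camera
    coordinates; the result is the vertices in camera coordinates."""
    return vertices_obj @ R_marker.T + t_marker
```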
FIG. 10 illustrates an operation of the method of providing access to data for the augmented reality software running on a computing device, of this method of interaction. Data is provided in such a way that a division into separate segments of optical information 550 is performed on the entire surface area of the device carrying optical information 500 containing optical information 510. The number of divisions is arbitrary, as long as at least one division is made. The surface area of this device is divided into several separate segments, where each individual segment of this device 560 contains individual optical information 570.
FIG. 11 further illustrates a device carrying optical information 500 divided into individual segments of optical information 580, each of which corresponds to a separate data file of optical characteristics 590. These data files are generated by analysing each segment of optical information separately, and they facilitate the separate recognition of each segment of the device carrying optical information 500. FIG. 12 clearly shows that a device carrying optical information 500, which for the purposes of this example in this embodiment is constructed out of a single physical object without mechanically secured joints 502, is composed of such separate segments of optical information, where each segment contains unique optical information 581.
FIG. 13 illustrates how, during an operation of this method of interaction in which a user 600 positions a camera 110 of a device that includes cameras 100 in close proximity to a device carrying optical information 500, so that the ratio of the surface area of this device in the field of view of the camera 120 to the total surface area of this device is less than 0.01, the augmented reality software is capable of recognizing at least one segment of optical information. In this particular case, the augmented reality software recognizes an individual segment of optical information 582.
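The close-up recognition that FIG. 13 illustrates can be sketched as a search over the segment records from the database built earlier: the current frame is matched against every segment and the best-supported one is accepted. This is a hypothetical fragment reusing the illustrative names introduced above:

```python
import cv2

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize_segment(frame_descriptors, database, min_matches: int = 15):
    """Return the single best-matching segment record, or None if no
    segment gathers enough matches to count as recognized."""
    best_record, best_matches = None, []
    for record in database:
        matches = matcher.match(record["descriptors"], frame_descriptors)
        if len(matches) > len(best_matches):
            best_record, best_matches = record, matches
    if len(best_matches) < min_matches:
        return None              # no segment recognized in this frame
    return best_record           # the one recognized segment, cf. 582 in FIG. 13
```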
FIG. 14 illustrates a configuration of the individual devices of the augmented reality system corresponding to this method of interaction using augmented reality for displaying high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects. For the purposes of this example, in this embodiment, a device that includes cameras 100, a computing device 200, a displaying device 300 and a controlling device 400 are configured so that they form a single combined device, a tablet, which the user 600 holds in his hands, independently determining its position in relation to the device carrying optical information 500.
Industrial Applicability
The present invention can be applied in several industrial fields, such as education, the entertainment business, cartography or the field of design, because it provides a method of interaction using augmented reality that allows multiple simultaneous users to view a single triggering physical object in the form of a device carrying optical information. It further allows positioning of a camera in close proximity to a device carrying optical information, where the ratio of the surface area in the field of view of a camera to the total surface area of the device carrying optical information can be less than 0.01, while at the same time maintaining the ability of the augmented reality software to recognize the device carrying optical information, thereby allowing the display of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects.

Claims

1. A method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects, comprising the steps of:
- Positioning of a device carrying optical information into the field of view of a camera of a device that includes cameras;
- Recognition of optical characteristics of the device carrying optical information by the augmented reality software running on a computing device, based on a comparison of optical characteristics of the device carrying optical information with data, and recognition of optical characteristics of the device carrying optical information in the data, to which the augmented reality software has been provided access;
- Placement of virtual objects into a video stream transmitted onto a displaying device;
- Performing of input signals on a controlling device to control functions of the augmented reality software;
- Performing of changes in a spatial relationship between the camera and the device carrying optical information for changing the displayed imaging of virtual objects on a displaying device;
- Positioning of the camera of the device that includes cameras in close proximity to the device carrying optical information for displaying high level of detail imaging of virtual objects or imaging of small parts of large-scale virtual objects, where during such positioning of the camera of the device that includes cameras, at least one segment of the device carrying optical information is recognized by the augmented reality software, and where during such positioning of the camera of the device that includes cameras the ratio of the surface area of the device carrying optical information in the field of view of a camera to its total area can be less than 0.01.
2. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 1, where the method of providing access to data for an augmented reality software running on a computing device comprises the operation steps of:
- The division of the entire surface area of the device carrying optical information of a specific spatial configuration of a planar shape into several separate segments of optical information, where each segment of optical information represents a quadrilateral segment of arbitrary size and the number of divisions is arbitrary, provided that at least one division is performed, and where each segment contains a sufficient quantity of optical information to enable each segment to be recognized separately;
- Each segment of optical information of the device is analysed by the software for analysis of optical characteristics, based on which a separate data file of optical characteristics is generated for each segment, where this data file contains the optical characteristics of the respective segment of the device carrying optical information;
- Arranging of the individual data files of individual segments into data files or databases, with which the augmented reality software compares and in which it recognizes at least one segment of a device carrying optical information, even with concurrent positioning of the camera of the device that includes cameras into close proximity of the device carrying optical information for displaying high level of detail imaging of virtual objects or imaging of small parts of large-scale virtual objects, and where during such placement of the camera of the device that includes cameras the ratio of the surface area of the device carrying optical information in the field of view of a camera to its total area can be less than 0.01;
- Providing access to created data files or databases to the augmented reality software.
3. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 2, where the operation step of the division of the entire surface area of the device carrying optical information comprises creating divisions, which may or may not correspond to any divisions present in the composition of optical information on the surface of the device carrying optical information.
4. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 2, where the operation step of providing access to created data files and databases to the augmented reality software is performed using a network or an Internet transmission.
5. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 2, where the operation step of providing access to created data files and databases to the augmented reality software is performed by a transmission to a storage medium, which is physically attached to the computing device running the augmented reality software.
6. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 1, where the device carrying optical information is characterised by being in a specific spatial configuration of a planar shape, consisting of multiple segments of optical information, where each segment of optical information corresponds to a separate data file of optical characteristics, containing a sufficient amount of unique optical information to make recognition of each segment separately possible even with concurrent positioning of the camera of the device that includes cameras into close proximity of the device carrying optical information for displaying high level of detail imaging of virtual objects or imaging of small parts of large-scale virtual objects, where during such positioning of the camera of the device that includes cameras the augmented reality software recognizes at least one segment of the device carrying optical information, and where during such placement of the camera of the device that includes cameras the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area can be less than 0.01.
7. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the device carrying optical information is at one moment viewed by a single camera of a device that includes cameras.
8. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the device carrying optical information is at one moment viewed simultaneously by multiple cameras of a single or multiple devices that include cameras.
9. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the device carrying optical information is constructed of a single physical object without mechanically secured joints.
10. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the device carrying optical information is constructed of multiple physical objects with mechanically secured joints.
11. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the device carrying optical information is constructed of multiple physical objects without mechanically secured joints.
12. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the main material of the device carrying optical information is selected from a group of materials comprising paper, cardboard, carton, plastic, rubber, metal, glass, wood, and cork.
13. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the device carrying optical information is displayed by a displaying or a projecting device, where the spatial configuration of the surface on which the device is displayed corresponds with the spatial configuration of the device carrying optical information.
14. The method of interaction using augmented reality for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 6, where the device carrying optical information is of a specific spatial configuration, which appears as a configuration of a planar shape when it is viewed by the naked eye.
15. An augmented reality system for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 1, comprising:
- A device that includes cameras receiving a video stream from a camera;
- A computing device running augmented reality software, recognizing optical characteristics in the video stream from the camera and placing virtual objects into this video stream, where the augmented reality software has access to data files of optical characteristics for comparison and recognition of optical characteristics in the video stream from the camera;
- A displaying device displaying the video stream with placed virtual objects from the computing device running augmented reality software;
- A controlling device controlling functions of the augmented reality software running on the computing device;
- A device carrying optical information, which is in a specific spatial configuration of a planar shape, consisting of multiple segments of optical information, where each segment of optical information corresponds to a separate data file of optical characteristics, containing a sufficient amount of unique optical information to make recognition of each segment separately possible even with concurrent positioning of the camera of the device that includes cameras into close proximity of the device carrying optical information for displaying high level of detail imaging of virtual objects or imaging of small parts of large-scale virtual objects, where during such positioning of the camera of the device that includes cameras the augmented reality software recognizes at least one segment of the device carrying optical information, and where during such placement of the camera of the device that includes cameras the ratio of the surface area of the device carrying optical information in the field of view of the camera to its total surface area can be less than 0.01.
16. The augmented reality system for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 15, where the device that includes cameras is a device selected from a group of devices or is a part of a device from a group of devices comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, a displaying device that includes cameras intended to be worn on a head, gaming console, portable gaming console and a camera.
17. The augmented reality system for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 15, where the computing device is a device selected from a group of devices or is a part of a device from a group of devices comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, gaming console, portable gaming console and a camera.
18. The augmented reality system for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 15, where the displaying device is a device selected from a group of devices or is a part of a device from a group of devices comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, a displaying device intended to be worn on a head, gaming console, portable gaming console, projector and a display.
19. The augmented reality system for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 15, where the controlling device is a device selected from a group of devices or is a part of a device from a group of devices comprising a tablet, mobile phone, computer, personal data assistant (PDA), portable music or video player, gaming console, portable gaming console, control, keyboard, mouse, touchpad and a spatial sensor for tracking hand or finger movement in space.
20. The augmented reality system for displaying of high level of detail imaging of virtual objects and imaging of small parts of large-scale virtual objects according to Claim 15, where the controlling device is a microphone or a device that includes a microphone for capturing sound for voice control.
PCT/SK2013/050009 2012-10-31 2013-10-30 Method of interaction using augmented reality WO2014070120A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SKPP50049-2012 2012-10-31
SK50049-2012A SK500492012A3 (en) 2012-10-31 2012-10-31 The mode of interaction using augmented reality

Publications (2)

Publication Number Publication Date
WO2014070120A2 (en) 2014-05-08
WO2014070120A3 WO2014070120A3 (en) 2014-08-21

Family ID=49726851

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SK2013/050009 WO2014070120A2 (en) 2012-10-31 2013-10-30 Method of interaction using augmented reality

Country Status (2)

Country Link
SK (1) SK500492012A3 (en)
WO (1) WO2014070120A2 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060038833A1 (en) * 2004-08-19 2006-02-23 Mallinson Dominic S Portable augmented reality device and method
US20090221368A1 (en) * 2007-11-28 2009-09-03 Ailive Inc., Method and system for creating a shared game space for a networked game
EP2267595A2 (en) * 2008-02-12 2010-12-29 Gwangju Institute of Science and Technology Tabletop, mobile augmented reality system for personalization and cooperation, and interaction method using augmented reality

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016140643A1 (en) * 2015-03-02 2016-09-09 Hewlett-Packard Development Company, L.P. Projecting a virtual display
US20170357312A1 (en) * 2015-03-02 2017-12-14 Hewlett- Packard Development Company, Lp. Facilitating scanning of protected resources
WO2020115084A1 (en) 2018-12-03 2020-06-11 Smartmunk Gmbh Object-based method and system for collaborative problem solving

Also Published As

Publication number Publication date
WO2014070120A3 (en) 2014-08-21
SK500492012A3 (en) 2014-05-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13802463; Country of ref document: EP; Kind code of ref document: A2)
122 Ep: pct application non-entry in european phase (Ref document number: 13802463; Country of ref document: EP; Kind code of ref document: A2)