CN103543827A - Immersive outdoor activity interactive platform implement method based on single camera - Google Patents


Info

Publication number
CN103543827A
CN103543827A · Application CN201310479754.5A
Authority
CN
China
Prior art keywords
virtual
large screen
real
camera
outdoor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310479754.5A
Other languages
Chinese (zh)
Other versions
CN103543827B (en)
Inventor
李静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou miaomi Intelligent Technology Co.,Ltd.
Original Assignee
NANJING RONGTU CHUANGSI INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NANJING RONGTU CHUANGSI INFORMATION TECHNOLOGY Co Ltd
Priority to CN201310479754.5A
Publication of CN103543827A
Application granted
Publication of CN103543827B
Legal status: Active
Anticipated expiration

Abstract

Disclosed is an implementation method for an immersive outdoor-activity interaction platform based on a single camera. Building on mixed reality technology and combining it with computer graphics, the method provides a human-computer interaction approach for outdoor large screens. Through high-definition video capture and human skeleton reconstruction, the user's skeleton is computed in real time; when a designated body part reaches a trigger region, preset augmented reality information is triggered automatically, and the captured real-world posture images are superimposed with the augmenting two-dimensional pictures or three-dimensional virtual models and displayed to the user on an outdoor high-definition LED large screen. The user thus sees the interaction effect of combined reality and virtuality on the large screen.

Description

Implementation method of an immersive outdoor-activity interaction platform based on a single camera
Technical field
The invention belongs to the field of computer image processing and relates to mixed reality technology; it is a mixed-reality-based human-computer interaction method designed for outdoor large screens.
Background technology
Most current mainstream mixed reality applications are developed for screens smaller than 30 inches, whereas outdoor activities generally require a high-definition LED screen of at least 100 inches as the display. On the other hand, current mainstream body-interaction approaches either rely purely on image-based computation or combine it with infrared depth sensing; the best-known device of the latter kind is Microsoft's Kinect, which requires the user to move within a limited range to obtain reasonably accurate measurements. Because that device was designed for television screens under 60 inches, its optimal working distance is set at 2 to 3 meters from the screen, which is clearly insufficient for outdoor screens of 100 inches or more. The present invention instead computes with a purely image-based skeleton reconstruction method, used together with a high-definition camera, so that the user's activity space can be set at a comfortable viewing distance and the requirements of outdoor large screens are met.
Summary of the invention
Addressing the problems in the prior art, the present invention combines computer graphics and mixed reality technology to provide a human-computer interaction method for outdoor large screens: through high-definition video capture and human skeleton reconstruction, the user's skeleton is computed in real time, and human-computer interaction is realized from the computed result. In the end, the user sees the interaction effect of combined reality and virtuality on the large screen.
The technical solution of the present invention is an implementation method of an immersive outdoor-activity interaction platform based on a single camera. Based on mixed reality technology, a user posture image captured by a single camera in the real world is input to a mixed reality server, where it is superimposed with preset augmentation content and displayed to the user on an outdoor large screen; the augmentation content comprises two-dimensional pictures and three-dimensional virtual models, and the user interacts with the preset augmentation content through different postures. The implementation method of the interaction platform specifically comprises the following steps:
First, registration training is performed, comprising two stages:
1) Preparation stage: registering the outdoor real world with the virtual world of the mixed reality server:
11) Draw a perpendicular line on the ground perpendicular to the outdoor large screen, with its foot at the midpoint of the screen's bottom edge;
12) Determine a user interaction anchor point on the perpendicular line; the distance between the anchor point and the outdoor large screen is a reference length L1, which is greater than the height of the screen;
13) Place the real camera above the user interaction anchor point at a height greater than 1.5 meters, with the lens kept level and facing the outdoor large screen so that the camera captures the entire screen. Set up an interaction scene in the virtual world corresponding to the real world, comprising a virtual outdoor large screen of the same size and position as the real one, a virtual user interaction anchor point, and a virtual camera; the distance between the virtual camera and the virtual screen is a reference length L1', with L1' equal to L1, and the virtual camera captures the entire virtual screen. Superimpose the virtual camera's picture on the real camera's picture and, using the real-world outdoor large screen as the reference object, register the size of the real screen against the size of the virtual screen at least once;
14) In the real world, move the real camera from the user interaction anchor point to its working position beside or above the outdoor large screen, and erect a real ruler L2 at the anchor point, perpendicular to the ground and parallel to the screen, with a height of at least 1 meter;
15) In the virtual world, move the virtual camera to the position corresponding to the working position of step 14), and erect a virtual ruler L2' at the virtual anchor point, perpendicular to the ground and corresponding to the real world; the height of L2' is at least 1 meter and equal to L2. Using the real ruler L2 as the reference object, register the real ruler L2 against the virtual ruler L2';
16) Save the position and angle information of the virtual world scene at this point;
2) Supplementary registration stage:
21) In the real world, place the real camera at its working position beside or above the outdoor large screen, select the user's interaction position according to the camera's image capture range, and affix a trigger icon at the selected position; the area around the trigger icon serves as the user's activity range, and actions beyond this range are not recognized;
22) In the virtual world, set a virtual trigger icon at the position corresponding to the real world and register it using the real trigger icon as the reference object; merge the scene position and angle adjustments obtained from this registration with the position and angle information of the virtual world scene obtained after the preparation-stage registration in 1), yielding the optimal scene position and angle information;
After registration training is complete, the real-time tracking stage begins;
3) Real-time tracking stage:
31) After the user stands on the trigger icon, the real camera detects that the icon is occluded, and the system automatically or manually triggers the relevant information in the virtual world, which is superimposed on the image captured from the real world to produce the mixed reality effect;
32) The user's three-dimensional motion skeleton is reconstructed from the images captured by the image acquisition device; once the user's body-part position information is obtained, the preset augmentation information interacts with that position information according to the configured requirements.
The registration of the real outdoor large screen's size against the virtual screen's size in step 13) is performed as follows: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed reality image, which is displayed on the outdoor large screen. Based on the displayed image, the scale of the virtual world is adjusted by zooming until the virtual screen coincides with the real screen, and the virtual world scale is then saved and fixed.
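The zoom adjustment in this size registration can be pictured as a simple feedback loop. The sketch below is only an illustration under assumed inputs: `real_px` and `virt_px` stand for the apparent pixel widths of the real and virtual screens in the composited image (names and the convergence loop are hypothetical, not part of the patent).

```python
def register_screen_scale(real_px, virt_px, scale=1.0, tol=0.01, max_iter=10):
    # Iteratively zoom the virtual world until the virtual screen's apparent
    # width matches the real screen's within tolerance; the patent only
    # requires the registration to be performed "at least once".
    for _ in range(max_iter):
        err = real_px / (virt_px * scale)
        if abs(err - 1.0) < tol:
            break
        scale *= err
    return scale

# Once converged, the resulting scale is saved and fixed for the virtual world.
scale = register_screen_scale(real_px=1280.0, virt_px=1024.0)
```

In practice the operator performs this adjustment visually on the mixed image shown on the outdoor screen; the loop above merely formalizes the idea.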
The registration of the real ruler L2 against the virtual ruler L2' in step 15) is performed as follows: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed reality image, which is displayed on the outdoor large screen. The position and angle of the virtual camera are fine-tuned until the virtual ruler L2' coincides with the real ruler L2 and the center line of the virtual screen coincides with that of the real screen. The position and angle of the virtual camera are then recorded and set as its final working position and angle.
The outdoor large screen is an LED screen of no less than 100 inches, with a resolution of at least 720P.
The real camera is a high-definition camera with a resolution of at least 720P.
In the real-time tracking stage, trigger areas are delimited in the plane image captured by the real camera; when the position data of the user's three-dimensional motion skeleton falls within a trigger area, the corresponding preset augmented reality content is triggered and displayed to the user.
In step 32), the existing algorithm "human three-dimensional motion skeleton reconstruction from motion image sequences" is adopted: the user's skeleton is calibrated in the first frame, after which the human motion skeleton in each subsequent image is established in turn using three-dimensional human body model knowledge and motion continuity.
The present invention, combining computer graphics and mixed reality technology, provides a human-computer interaction method for outdoor large screens. It uses a novel two-stage progressive registration method, providing an efficient registration approach for outdoor immersive human-computer interaction systems. It completes every module from registration to operation with only one high-definition camera; compared with conventional systems using multiple cameras, it achieves the same effect while greatly reducing system complexity and improving operating efficiency. In the registration stage, the invention novelly proposes using a ruler as a positioning aid, which reduces the difficulty of the registration process while greatly improving its efficiency and precision. Through high-definition video capture and human skeleton reconstruction, the user's skeleton is computed in real time, and human-computer interaction is realized from the computed result; in the end, the user sees the interaction effect of combined reality and virtuality on the large screen. Likewise, in the user interaction stage, a novel trigger-area method is adopted to trigger the relevant interactive information; this compensates for the limited accuracy of skeleton capture outdoors, lowers the difficulty and raises the speed of capture, and still satisfies the interacting user's experience.
Brief description of the drawings
Fig. 1 is the workflow diagram of the present invention.
Fig. 2 is the registration flow diagram of the present invention, comprising the preparation stage and the supplementary registration stage.
Fig. 3 is the device schematic diagram of the present invention.
Fig. 4 is an implementation schematic of registration step 1) of the present invention.
Fig. 5 is an implementation schematic of step 15) of the registration process of the present invention.
Fig. 6 is an implementation schematic of the supplementary registration of the present invention.
Fig. 7 is a schematic of the system of the present invention in actual operation.
Fig. 8 is a schematic of the interactive triggering method in step 32) of the real-time tracking stage of the present invention.
Detailed description of the embodiments
The present invention combines state-of-the-art skeleton reconstruction with mixed reality technology: through high-definition video capture, the user's skeleton is computed in real time, and human-computer interaction is realized from the computed result. In the end, the user sees the interaction effect of combined reality and virtuality on a high-definition outdoor LED large screen, where high definition means a resolution of at least 720P.
The present invention provides a real camera 101, a mixed reality application server 102 and an outdoor large screen 103. The real camera 101 is a high-definition image acquisition device that captures the user's whole-body image and feeds it to the mixed reality application server 102, where the skeleton of the user in the captured image is reconstructed in three dimensions and the positions of key points such as the user's hands and feet are determined. When these key points move into predefined trigger ranges, the relevant augmentation information is triggered automatically or manually and presented on the outdoor large screen 103, a high-definition LED screen. The augmentation information is mixed with the originally captured image to realize the mixed reality effect, and the mixed image is presented on the outdoor high-definition LED large screen, as in Fig. 6; the image acquisition device is a video camera or a still camera. As in Figs. 3, 4, 5, 6 and 7, the real camera 101 captures images in real time and transmits the data to the mixed reality application server 102, where the triggered augmentation information is combined with the captured image of the user 504; the mixed image is presented on the outdoor large screen 103, finally realizing the effect of the user 504 interacting with the virtual augmentation information.
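The mixing step on the server can be viewed as a per-pixel blend of the rendered augmentation layer over the camera frame. The sketch below is a hypothetical simplification (function names, the list-of-tuples frame representation, and the alpha-mask convention are illustrative assumptions, not the patent's implementation):

```python
def composite_pixel(cam_rgb, virt_rgb, alpha):
    # Blend one pixel: alpha is 1.0 where virtual content exists, 0.0 elsewhere,
    # so augmentation replaces the camera feed only where it was rendered.
    return tuple(alpha * v + (1.0 - alpha) * c
                 for c, v in zip(cam_rgb, virt_rgb))

def composite_frame(camera, virtual, mask):
    # Apply the blend to every pixel of the frame before sending the mixed
    # image to the outdoor LED screen (resolution >= 720P in the patent).
    return [[composite_pixel(c, v, a)
             for c, v, a in zip(crow, vrow, arow)]
            for crow, vrow, arow in zip(camera, virtual, mask)]
```

A real system would do this on the GPU over full 720P frames; the structure of the computation is the same.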
An implementation of the present invention is illustrated below.
The present invention is based on mixed reality technology: the captured real-world user image is superimposed with the preset three-dimensional virtual models intended for interaction and displayed to the user, as in Fig. 7, comprising the following steps:
1) Preparation stage: registering the outdoor real world with the virtual world, as in Fig. 4:
11) Draw a perpendicular line on the ground perpendicular to the outdoor large screen 103, with its foot at the midpoint of the screen's bottom edge;
12) Determine a user interaction anchor point on the perpendicular line; the distance between the anchor point and the outdoor large screen is a reference length L1 202, which in principle should be greater than the height of the screen;
13) Place the real camera above the user interaction anchor point at a height greater than 1.5 meters, with the lens kept level and facing the outdoor large screen. Set up an interaction scene in the virtual world of the mixed reality server corresponding to the real world, comprising a virtual outdoor large screen of the same size and position as the real one, a virtual user interaction anchor point, and a virtual camera, with the distance between the virtual camera and the virtual screen being a reference length L1' equal to L1; then register the size of the real screen against the size of the virtual screen at least once;
14) In the real world, move the real camera from the user interaction anchor point to its working position beside or above the outdoor large screen, and erect a real ruler L2 at the anchor point, perpendicular to the ground and parallel to the screen, with a height of at least 1 meter;
15) In the virtual world, move the virtual camera to the position corresponding to the working position of step 14), and erect a virtual ruler L2' at the virtual anchor point, perpendicular to the ground and corresponding to the real world, with a height of at least 1 meter and equal to L2; using the real ruler L2 as the reference object, register the real ruler L2 against the virtual ruler L2'.
During the registration of step 13), the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed reality image, which is displayed on the outdoor large screen; the scale of the virtual world is zoomed until the virtual screen coincides with the real screen, and the virtual world scale is then saved and fixed.
The registration of the real ruler L2 against the virtual ruler L2' in step 15) is performed as follows: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed reality image, which is displayed on the outdoor large screen; the position and angle of the virtual camera are fine-tuned until the virtual ruler L2' coincides with the real ruler L2 and the center line of the virtual screen coincides with that of the real screen. The position and angle of the virtual camera are then recorded and set as its final working position and angle.
2) Supplementary registration stage, as in Fig. 6:
21) In the real world, place the real camera at its working position beside or above the outdoor large screen, select the user's interaction position according to the camera's image capture range, and affix a trigger icon 402 at the selected position; the area around the trigger icon 402 serves as the user's activity range, and actions beyond this range are not recognized;
22) In the virtual world, set a virtual trigger icon at the position corresponding to the real world, and fine-tune it until, in the picture mixing the real and virtual worlds, the virtual trigger icon completely coincides with the real one. Merge the scene position and angle adjustments obtained from this fine-tuning with the position and angle information of the virtual world scene obtained after the preparation-stage registration in 1), yielding the optimal scene position and angle information.
3) Real-time tracking stage, as in Fig. 7:
31) After the user 504 stands on the trigger icon, the real camera 101 detects that the icon is occluded, and the system automatically or manually triggers the relevant information in the virtual world; the augmentation information is superimposed on the image captured from the real world and presented on the outdoor large screen 103, producing the mixed reality interaction effect.
32) The user's three-dimensional motion skeleton is reconstructed from the images captured by the image acquisition device. Once the user's body-part position information is obtained, the augmentation information interacts with the user according to the preset requirements.
In step 32), the existing algorithm "human three-dimensional motion skeleton reconstruction from motion image sequences" is adopted: the user's skeleton is calibrated in the first frame, after which the human motion skeleton in each subsequent image is established in turn using three-dimensional human body model knowledge and motion continuity. Based on the three-dimensional motion skeleton reconstructed in real time, the six-degree-of-freedom position and angle information of the corresponding body parts is computed and used to determine the user's body-part positions.
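The motion-continuity idea, frame-to-frame, can be sketched as a nearest-neighbour association under a step-size bound. This is a hypothetical simplification for illustration only (names, the `max_step` parameter, and the keep-previous fallback are assumptions, not the cited reconstruction algorithm):

```python
import math

def track_joints(prev_joints, detections, max_step=0.2):
    # After the first-frame calibration, each joint in the next frame is taken
    # to be the nearest candidate detection within max_step of the joint's
    # previous 3-D position; if none is close enough, the joint is assumed
    # to have stayed put (exploiting the continuity of human motion).
    tracked = []
    for p in prev_joints:
        near = [d for d in detections if math.dist(d, p) <= max_step]
        tracked.append(min(near, key=lambda d: math.dist(d, p)) if near else p)
    return tracked
```

The full algorithm additionally constrains candidates with the three-dimensional human body model (limb lengths, joint hierarchy); the sketch shows only the temporal-continuity half.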
Trigger areas are delimited in the captured plane image, as in Fig. 8. When the user's corresponding skeleton position data falls within a trigger area, as at 601 and 602, the corresponding preset augmented reality content is triggered, such as a virtual exploding-fireworks effect.
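The trigger-area test itself is a simple containment check in image coordinates. The sketch below is illustrative (the rectangle representation and function names are assumptions; the patent does not specify a region shape):

```python
def in_trigger_region(joint_xy, region):
    # region is (x0, y0, x1, y1) in image coordinates; joint_xy is the
    # projected position of a tracked joint (e.g. a hand) in the same frame.
    x, y = joint_xy
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def fire_triggers(joints, regions):
    # Return the indices of every region touched by any joint; the server
    # would map each index to preset AR content (e.g. the fireworks effect).
    return [i for i, r in enumerate(regions)
            if any(in_trigger_region(j, r) for j in joints)]
```

Because the check runs in the 2-D image plane, it tolerates the limited depth accuracy of outdoor skeleton capture, which is exactly the robustness the patent claims for this triggering method.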
As described above, the present invention proposes a positioning method suited to outdoor conditions, specifically targeting the characteristics of outdoor mixed reality interaction; the method is flexible, adapts well, requires no complicated devices and is easy to realize. The user's body is reconstructed as a three-dimensional skeleton model, and the reconstructed data is used to judge whether the body part of interest has reached a trigger area in the image plane; once it has, the corresponding information is triggered automatically, and this approach does not demand high accuracy from the three-dimensional skeleton reconstruction. The invention facilitates the popularization of mixed reality in outdoor interactive applications: the device structure is simple and easy to realize, and users obtain information by themselves through their own interactive experience.

Claims (6)

1. An implementation method of an immersive outdoor-activity interaction platform based on a single camera, characterized in that, based on mixed reality technology, a user posture image captured by a single camera in the real world is input to a mixed reality server, superimposed there with preset augmentation content, and displayed to the user on an outdoor large screen; the augmentation content comprises two-dimensional pictures and three-dimensional virtual models, and the user interacts with the preset augmentation content through different postures; the implementation method of the interaction platform specifically comprises the following steps:
First, registration training is performed, comprising two stages:
1) Preparation stage: registering the outdoor real world with the virtual world of the mixed reality server:
11) drawing a perpendicular line on the ground perpendicular to the outdoor large screen, with its foot at the midpoint of the screen's bottom edge;
12) determining a user interaction anchor point on the perpendicular line, the distance between the anchor point and the outdoor large screen being a reference length L1, which is greater than the height of the screen;
13) placing the real camera above the user interaction anchor point at a height greater than 1.5 meters, with the lens kept level and facing the outdoor large screen so that the camera captures the entire screen; setting up an interaction scene in the virtual world corresponding to the real world, comprising a virtual outdoor large screen of the same size and position as the real one, a virtual user interaction anchor point, and a virtual camera, the distance between the virtual camera and the virtual screen being a reference length L1' equal to L1, the virtual camera capturing the entire virtual screen; superimposing the virtual camera's picture on the real camera's picture and, using the real-world outdoor large screen as the reference object, registering the size of the real screen against the size of the virtual screen at least once;
14) in the real world, moving the real camera from the user interaction anchor point to its working position beside or above the outdoor large screen, and erecting a real ruler L2 at the anchor point, perpendicular to the ground and parallel to the screen, with a height of at least 1 meter;
15) in the virtual world, moving the virtual camera to the position corresponding to the working position of step 14), and erecting a virtual ruler L2' at the virtual anchor point, perpendicular to the ground and corresponding to the real world, with a height of at least 1 meter and equal to L2; using the real ruler L2 as the reference object, registering the real ruler L2 against the virtual ruler L2';
16) saving the position and angle information of the virtual world scene at this point;
2) Supplementary registration stage:
21) in the real world, placing the real camera at its working position beside or above the outdoor large screen, selecting the user's interaction position according to the camera's image capture range, and affixing a trigger icon at the selected position, the area around the trigger icon serving as the user's activity range, actions beyond this range not being recognized;
22) in the virtual world, setting a virtual trigger icon at the position corresponding to the real world and registering it using the real trigger icon as the reference object; merging the scene position and angle adjustments obtained from this registration with the position and angle information of the virtual world scene obtained after the preparation-stage registration in 1), yielding the optimal scene position and angle information;
After registration training is complete, entering the real-time tracking stage;
3) Real-time tracking stage:
31) after the user stands on the trigger icon, detecting through the real camera that the icon is occluded, the system automatically or manually triggering the relevant information in the virtual world, which is superimposed on the image captured from the real world to produce the mixed reality effect;
32) reconstructing the user's three-dimensional motion skeleton from the images captured by the image acquisition device, and once the user's body-part position information is obtained, the preset augmentation information interacting with that position information according to the configured requirements.
2. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to claim 1, characterized in that the registration of the real outdoor large screen's size against the virtual screen's size in step 13) is: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed reality image, which is displayed on the outdoor large screen; based on the displayed image, the scale of the virtual world is adjusted by zooming until the virtual screen coincides with the real screen, and the virtual world scale is then saved and fixed.
3. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to claim 1, characterized in that the registration of the real ruler L2 against the virtual ruler L2' in step 15) is: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed reality image, which is displayed on the outdoor large screen; the position and angle of the virtual camera are fine-tuned until the virtual ruler L2' coincides with the real ruler L2 and the center line of the virtual screen coincides with that of the real screen; the position and angle of the virtual camera are then recorded and set as its final working position and angle.
4. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to any one of claims 1-3, characterized in that the outdoor large screen is an LED screen of no less than 100 inches, with a resolution of at least 720P.
5. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to any one of claims 1-3, characterized in that the real camera is a high-definition camera with a resolution of at least 720P.
6. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to any one of claims 1-3, characterized in that, in the real-time tracking stage, trigger areas are delimited in the plane image captured by the real camera; when the position data of the user's three-dimensional motion skeleton falls within a trigger area, the corresponding preset augmented reality content is triggered and displayed to the user.
CN201310479754.5A 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera Active CN103543827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310479754.5A CN103543827B (en) 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310479754.5A CN103543827B (en) 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera

Publications (2)

Publication Number Publication Date
CN103543827A true CN103543827A (en) 2014-01-29
CN103543827B CN103543827B (en) 2016-04-06

Family

ID=49967364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310479754.5A Active CN103543827B (en) 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera

Country Status (1)

Country Link
CN (1) CN103543827B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104656893A (en) * 2015-02-06 2015-05-27 西北工业大学 Remote interaction control system and method for physical information space
CN106293083A (en) * 2016-08-07 2017-01-04 苏州苍龙电子科技有限公司 Large-screen interactive system and interaction method thereof
CN106816077A (en) * 2015-12-08 2017-06-09 张涛 Interactive sandbox exhibition method based on QR code and augmented reality
CN107168619A (en) * 2017-03-29 2017-09-15 腾讯科技(深圳)有限公司 User-generated content processing method and apparatus
CN107223245A (en) * 2016-12-27 2017-09-29 深圳前海达闼云端智能科技有限公司 Data display processing method and device
CN108255304A (en) * 2018-01-26 2018-07-06 腾讯科技(深圳)有限公司 Video data processing method, device and storage medium based on augmented reality
CN110521186A (en) * 2017-02-09 2019-11-29 索菲斯研究股份有限公司 Method and system for sharing mixed reality experiences using digital, physical, temporal or spatial discovery services
CN111093301A (en) * 2019-12-14 2020-05-01 安琦道尔(上海)环境规划建筑设计咨询有限公司 Light control method and system
CN111223192A (en) * 2020-01-09 2020-06-02 北京华捷艾米科技有限公司 Image processing method and application method, device and equipment thereof
CN112198963A (en) * 2020-10-19 2021-01-08 深圳市太和世纪文化创意有限公司 Immersive tunnel type multimedia interactive display method, equipment and storage medium
WO2022188733A1 (en) * 2021-03-08 2022-09-15 Hangzhou Taro Positioning Technology Co., Ltd. Scenario triggering and interaction based on target positioning and identification

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070038944A1 (en) * 2005-05-03 2007-02-15 Seac02 S.R.I. Augmented reality system with real marker object identification
US20090167787A1 (en) * 2007-12-28 2009-07-02 Microsoft Corporation Augmented reality and filtering
CN101551732A (en) * 2009-03-24 2009-10-07 上海水晶石信息技术有限公司 Augmented reality method with interactive function and system thereof
CN102156808A (en) * 2011-03-30 2011-08-17 北京触角科技有限公司 System and method for improving real-time virtual ornament try-on effect
CN102945564A (en) * 2012-10-16 2013-02-27 上海大学 True 3D modeling system and method based on video see-through augmented reality


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104656893B (en) * 2015-02-06 2017-10-13 西北工业大学 Remote interaction control system and method for physical information space
CN104656893A (en) * 2015-02-06 2015-05-27 西北工业大学 Remote interaction control system and method for physical information space
CN106816077A (en) * 2015-12-08 2017-06-09 张涛 Interactive sandbox exhibition method based on QR code and augmented reality
CN106816077B (en) * 2015-12-08 2019-03-22 张涛 Interactive sandbox exhibition method based on QR code and augmented reality
CN106293083A (en) * 2016-08-07 2017-01-04 苏州苍龙电子科技有限公司 Large-screen interactive system and interaction method thereof
CN107223245A (en) * 2016-12-27 2017-09-29 深圳前海达闼云端智能科技有限公司 Data display processing method and device
WO2018119676A1 (en) * 2016-12-27 2018-07-05 深圳前海达闼云端智能科技有限公司 Display data processing method and apparatus
CN110521186A (en) * 2017-02-09 2019-11-29 索菲斯研究股份有限公司 Method and system for sharing mixed reality experiences using digital, physical, temporal or spatial discovery services
CN107168619A (en) * 2017-03-29 2017-09-15 腾讯科技(深圳)有限公司 User-generated content processing method and apparatus
CN107168619B (en) * 2017-03-29 2023-09-19 腾讯科技(深圳)有限公司 User generated content processing method and device
CN108255304A (en) * 2018-01-26 2018-07-06 腾讯科技(深圳)有限公司 Video data processing method, device and storage medium based on augmented reality
CN108255304B (en) * 2018-01-26 2022-10-04 腾讯科技(深圳)有限公司 Video data processing method and device based on augmented reality and storage medium
CN111093301A (en) * 2019-12-14 2020-05-01 安琦道尔(上海)环境规划建筑设计咨询有限公司 Light control method and system
CN111093301B (en) * 2019-12-14 2022-02-25 安琦道尔(上海)环境规划建筑设计咨询有限公司 Light control method and system
CN111223192A (en) * 2020-01-09 2020-06-02 北京华捷艾米科技有限公司 Image processing method and application method, device and equipment thereof
CN111223192B (en) * 2020-01-09 2023-10-03 北京华捷艾米科技有限公司 Image processing method, application method, device and equipment thereof
CN112198963A (en) * 2020-10-19 2021-01-08 深圳市太和世纪文化创意有限公司 Immersive tunnel type multimedia interactive display method, equipment and storage medium
WO2022188733A1 (en) * 2021-03-08 2022-09-15 Hangzhou Taro Positioning Technology Co., Ltd. Scenario triggering and interaction based on target positioning and identification

Also Published As

Publication number Publication date
CN103543827B (en) 2016-04-06

Similar Documents

Publication Publication Date Title
CN103543827B (en) Implementation method of an immersive outdoor activity interaction platform based on a single camera
CN104219584B (en) Panoramic video interaction method and system based on augmented reality
CN204465706U (en) Terminal device
US11798224B2 (en) Generation apparatus, system and method for generating virtual viewpoint image
KR101841668B1 (en) Apparatus and method for producing 3D model
CN102221887B (en) Interactive projection system and method
CN104599243B (en) Virtual-real fusion method for multiple video streams and a three-dimensional scene
US10701344B2 (en) Information processing device, information processing system, control method of an information processing device, and parameter setting method
CN103226830A (en) Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN106097435A (en) Augmented reality camera system and method
CN105429989A (en) Simulated tourism method and system for virtual reality equipment
CN102801994B (en) Physical image information fusion device and method
CN110610547A (en) Cabin training method and system based on virtual reality and storage medium
CN107154197A (en) Immersion flight simulator
CN106780629A (en) Three-dimensional panorama data acquisition and modeling method
CN107256082B (en) Projectile trajectory measurement and calculation system based on network integration and binocular vision technology
CN104427230A (en) Augmented reality method and augmented reality system
CN107134194A (en) Immersion vehicle simulator
CN105183161A (en) Synchronized moving method for user in real environment and virtual environment
CN108572731A (en) Motion capture data representation method and device based on multiple Kinect sensors and UE4
CN109961520A (en) VR/MR classroom based on third-person perspective technology and construction method thereof
CN103489219A (en) 3D hair style effect simulation system based on depth image analysis
CN106791629A (en) Architectural design system based on AR virtual reality technology
CN107957772A (en) Method for capturing VR images in a real scene and realizing VR experience
Yu et al. Intelligent visual-IoT-enabled real-time 3D visualization for autonomous crowd management

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
CB03 Change of inventor or designer information

Inventor after: Xu Jian

Inventor before: Li Jing

COR Change of bibliographic data
TA01 Transfer of patent application right

Effective date of registration: 20160303

Address after: Nanjing College of Information Technology, No. 99 Wenlan Road, Nanjing City, Jiangsu Province, 210023

Applicant after: Xu Jian

Address before: Songshan Road, Jianye District of Nanjing City, Jiangsu province 210000 No. 129 building 7 1106 Wanda Washington Dongyuan

Applicant before: Nanjing Rongtu Chuangsi Information Technology Co., Ltd.

C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190513

Address after: 214000 China Sensor Network International Innovation Park E2-417, 200 Linghu Avenue, Taihu International Science Park, Xinwu District, Wuxi City, Jiangsu Province

Patentee after: Wuxi Rongyu Information Technology Co., Ltd.

Address before: 210023 No. 99 Wenlan Road, Nanjing City, Jiangsu Province, Nanjing Institute of Information Technology

Patentee before: Xu Jian

TR01 Transfer of patent right

Effective date of registration: 20210407

Address after: Room 2003-104, building 4, No. 209, Zhuyuan Road, high tech Zone, Suzhou City, Jiangsu Province, 215011

Patentee after: Suzhou miaomi Intelligent Technology Co.,Ltd.

Address before: 214000 China Sensor Network International Innovation Park E2-417, 200 Linghu Avenue, Taihu International Science Park, Xinwu District, Wuxi City, Jiangsu Province

Patentee before: Wuxi Rongyu Information Technology Co., Ltd.