CN103329146A - Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual


Info

Publication number
CN103329146A
CN103329146A · CN2012800061790A · CN201280006179A
Authority
CN
China
Prior art keywords
individual
display
content
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012800061790A
Other languages
Chinese (zh)
Other versions
CN103329146B (en)
Inventor
Philip Eckhoff
William Gates
Peter L. Hagelstein
Roderick A. Hyde
Muriel Y. Ishikawa
Jordin T. Kare
Robert Langer
Eric C. Leuthardt
Erez Lieberman
Nathan P. Myhrvold
Michael Schnall-Levin
Clarence T. Tegreene
Lowell L. Wood, Jr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gearbox LLC
Original Assignee
Searete LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/931,145 external-priority patent/US20110211738A1/en
Priority claimed from US12/931,156 external-priority patent/US20110211739A1/en
Priority claimed from US12/931,157 external-priority patent/US20110206245A1/en
Application filed by Searete LLC filed Critical Searete LLC
Publication of CN103329146A publication Critical patent/CN103329146A/en
Application granted granted Critical
Publication of CN103329146B publication Critical patent/CN103329146B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Abstract

A method may include automatically remotely identifying at least one characteristic of an individual via facial recognition; and providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual. A system may include means for automatically remotely identifying at least one characteristic of an individual via facial recognition; and means for providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual.
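The two steps recited in the abstract — remotely identifying a characteristic via facial recognition, then providing a display whose content is at least partially based on that characteristic — can be sketched in code. This is an illustrative sketch only, not the patent's implementation: the names (`Characteristic`, `estimate_characteristics`, `pick_content`, `provide_display`) and the characteristic-to-content mapping are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Characteristic:
    """A single identified attribute of an individual, e.g. an age group."""
    name: str
    value: str

def estimate_characteristics(face_image) -> list[Characteristic]:
    """Stand-in for a remote facial-recognition pipeline.

    A real system would run a trained recognition model on the captured
    image; here we return a fixed, invented result for illustration.
    """
    return [Characteristic("age_group", "adult")]

def pick_content(characteristics: list[Characteristic]) -> str:
    """Choose display content at least partially based on the
    identified characteristics (mapping is invented)."""
    by_name = {c.name: c.value for c in characteristics}
    if by_name.get("age_group") == "adult":
        return "adult-oriented advertisement"
    return "general-audience content"

def provide_display(face_image) -> str:
    """Identify characteristics, then select the content to display."""
    return pick_content(estimate_characteristics(face_image))
```

The point of the sketch is only the data flow: the display step consumes whatever the identification step produces, so the content is a function of the identified characteristics rather than being fixed in advance.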

Description

Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual
The inventors:
Philip Eckhoff
William Gates
Peter L. Hagelstein
Roderick A. Hyde
Muriel Y. Ishikawa
Jordin T. Kare
Robert Langer
Eric C. Leuthardt
Erez Lieberman
Nathan P. Myhrvold
Michael Schnall-Levin
Clarence T. Tegreene
Lowell L. Wood, Jr.
Cross-Reference to Related Applications
The present application is related to and claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the "Related Applications") (e.g., claims earliest available priority dates for other than provisional patent applications, or claims benefits under 35 USC § 119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Related Application(s)).
Related application:
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/931st, a part continuation application of No. 157 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on January 25th, 2011, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 179 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 194 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 184 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 188 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 185 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 186 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 183 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
Purpose for the requirement outside the USPTO rules, the application consisted of name be called " utilize facial identification to identify individual characteristic and provide a demonstration (IDENTIFYING A CHARACTERISTIC OF AN INDIVIDUAL UTILIZING FACIAL RECOGNITION AND PROVIDING A DISPLAY FOR THE INDIVIDUAL) for this individuality " the 12/655th, a part continuation application of No. 187 U.S. Patent applications, the invention people of this application is Philips's Eckhof; William Gates; The special L Hagelstein of skin; Rhoderick A Hai De; Mu Lier Y Yi Shikewa; Qiao Ding T card is strangled; Robert's Lange; Eric C Lu Tade; The Ai Leizili Berman; Interior gloomy P Mi Fode; Michael Shi Nuo-Lai Wen; The special Green of Clarens T; And little Lowell L Wood, the applying date is on Dec 23rd, 2009, and this application is in current common unsettled state or following portion application, that is: and a current common pending application of this application has the right to enjoy the interests of its date of application.
The United States Patent Office (USPTO) has published a notice to the effect that the USPTO's computer programs require that patent applicants both reference a serial number and indicate whether an application is a continuation or continuation-in-part. Stephen G. Kunin, "Benefit of Prior-Filed Application," USPTO Official Gazette, March 18, 2003, available at http://www.uspto.gov/web/offices/com/sol/og/2003/week11/patbene.htm. The present Applicant Entity (hereinafter "Applicant") has provided above a specific reference to the application(s) from which priority is being claimed as recited by statute. Applicant understands that the statute is unambiguous in its specific reference language and does not require either a serial number or any characterization, such as "continuation" or "continuation-in-part," for claiming priority to U.S. patent applications. Notwithstanding the foregoing, Applicant understands that the USPTO's computer programs have certain data entry requirements, and hence Applicant is designating the present application's relationship to its parent application(s) as set forth above, but expressly points out that such designation(s) are not to be construed in any way as any type of commentary and/or admission as to whether or not the present application contains any new matter in addition to the matter of its parent application(s).
All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.
Summary
In one aspect, a method includes but is not limited to: automatically remotely identifying at least one characteristic of an individual via facial recognition; providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual; and selecting the content for the individual at least partially based on identifying an object associated with a gaze orientation of the individual. In addition to the foregoing, other method aspects are described in the claims, drawings, and text forming a part of the present disclosure.
In one or more various aspects, related systems include but are not limited to circuitry and/or programming for effecting the herein-referenced method aspects; the circuitry and/or programming can be virtually any combination of hardware, software, and/or firmware configured to effect the herein-referenced method aspects, depending upon the design choices of the system designer.
In one aspect, a system includes but is not limited to: means for automatically remotely identifying at least one characteristic of an individual via facial recognition; means for providing a display for the individual, the display having a content at least partially based on the identified at least one characteristic of the individual; and means for selecting the content for the individual at least partially based on identifying an object associated with a gaze orientation of the individual. In addition to the foregoing, other system aspects are described in the claims, drawings, and text forming a part of the present disclosure.
In addition to the foregoing, various other method and/or system and/or program product aspects are set forth and described in the teachings such as the text (e.g., the claims and/or detailed description) and/or drawings of the present disclosure.
The foregoing is a summary and thus may contain simplifications, generalizations, inclusions, and/or omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, features, and advantages of the devices and/or processes and/or other subject matter described herein will become apparent in the teachings set forth herein.
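The summary's third step selects content based on an object associated with the individual's gaze orientation. A minimal sketch of that idea, under invented assumptions: the scene is a flat table of objects at known 2-D bearings, gaze orientation is a single angle, and the object-to-content table is hypothetical; none of these names or numbers come from the patent.

```python
import math

# Hypothetical scene: objects with 2-D positions relative to the camera,
# and invented content keyed by object.
SCENE = {"shoe_shelf": (1.0, 0.0), "watch_case": (0.0, 1.0)}
CONTENT = {"shoe_shelf": "running-shoe promotion", "watch_case": "watch promotion"}

def object_at_gaze(gaze_angle_rad: float, tolerance_rad: float = 0.3):
    """Return the scene object whose bearing is closest to the gaze
    orientation, provided it falls within the angular tolerance."""
    best, best_err = None, tolerance_rad
    for name, (x, y) in SCENE.items():
        err = abs(math.atan2(y, x) - gaze_angle_rad)
        if err <= best_err:
            best, best_err = name, err
    return best

def select_content(gaze_angle_rad: float) -> str:
    """Select display content from the object the individual is gazing at,
    falling back to a default when no object matches."""
    obj = object_at_gaze(gaze_angle_rad)
    return CONTENT.get(obj, "default content")
```

For example, a gaze bearing of 0 radians lines up with the hypothetical shoe shelf and yields its promotion, while a bearing matching no object within tolerance yields the default content.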
Brief Description of the Drawings
Fig. 1 is a schematic of a display.
Fig. 2 is a schematic of one or more displays.
Fig. 3 is a schematic of an action of an individual.
Fig. 4 is a schematic of a display.
Fig. 5 is a schematic of one or more displays.
Fig. 6 is a schematic of one or more displays.
Fig. 7 is a schematic of one or more displays.
Fig. 8 is a schematic of one or more displays.
Fig. 9 is a schematic of one or more displays.
Figure 10 is a schematic of a display.
Figure 11 is a schematic of one or more display modules.
Figure 12 is a schematic of a facial recognition module coupled with one or more display modules.
Figure 13 is a schematic of a display and a light source.
Figure 14 is a schematic of a visibility characteristic of a display.
Figure 15 is a schematic of a demographic of an individual.
Figure 16 illustrates an operational flow representing example operations related to utilizing facial recognition to automatically remotely identify one or more characteristics of an individual, providing a display for the individual having a content at least partially based on the identified one or more characteristics of the individual, and identifying an aware line of sight between the display and the individual.
Figure 17 illustrates an alternative embodiment of the operational flow of Figure 16.
Figure 18 illustrates an alternative embodiment of the operational flow of Figure 16.
Figure 19 illustrates an alternative embodiment of the operational flow of Figure 16.
Figure 20 illustrates an alternative embodiment of the operational flow of Figure 16.
Figure 21 illustrates an alternative embodiment of the operational flow of Figure 16.
Figure 22 illustrates an alternative embodiment of the operational flow of Figure 16.
Figure 23 illustrates an alternative embodiment of the operational flow of Figure 16.
Figures 24 through 29 each illustrate an operational flow representing example operations related to utilizing facial recognition to automatically remotely identify one or more characteristics of an individual, providing a display for the individual having a content at least partially based on the one or more characteristics of the individual, identifying an aware line of sight between the display and the individual, and ceasing to provide the display for the individual.
Figures 30 and 31 each illustrate an operational flow representing example operations related to utilizing facial recognition to automatically remotely identify one or more characteristics of an individual, providing a display for the individual having a content at least partially based on the one or more characteristics of the individual, identifying an aware line of sight between the display and the individual, and selecting the content for the display.
Figure 32 illustrates an alternative embodiment of the operational flow of Figure 31.
Figure 33 illustrates an operational flow representing example operations related to utilizing facial recognition to automatically remotely identify one or more characteristics of an individual, providing a display for the individual having a content at least partially based on the one or more characteristics of the individual, and ceasing to provide at least one of the display or the content for the individual.
Figure 34 illustrates an alternative embodiment of the operational flow of Figure 33.
Figure 35 illustrates an alternative embodiment of the operational flow of Figure 33.
Figure 36 illustrates an alternative embodiment of the operational flow of Figure 33.
Figure 37 illustrates an alternative embodiment of the operational flow of Figure 33.
Figure 38 illustrates an alternative embodiment of the operational flow of Figure 33.
Figure 39 illustrates an alternative embodiment of the operational flow of Figure 33.
Figures 40 and 41 each illustrate an operational flow representing example operations related to utilizing facial recognition to automatically remotely identify one or more characteristics of an individual, providing a display for the individual having a content at least partially based on the one or more characteristics of the individual, ceasing to provide at least one of the display or the content for the individual, and identifying an aware line of sight between the display and the individual.
Figure 42 illustrates an alternative embodiment of the operational flow of Figure 33.
Figures 43 through 48 each illustrate an operational flow representing example operations related to utilizing facial recognition to automatically remotely identify one or more characteristics of an individual, providing a display for the individual having a content at least partially based on the one or more characteristics of the individual, and ceasing to provide at least one of the display or the content for the individual.
Figure 49 illustrates an alternative embodiment of the operational flow of Figure 48.
Figure 50 illustrates an operational flow representing example operations related to utilizing facial recognition to automatically remotely identify one or more characteristics of an individual, providing a display for the individual having a content at least partially based on the one or more characteristics of the individual, and selecting the content for the individual at least partially based on identifying an object associated with a gaze orientation of the individual.
Figure 51 illustrates an alternative embodiment of the operational flow of Figure 50.
Figure 52 illustrates an alternative embodiment of the operational flow of Figure 50.
Figure 53 illustrates an alternative embodiment of the operational flow of Figure 50.
Figure 54 illustrates an alternative embodiment of the operational flow of Figure 50.
Figure 55 illustrates an alternative embodiment of the operational flow of Figure 50.
Figure 56 illustrates an alternative embodiment of the operational flow of Figure 50.
Figure 57 illustrates an operations flows, this operations flows represent with utilize facial identification come automatically remotely to identify body one by one one or more characteristics, for this individuality provide have at least in part based on a demonstration of the content of these one or more characteristics of this individuality, at least in part based on identify with this individuality stare a object that orientation is associated be this content of this individual choice and be identified in this demonstration and this individuality between know the example operation that sight line is relevant.
Figure 58 illustrates an operational flow representing example operations related to automatically and remotely identifying one or more characteristics of an individual utilizing facial recognition, providing a display for the individual having content based at least in part on the one or more characteristics of the individual, selecting the content for the individual based at least in part on identifying an object associated with a gaze direction of the individual, and ceasing to provide the display for the individual.
Figure 59 illustrates an operational flow representing example operations related to automatically and remotely identifying one or more characteristics of an individual utilizing facial recognition, providing a display for the individual having content based at least in part on the one or more characteristics of the individual, selecting the content for the individual based at least in part on identifying an object associated with a gaze direction of the individual, and ceasing to provide the display for the individual.
Figure 60 illustrates an operational flow representing example operations related to automatically and remotely identifying one or more characteristics of an individual utilizing facial recognition, providing a display for the individual having content based at least in part on the one or more characteristics of the individual, selecting the content for the individual based at least in part on identifying an object associated with a gaze direction of the individual, and ceasing to provide the display for the individual.
Figure 61 illustrates an operational flow representing example operations related to automatically and remotely identifying one or more characteristics of an individual utilizing facial recognition, providing a display for the individual having content based at least in part on the one or more characteristics of the individual, selecting the content for the individual based at least in part on identifying an object associated with a gaze direction of the individual, and ceasing to provide the display for the individual.
Figure 62 illustrates an operational flow representing example operations related to automatically and remotely identifying one or more characteristics of an individual utilizing facial recognition, providing a display for the individual having content based at least in part on the one or more characteristics of the individual, selecting the content for the individual based at least in part on identifying an object associated with a gaze direction of the individual, and ceasing to provide the display for the individual.
Figure 63 illustrates an operational flow representing example operations related to automatically and remotely identifying one or more characteristics of an individual utilizing facial recognition, providing a display for the individual having content based at least in part on the one or more characteristics of the individual, selecting the content for the individual based at least in part on identifying an object associated with a gaze direction of the individual, and ceasing to provide the display for the individual.
Figure 64 illustrates an operational flow representing example operations related to automatically and remotely identifying one or more characteristics of a first individual utilizing facial recognition, providing a display for the first individual having content based at least in part on the one or more characteristics of the first individual, selecting the content for the first individual based at least in part on identifying an object associated with a gaze direction of the first individual, and selecting the content for the first individual based at least in part on at least one characteristic of a second individual.
Figure 65 illustrates an alternative embodiment of the operational flow of Figure 64.
Detailed Description
In the following detailed description, reference is made to the accompanying drawings, which form a part of the description. In the drawings, similar symbols typically identify similar components, unless context indicates otherwise. The illustrative embodiments described in the detailed description, the drawings, and the claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.
Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware, software, and/or firmware implementations of aspects of systems; the use of hardware, software, and/or firmware is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost versus efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which the processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the others, in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically oriented hardware, software, and/or firmware.
In some implementations described herein, logic and similar implementations may include software or other control structures. Electronic circuitry, for example, may have one or more paths of electrical current constructed and arranged to implement various functions as described herein. In some implementations, one or more media may be configured to bear a device-detectable implementation when such media hold or transmit device-detectable instructions operable to perform as described herein. In some variants, for example, implementations may include an update or modification of existing software or firmware, or of gate arrays or programmable hardware, such as by performing a reception or a transmission of one or more instructions in relation to one or more operations described herein. Alternatively or additionally, in some variants, an implementation may include special-purpose hardware, software, or firmware components, and/or general-purpose components executing or otherwise invoking special-purpose components. Specifications or other implementations may be transmitted by one or more instances of tangible transmission media as described herein, optionally by packet transmission or otherwise by passing through distributed media at various times.
Alternatively or additionally, implementations may include executing a special-purpose instruction sequence or invoking circuitry for enabling, triggering, coordinating, requesting, or otherwise causing one or more occurrences of virtually any functional operation described herein. In some variants, operational or other logical descriptions herein may be expressed as source code and compiled or otherwise invoked as an executable instruction sequence. In some contexts, for example, implementations may be provided, in whole or in part, by source code (e.g., C++) or other code sequences. In other implementations, source or other code, using commercially available techniques and/or techniques in the art, may be compiled, implemented, or translated into a high-level descriptor language (e.g., initially implementing the described technologies in the C or C++ programming language and thereafter converting the programming language implementation into a logic-synthesizable language implementation, a hardware description language implementation, a hardware design simulation implementation, and/or other such similar modes of expression). For example, some or all of a logical expression (e.g., a computer programming language implementation) may be manifested as a Verilog-type hardware description (e.g., via a hardware description language (HDL) and/or a Very High Speed Integrated Circuit Hardware Descriptor Language (VHDL)) or other circuitry model that may then be used to create a physical implementation having hardware (e.g., an application-specific integrated circuit). Those skilled in the art will recognize how to obtain, configure, and optimize suitable transmission or computational elements, material supplies, actuators, or other structures in light of these teachings.
Referring now to Figures 1 and 12, a facial recognition module 50 may be utilized to automatically and remotely identify one or more characteristics of a first individual 52. In one embodiment, the facial recognition module 50 may include an image capture device 120, such as a digital camera, a video camera, or the like, for capturing an image of the first individual 52. The facial recognition module 50 may also include hardware, software, firmware, or the like for implementing one or more facial recognition algorithms to identify the first individual 52. For example, one or more facial characteristics of the first individual 52 may be stored in a memory 122 accessible by the facial recognition module 50 (the memory may comprise a database or the like), and the facial recognition module 50 may utilize the data stored in the database (e.g., facial characteristic data) to identify the first individual 52. In various embodiments, identifying the first individual 52 may comprise determining the identity of the first individual 52. For instance, the identity of the first individual 52 may be determined by comparing facial characteristics of the first individual 52 stored in the memory 122 with one or more facial characteristics imaged by the image capture device 120. In various embodiments, the memory 122 may be connected (e.g., via a bus 126) to a processor 124 for implementing the one or more facial recognition algorithms to identify the first individual 52. The facial recognition algorithms may be stored in the memory 122. Additionally, data (e.g., facial characteristic data) may be provided to the facial recognition module 50 via a data transfer module 138. For example, the data transfer module 138 may be connected to the facial recognition module 50. In various embodiments, the data transfer module 138 may include one or more of a beacon 140, a mobile communications device 142, an RFID tag 144, or the like. Alternatively, the facial recognition module 50 may be remotely connected via a network 130 (e.g., the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a private network, or the like) to an off-site processing system 128 or the like. The off-site processing system 128 may implement one or more facial recognition algorithms to identify the first individual 52 via the network 130 and transmit the result to the facial recognition module 50.
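The database comparison described above can be sketched as a nearest-neighbor lookup over facial feature vectors. This is only an illustrative sketch under stated assumptions: the specific features (normalized inter-eye distance and nose width), the function names, and the matching threshold are inventions for illustration, not details from the patent.

```python
import math

def extract_features(face_landmarks):
    """Reduce six 2D facial landmarks to a simple feature vector.

    Assumed landmark order: left eye, right eye, left/right edge of
    the nose, left/right edge of the face. Features are the inter-eye
    distance and nose width, each normalized by face width.
    """
    left_eye, right_eye, nose_l, nose_r, face_l, face_r = face_landmarks
    face_width = abs(face_r[0] - face_l[0])
    eye_dist = math.dist(left_eye, right_eye) / face_width
    nose_width = abs(nose_r[0] - nose_l[0]) / face_width
    return (eye_dist, nose_width)

def identify(features, database, threshold=0.05):
    """Return the identity whose stored features are closest to the
    captured features, or None if no stored entry is close enough."""
    best_id, best_dist = None, threshold
    for identity, stored in database.items():
        d = math.dist(features, stored)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id
```

A memory 122 holding facial characteristic data would here correspond to the `database` dictionary mapping identities to stored feature vectors.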
A first display module 54 may be utilized to provide a first display 56 for the first individual 52, where the first display 56 has content based at least in part on one or more identified characteristics of the first individual 52. The first display module 54 may provide a first display 56 comprising a visual stimulus, such as an image or a sequence of images (e.g., video) perceivable by the first individual 52. In one embodiment, the first display module 54 may include a video projector, a slide projector, a motion picture projector, or another device for projecting moving or still images perceivable by an individual. The first display module 54 may provide a first display 56 comprising an audio stimulus, such as a sound or a sequence of sounds (e.g., a sequence of spoken words) audible to the first individual 52. In one embodiment, the first display module 54 may include a speaker, a megaphone, a focused-sound projector, or another device for projecting audio to an individual. For instance, a focused-sound projector may be utilized to project a narrow beam of sound at the first individual 52, broadcasting audio to the first individual 52 while remaining at least substantially inaudible to others. The first display module 54 may provide a first display 56 comprising an olfactory or tactile stimulus, such as a stream of air to be smelled or felt by the first individual 52. For instance, a fan may be utilized to direct a scented stream of air toward the first individual 52. In various embodiments, the first display module 54 may provide a first display 56 comprising any combination of one or more images, sounds, or sensations for the first individual 52.
In various embodiments, the content of the first display 56 may include advertising, entertainment, or information. The content of the first display 56 may be directed uniquely at the first individual 52. Alternatively, the content of the first display 56 may be directed at the first individual 52 based on characteristics of one or more other individuals who share a relationship of some type with the first individual 52 (e.g., a spatial relationship) or who share a connection with the first individual 52 (e.g., a social connection). For instance, the content of the first display 56 for the first individual 52 may be selected based at least in part on a characteristic of a second individual 80 (e.g., a facial characteristic, a vocal characteristic, or an identity). In various embodiments, the second individual 80 may occupy a general area proximate to the first individual 52. Further, the second individual 80 may be walking together with the first individual 52. For instance, the second individual 80 may be connected to the first individual 52 via a social connection, e.g., in the role of an acquaintance, a friend, a spouse, or the like. In this example, the identification of a certain characteristic of the second individual 80 (e.g., gender) may be utilized when selecting the content of the first display 56 for the first individual 52. In various embodiments, the display may include information about a product that the first individual 52 may intend to purchase for the second individual 80, such as clothing.
Referring now to Figures 1 and 14, the first display module 54 may be utilized to provide a first display 56 for the first individual 52 based at least in part on one or more identified visibility characteristics of the first display 56 for the first individual 52. In various embodiments, the visibility characteristics of the first display 56 for the first individual 52 may include a viewing angle 42 (i.e., the angle between the first individual 52 and a line extending away from the first display 56 in a direction substantially perpendicular to the display), a range 44 (e.g., the distance of the first individual 52 from the first display 56), an angular size 46 (e.g., the perceived size of the first display 56 based on the first individual's angle with the display), or a perceived resolution 48 of the display. Further, the visibility characteristics of the first display 56 for the first individual 52 may be based on one or more of the identity or the demographics of the first individual 52. The first display module 54 may record the duration for which the first individual 52 can see the first display 56. The visibility of the first display 56 to the first individual 52 may be determined based at least in part on identifying a clear line of sight between the first individual 52 and the display (i.e., identifying a substantially unobstructed visual pathway between the first individual 52 and the first display 56) or a facial orientation of the first individual 52 relative to the first display 56 (e.g., a facial orientation generally directed toward the display). In various embodiments, the recorded duration for which the first individual 52 can see the first display 56 may be utilized to assign a monetary value to the provision of the first display 56 perceivable by the first individual 52.
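The viewing angle, range, and angular size described above are simple geometric quantities. The 2D sketch below computes them from assumed positions; the function and parameter names are illustrative, not from the patent.

```python
import math

def visibility_characteristics(individual_pos, display_pos, display_normal, display_height):
    """Compute viewing angle 42, range 44, and angular size 46 for an
    individual and a flat display in a 2D plan view (illustrative only)."""
    dx = individual_pos[0] - display_pos[0]
    dy = individual_pos[1] - display_pos[1]
    rng = math.hypot(dx, dy)  # range 44: distance to the display
    # viewing angle 42: angle between the display's perpendicular line
    # and the line from the display to the individual
    dot = dx * display_normal[0] + dy * display_normal[1]
    viewing_angle = math.degrees(math.acos(dot / rng))
    # angular size 46: apparent size of the display from the individual
    angular_size = math.degrees(2 * math.atan(display_height / (2 * rng)))
    return viewing_angle, rng, angular_size
```

An individual standing directly in front of the display has a viewing angle of zero, and the angular size shrinks as the range grows, matching the perceived-size notion in the text.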
Referring to Figure 13, the first display module 54 may utilize various techniques to discern a clear line of sight for the first individual 52. For instance, the facial recognition module 50 may identify one or more characteristics of the first individual 52 from a position proximate to the first display 56. In various embodiments, a light source 26 may be directed toward the first individual 52, and a reflection of light from the light source 26 may be detected from a position proximate to the first display 56. Thus, the position of one or more of the first display 56, the first individual 52, a proximate second individual 80, or a proximate object 26 may be utilized to predict one or more line-of-sight characteristics.
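Predicting a line of sight from the positions of the display, the individual, and proximate individuals or objects can be approximated geometrically. The sketch below tests the segment between individual and display against circular obstacles in a 2D plan view; this is an assumed simplification, not the light-reflection technique the text describes.

```python
def clear_line_of_sight(individual, display, obstacles):
    """Return True when the segment from the individual to the display
    misses every obstacle, modeled as (x, y, radius) circles."""
    ix, iy = individual
    dx_, dy_ = display
    for (ox, oy, r) in obstacles:
        # closest point on the individual->display segment to the obstacle
        vx, vy = dx_ - ix, dy_ - iy
        wx, wy = ox - ix, oy - iy
        t = max(0.0, min(1.0, (wx * vx + wy * vy) / (vx * vx + vy * vy)))
        px, py = ix + t * vx, iy + t * vy
        if (px - ox) ** 2 + (py - oy) ** 2 < r * r:
            return False  # obstacle blocks the visual pathway
    return True
```

A proximate second individual 80 or object would appear here as one more obstacle circle.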
Referring to Figures 1 and 15, the first display module 54 may provide content based at least in part on demographics 28 of the first individual 52. For instance, the demographics 28 of the first individual may include one or more of an approximate age 30, a race 32, a face shape 34, a facial size 36, or a gender 40. In one embodiment, the first display module 54 may provide content based at least in part on the identity of the first individual 52. Further, the one or more facial recognition algorithms may utilize the orientation of the face of the first individual 52 relative to the first display 56 to identify the first individual 52. The one or more facial recognition algorithms may also utilize the orientation of an eye of the first individual 52 relative to the first display 56 to identify the first individual 52.
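Selecting content from identified demographics can be sketched as scoring catalog entries against the individual's attributes. The scoring weights, keys, and catalog shape below are illustrative assumptions rather than anything the patent specifies.

```python
def select_content(demographics, catalog):
    """Pick the catalog entry whose target attributes best match the
    individual's identified demographics (illustrative scoring only)."""
    def score(entry):
        target = entry["target"]
        s = 0
        if target.get("gender") == demographics.get("gender"):
            s += 2  # gender match weighted above age match
        lo, hi = target.get("age_range", (0, 200))
        if lo <= demographics.get("approximate_age", -1) <= hi:
            s += 1
        return s
    return max(catalog, key=score)["content"]
```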
The first display module 54 may cease providing the first display 56, or the content of the first display 56, to the first individual 52 based on one or more of a change in the environment of the first individual 52 or a change in the state of the first individual 52 (e.g., when the first individual 52 moves from a first zone 58, from which the first individual 52 can see the first display 56, to a second zone 60, from which the first individual 52 cannot see the first display 56). Further, the first display module 54 may provide the first display 56, or the content of the first display 56, to the first individual 52 based on one or more of a change in the environment or a change in the state of the first individual 52. The cessation of the provision of the first display 56 for the first individual 52 may be recorded.
A change in the environment of an individual may include the occurrence of an event (e.g., the individual is paged or receives a cellular telephone call) or a change in the state of some inanimate object (e.g., a sign that previously faced the individual now turns away from the individual). Further, a change in the environment of an individual may include a change in one or more of movement, color, posture, relationship, or time. A change in the state of an individual may include a change in a relationship between the individual and one or more of an inanimate article, an animate article, a person, a group, or a collection of articles. In various embodiments, a change in the state of an individual may include a change in the presence or absence of one or more of a second individual 80 or a third individual 86 proximate to the first individual 52. A change in the state of an individual may include the position of the second individual. In one embodiment, a change in the state of an individual may include identifying the absence of a clear line of sight between the first display 56 and the first individual 52. Further, a change in the state of an individual may include an action of the individual (e.g., moving from the first zone 58 to the second zone 60). It will be appreciated that a display module may cease providing a display or content to an individual based on a change in the environment of the individual, a change in the state of the individual, or a combination of the two. It will also be appreciated that a display module may provide a display or content to an individual based on a change in the environment of the individual, a change in the state of the individual, or a combination of the two.
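The zone-based start/stop behavior described above can be sketched as a small state machine that tracks whether the individual is in a zone with a view of the display. Rectangular zones and the class shape are illustrative assumptions.

```python
class DisplayController:
    """Minimal sketch: provide the display when the individual enters a
    zone from which the display is visible (zone 58) and cease providing
    it on moving to a zone without a view (zone 60)."""

    def __init__(self, visible_zone):
        self.visible_zone = visible_zone  # (xmin, ymin, xmax, ymax)
        self.showing = False

    def update(self, individual_pos):
        xmin, ymin, xmax, ymax = self.visible_zone
        in_view = xmin <= individual_pos[0] <= xmax and ymin <= individual_pos[1] <= ymax
        if in_view and not self.showing:
            self.showing = True   # provide the display
        elif not in_view and self.showing:
            self.showing = False  # cease providing the display
        return self.showing
```

Other state changes named in the text (a phone call, a lost line of sight) would feed the same toggle in place of the zone test.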
Referring now to Figures 1 and 14, the first display module 54 may be utilized to cease providing the first display 56 for the first individual 52 based at least in part on one or more identified visibility characteristics 40 of the first display 56 for the first individual 52. The visibility characteristics 40 of the first display 56 for the first individual 52 may include the viewing angle 42, the range 44, the angular size 46, or the perceived resolution 48 of the display. Further, the visibility characteristics of the first display 56 for the first individual 52 may be based on one or more of the identity or the demographics of the first individual 52.
Referring now to Figures 2 and 3, content may be selected for the first individual 52 based on an action of the individual 62. The action of the individual 62 may include one or more of a gaze direction 64, a gesture 66, an audible sound 68, a spoken sound 70, a motion 72 of at least a portion of the body, or an orientation 74 of at least a portion of the body. In one embodiment, the gaze direction 64 may include, for example, looking at an article without moving toward that article. In one embodiment, the gesture 66 may include a facial expression. In one embodiment, the orientation 74 of at least a portion of the body may include, but is not limited to, the individual's posture or attitude, the individual's angle with a display, or the individual's distance from a display. The first display 56 may be projected onto a hanging screen and may have first content (e.g., an advertisement for goods sold at a kiosk 76) while the first individual 52 stands near the kiosk 76. When the first individual 52 begins to move toward a storefront 78, the first display 56 may be projected onto a wall of the storefront 78 and may have different content (e.g., an advertisement for goods sold inside).
Referring now to Figure 4, the first display module 54 may cease providing the first display 56 to the first individual 52 based on automatically and remotely identifying one or more characteristics of a second individual 80. The facial recognition module 50 may be utilized to automatically and remotely identify one or more characteristics of the second individual 80. The second individual 80 may be an individual of higher priority than the first individual 52 (according to any user-specified criteria), and the first display module 54 may be utilized to provide the first display 56 to the second individual 80, where the first display 56 has content based at least in part on the one or more identified characteristics of the second individual 80. In various embodiments, criteria such as approximate age, race, demographics, viewing angle, or range may be utilized to identify the second individual 80 as a higher-priority individual (e.g., relative to the first individual 52). For instance, the second individual 80 may more closely match an approximate age, race, or demographic targeting criterion for advertising content provided via the first display module 54. Alternatively, the second individual 80 may be at a more desirable viewing angle or within a more desirable range of the first display module 54, allowing content to be presented more effectively to the second individual 80 utilizing the first display module 54. In one embodiment, a controller 132 may be connected to the facial recognition module 50 and the first display module 54. When the facial recognition module 50 identifies the second individual 80, the controller 132 may order the first display module 54 to cease providing the first display 56 to the first individual 52. Further, the controller 132 may order the display module 54 to provide the first display 56 to the second individual 80.
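The higher-priority selection described above amounts to scoring each identified individual against the advertisement's user-specified criteria and switching the display to the best match. The particular scoring formula below (age closeness plus an in-range viewing-angle bonus) is an illustrative assumption.

```python
def priority(individual, ad_target):
    """Score how closely an individual matches an advertisement's
    user-specified criteria (illustrative weighting only)."""
    s = 0.0
    # penalize distance from the target approximate age
    s -= abs(individual["approximate_age"] - ad_target["age"]) / 10.0
    # bonus for a sufficiently favorable viewing angle
    if individual["viewing_angle"] <= ad_target["max_viewing_angle"]:
        s += 1.0
    return s

def choose_viewer(individuals, ad_target):
    """Return the higher-priority individual to receive the display."""
    return max(individuals, key=lambda ind: priority(ind, ad_target))
```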
Referring now to Figures 5 and 6, the facial recognition module 50 may be utilized to automatically and remotely identify one or more characteristics of the first individual 52. The first display module 54 may be utilized to provide a first display 56 for the first individual 52, where the first display 56 has content based at least in part on the one or more identified characteristics of the first individual 52. Further, the facial recognition module 50 may be utilized to automatically and remotely identify one or more characteristics of a second individual 80. A second display module 82 may be utilized to provide a second display 84 for the second individual 80, where the second display 84 has content based at least in part on the one or more identified characteristics of the second individual 80. The first display module 54 may cease providing the first display 56 to the first individual 52 based on an action of the first individual 52 (e.g., when the first individual 52 moves away from a storefront 78 from which the first individual 52 can see the first display 56). The second display module 82 may cease providing the second display 84 to the second individual 80 based on an action of the second individual 80 (e.g., when the second individual 80 moves away from a storefront 78 from which the second individual 80 can see the second display 84).
Referring now to Figure 7, the facial recognition module 50 may be utilized to automatically and remotely identify one or more characteristics of a third individual 86. The content for the first individual 52 or the content for the second individual 80 may be selected based at least in part on the third individual 86.
Referring now to Figure 8, the first display module 54 may cease providing the first display 56 to the first individual 52 based on an action of the first individual 52. The facial recognition module 50 may be utilized to identify the action of the first individual 52 (e.g., when the first individual 52 moves from a first zone, from which the first individual 52 can see the first display 56, to a second zone, from which the first individual 52 cannot see the first display 56). The first display module 54 may be utilized to provide a third display 88 for the first individual 52, where the third display 88 has content based at least in part on the one or more identified characteristics of the first individual 52. This content may be the same as or different from the content provided by the first display 56.
Referring now to Figure 11, the first display module 54 or the second display module 82 may include one or more of a fixed-direction display 90 or a redirectable display 92. Alternatively, the first display module 54 or the second display module 82 may include one or more of a multi-view display 94, an autostereoscopic display 96, or a three-dimensional display 146. In various embodiments, the three-dimensional display 146 may include a holographic display or one or more visible objects in an arrangement perceivable by the first individual 52. For instance, the display may include a holographic image of a coat. Alternatively, the display may include one or more coats on a rack that is rotated to give the first individual 52 a comprehensive view of the coats. It is contemplated herein that the three-dimensional display 146 may be specific to an individual (e.g., a first garment displayed for a first individual may be rotated out of view so that a second garment can be displayed for a second individual).
Further, the first display module 54 and the second display module 82 may include a shared component 98. The shared component 98 may include a multi-view display 94. In one embodiment, the multi-view display 94 may include one or more of a lenticular lens assembly, one or more polarizing filters, one or more LCD filters, or similar hardware for providing different images to the first individual 52 and the second individual 80. For instance, the first display 56 and the second display 84 may comprise alternating frames displayable by the multi-view display 94. The provision of the first display 56 to the first individual 52 may overlap in time with the provision of the second display 84 to the second individual 80 (e.g., a first frame 100 may be provided to the first individual 52 at time t=A while a second frame 102 is provided to the second individual 80 at substantially the same time t=A; similarly, a third frame 104 may be provided to the first individual 52 at time t=B while a fourth frame 106 is provided to the second individual 80 at substantially the same time t=B; and so on).
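The alternating-frame scheme above pairs one frame per viewer at each time step. The sketch below builds that schedule from two per-viewer frame streams; the data representation is an assumption for illustration.

```python
def interleave_frames(stream_a, stream_b):
    """Pair two per-viewer frame streams so that at each time step the
    multi-view display emits one frame for each individual (e.g. frames
    100 and 102 at t=A, frames 104 and 106 at t=B, and so on)."""
    schedule = []
    for t, (frame_a, frame_b) in enumerate(zip(stream_a, stream_b)):
        schedule.append((t, frame_a, frame_b))
    return schedule
```

In hardware, the lenticular lens or filter assembly routes each member of the pair to the corresponding viewer at substantially the same instant.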
Figure 16 illustrates an operational flow 1600 representing example operations related to automatically and remotely identifying one or more characteristics of an individual utilizing facial recognition, providing a display for the individual having content based at least in part on the one or more identified characteristics of the individual, and identifying a clear line of sight between the display and the individual. It should be understood that the indications of "start" or "end" in this and the following operational flowcharts should not be interpreted restrictively. Such indications are not deterministic but are provided as reference points. The processes or methods illustrated and described may be included in other processes or methods comprising other steps or features. Nothing herein implies that other operations cannot be performed before or after the operations depicted in the drawings. In Figure 16, and in the following figures that include various examples of operational flows, discussion and explanation will be provided with respect to the above-described examples of Figures 1 to 15 and/or with respect to other examples and contexts. It should be understood, however, that the operational flows may be executed in a number of other environments and contexts and/or in modified versions of Figures 1 to 15. Also, although the various operational flows are presented in the sequence(s) illustrated, it should be understood that the various operations may be performed in orders other than those illustrated, or may be performed concurrently.
After a start operation, the operational flow 1600 moves to an operation 1610. Operation 1610 depicts automatically and remotely identifying at least one characteristic of an individual via facial recognition. For instance, as shown in Figures 1 to 15, the facial recognition module 50 may include a computer application for identifying a characteristic of the first individual 52 via facial recognition. In one embodiment, the computer application may utilize one or more captured images of the individual to identify facial characteristics.
Then, operation 1620 depicts providing a display for the individual, the display having content based at least partially upon the at least one detected characteristic of the individual. For example, as shown in Figures 1 through 15, the first display module 54 may be utilized to provide a first display 56 for the first individual 52, where the first display 56 has content based at least partially upon one or more detected characteristics of the first individual 52.
Then, operation 1630 depicts detecting a discernible line of sight between the display and the individual. For example, as shown in Figures 1 through 15, the first display module 54 may utilize various techniques to detect a discernible line of sight of the first individual 52.
Figure 17 illustrates alternative embodiments of the example operational flow 1600 of Figure 16. Figure 17 illustrates example embodiments where operation 1610 may include at least one additional operation. Additional operations may include operation 1702, operation 1704, and/or operation 1706.
Operation 1702 illustrates identifying the individual based at least partially upon the at least one detected characteristic of the individual. For example, as shown in Figures 1 through 15, the facial recognition module 50 may be utilized to automatically and remotely detect one or more characteristics of the first individual 52. In one embodiment, the facial recognition module 50 may include a computer application for automatically identifying a person utilizing a digital still frame, a video frame, or another captured image. For instance, the facial recognition module 50 may identify one or more distinguishing landmarks on the face of a person captured in an image, and may utilize the landmarks to compile one or more detected characteristics of the individual (e.g., the distance between the person's eyes or the width of the person's nose). The facial recognition module 50 may compare the one or more detected characteristics with characteristics of multiple individuals in a database that includes facial features of many different individuals. By utilizing the database and the one or more detected characteristics, the facial recognition module 50 may identify a particular individual. The identity of the particular individual may then be associated with the first individual 52. Further, operation 1704 illustrates identifying the individual utilizing a database including the at least one detected characteristic of the individual. For example, as shown in Figures 1 through 15, the facial recognition module 50 may include a memory 122, and the memory may include a database 108. The database 108 may include a plurality of identifiable characteristics of many different individuals. For instance, an identifiable characteristic may include the height of an individual. Further, operation 1706 illustrates identifying the individual utilizing a database including at least one facial characteristic of the individual. For example, as shown in Figures 1 through 15, the memory 122 of the facial recognition module 50 may include a plurality of identifiable facial characteristics of many different individuals.
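The database comparison described for module 50 could take many forms; a minimal sketch, assuming stored characteristic profiles and a nearest-match rule, is shown below. The profiles, identities, and threshold are invented for illustration.

```python
# Hypothetical sketch of the database comparison: match detected
# characteristics against stored profiles by nearest Euclidean distance.
# Profile data and the acceptance threshold are invented examples.
from math import sqrt

DATABASE = {
    "individual_a": {"eye_distance": 60.0, "nose_width": 20.0},
    "individual_b": {"eye_distance": 55.0, "nose_width": 24.0},
}

def identify(detected, database=DATABASE, threshold=5.0):
    """Return the best-matching identity, or None if no profile is close."""
    best_id, best_score = None, float("inf")
    for identity, profile in database.items():
        score = sqrt(sum((detected[k] - profile[k]) ** 2 for k in profile))
        if score < best_score:
            best_id, best_score = identity, score
    return best_id if best_score <= threshold else None

match = identify({"eye_distance": 59.0, "nose_width": 21.0})
```

A production system would use higher-dimensional face embeddings rather than two raw distances, but the structure — compare against a database, accept only sufficiently close matches — is the same.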
Figure 18 illustrates alternative embodiments of the operational flow 1600 of Figure 16. Figure 18 illustrates example embodiments where operation 1610 may include at least one additional operation. Additional operations may include operation 1802 and/or operation 1804. Operation 1802 illustrates identifying the individual utilizing at least one facial characteristic of the individual provided via a data transfer. For example, as shown in Figures 1 through 15, data (e.g., facial characteristic data) may be provided to the facial recognition module 50 via a data transfer module 138.
Operation 1804 illustrates identifying the individual based at least partially upon an orientation of the individual's face with respect to the display. For example, as shown in Figures 1 through 15, the facial recognition module 50 may utilize one or more facial recognition algorithms to determine the orientation of the face of the first individual 52 with respect to the first display 56, and may then utilize the orientation of the face of the first individual to identify the first individual 52. For instance, the orientation of the face of the first individual may be utilized to adjust a measured distance between two or more facial landmarks (e.g., to account for the measured distance being different from the distance that would be measured if the individual were directly facing the image capture device).
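The orientation adjustment in operation 1804 can be approximated with simple geometry: a face turned away by a yaw angle foreshortens horizontal landmark distances by roughly the cosine of that angle. This small-angle planar model is an assumption for illustration, not the specification's method.

```python
# Assumed geometric sketch of the adjustment in operation 1804: divide a
# foreshortened measurement by cos(yaw) to recover the frontal-view
# distance between two facial landmarks.
from math import cos, radians

def corrected_distance(measured, yaw_degrees):
    """Estimate the frontal-view landmark distance from a measurement
    taken while the face is turned by yaw_degrees."""
    return measured / cos(radians(yaw_degrees))

# A 60-unit eye distance observed head-on, and the same true distance
# observed as 30 units with the face turned 60 degrees.
frontal = corrected_distance(60.0, 0.0)
turned = corrected_distance(30.0, 60.0)
```

Real head-pose estimation would recover yaw, pitch, and roll from the image itself before applying such a correction.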
Figure 19 illustrates alternative embodiments of the example operational flow 1600 of Figure 16. Figure 19 illustrates example embodiments where operation 1610 may include at least one additional operation. Additional operations may include operation 1902.
Operation 1902 illustrates identifying the individual based at least partially upon an orientation of the individual's eyes with respect to the display. For example, as shown in Figures 1 through 15, the facial recognition module 50 may utilize one or more facial recognition algorithms to determine the orientation of the eyes of the first individual 52 with respect to the first display 56 in order to identify the first individual 52. For instance, the orientation of the eyes of the first individual may be utilized to adjust a measured distance between the eyes and another facial landmark of the first individual 52. Alternatively, the orientation of the eyes of the first individual may be utilized to adjust a measured distance between two other facial landmarks.
Figure 20 illustrates alternative embodiments of the example operational flow 1600 of Figure 16. Figure 20 illustrates example embodiments where operation 1620 may include at least one additional operation. Additional operations may include operation 2002 and/or operation 2004.
Operation 2002 illustrates providing the display for the individual based upon detecting at least one visibility characteristic of the display for the individual. For example, as shown in Figures 1 through 15, the first display module 54 may be utilized to provide the first display 56 for the first individual 52 based at least partially upon one or more detected visibility characteristics of the first display 56 for the first individual 52. Further, operation 2004 illustrates providing the display for the individual based upon at least one of a viewing angle, a range, an angular size, or a perceived resolution of the display. For example, as shown in Figures 1 through 15, a visibility characteristic of the first display 56 for the first individual 52 may include a viewing angle 42, a range 44, an angular size 46, or a perceived resolution 48 of the display.
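The visibility characteristics named in operation 2004 are related: the angular size a display subtends shrinks with range, which in turn bounds the resolution the viewer can perceive. A hedged sketch of one such check, with an invented minimum-angle threshold, follows.

```python
# Illustrative sketch of a visibility test: treat the display as
# perceivable only when its angular size exceeds a minimum. The 2-degree
# threshold and dimensions are invented examples.
from math import atan, degrees

def angular_size_deg(display_width_m, range_m):
    """Angle subtended by the display at the viewer's position."""
    return degrees(2 * atan(display_width_m / (2 * range_m)))

def is_visible(display_width_m, range_m, min_angle_deg=2.0):
    """Decide whether to provide the display, per operation 2004."""
    return angular_size_deg(display_width_m, range_m) >= min_angle_deg

near = is_visible(1.0, 5.0)   # 1 m wide display viewed from 5 m
far = is_visible(1.0, 50.0)   # same display viewed from 50 m
```

An analogous threshold could be applied to viewing angle or perceived resolution; the decision structure is the same.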
Figure 21 illustrates alternative embodiments of the example operational flow 1600 of Figure 16, where operation 1620 may include at least one additional operation. Additional operations may include operation 2102, operation 2104, and/or operation 2106.
Operation 2102 illustrates providing the display for the individual based upon at least one of a presence or an absence of a second individual proximate to the first individual. For example, as shown in Figures 1 through 15, the first display module may provide the first display 56, or the content of the first display 56, to the first individual 52 based upon a change in the status of the first individual 52. A change in the status of an individual may include a change in the presence or absence of one or more of a second individual 80 or a third individual 86 proximate to the first individual 52. Further, operation 2104 illustrates providing the display for the individual based upon at least one of a presence or an absence of a third individual proximate to the first individual. For example, as shown in Figures 1 through 15, the first display module may provide the first display 56, or the content of the first display 56, to the first individual 52 based upon the presence or absence of a third individual 86 proximate to the first individual 52.
Operation 2106 illustrates providing the display for the individual based upon a position of the second individual. For example, as shown in Figures 1 through 15, the first display module may provide the first display 56, or the content of the first display 56, to the first individual 52 based upon a position of the second individual.
Figure 22 illustrates alternative embodiments of the example operational flow 1600 of Figure 16, where the flow may include at least one additional operation. Additional operations may include operation 2202 and/or operation 2204.
Operation 2202 illustrates recording a duration of the provision of the display perceivable by the individual. For example, as shown in Figures 1 through 15, the first display module 54 may record the duration of the first display 56 provided to the first individual 52. Further, operation 2204 illustrates assigning a monetary value to the provision of the display perceivable by the individual based upon the recorded duration of the provision of the display. For example, as shown in Figures 1 through 15, the first display module 54 may assign a monetary value to the first display based upon the duration of the first display 56 provided to the first individual 52.
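Operations 2202 and 2204 amount to metering: accumulate the intervals during which the display was perceivable, then price the total. The per-second rate and interval values below are invented examples.

```python
# Illustrative sketch of operations 2202/2204: record perceivable-display
# durations and assign a monetary value from the recorded total. The
# rate is an invented example, not from the specification.

class DisplayMeter:
    def __init__(self, rate_per_second=0.01):
        self.rate = rate_per_second
        self.durations = []

    def record(self, start_s, stop_s):
        """Record one interval during which the display was perceivable."""
        self.durations.append(stop_s - start_s)

    def monetary_value(self):
        """Value assigned in proportion to total recorded duration."""
        return sum(self.durations) * self.rate

meter = DisplayMeter()
meter.record(0.0, 12.0)   # 12 s of perceivable display
meter.record(30.0, 48.0)  # a further 18 s
value = meter.monetary_value()
```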
Figure 23 illustrates alternative embodiments of the example operational flow 1600 of Figure 16. Figure 23 illustrates example embodiments where operation 1630 may include at least one additional operation. Additional operations may include operation 2302, operation 2304, and/or operation 2306.
Operation 2302 illustrates detecting at least one characteristic of the individual via facial recognition from a position proximate to the display. For example, as shown in Figures 1 through 15, the facial recognition module 50 may detect one or more characteristics of the first individual 52 from a position proximate to the first display 56.
Operation 2304 illustrates directing a light source at the individual and detecting a reflection of light from the light source from a position proximate to the display. For example, as shown in Figures 1 through 15, a light source 26 may be directed at the first individual 52, and a reflection of light from the light source 26 may be detected from a position proximate to the first display 56.
Operation 2306 illustrates predicting at least one line-of-sight characteristic based upon a position of at least one of the display, the individual, a proximate second individual, or a proximate object. For example, as shown in Figures 1 through 15, a position of one or more of the first display 56, the first individual 52, a proximate second individual 80, or a proximate object 26 may be utilized to predict one or more line-of-sight characteristics.
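One way to realize the prediction in operation 2306 is geometric: a discernible line of sight is plausible when no proximate object or individual lies too close to the segment between the individual and the display. The 2-D positions and clearance radius below are assumptions for illustration.

```python
# Assumed geometric sketch of operation 2306: predict a discernible line
# of sight when every proximate object keeps a minimum clearance from the
# individual-to-display sightline. Positions are 2-D for simplicity.

def _point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp to the segment
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def line_of_sight(individual, display, objects, clearance=0.5):
    """Predict a clear line of sight if no object intrudes on the path."""
    return all(_point_segment_dist(o, individual, display) >= clearance
               for o in objects)

clear = line_of_sight((0, 0), (10, 0), [(5, 3)])      # object well aside
blocked = line_of_sight((0, 0), (10, 0), [(5, 0.2)])  # object in the path
```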
Figure 24 illustrates an operational flow 2400 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and ceasing to provide the display for the individual. Figure 24 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 2410.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 2400 moves to operation 2410. Operation 2410 illustrates ceasing to provide the display for the individual based upon detecting an absence of a discernible line of sight between the display and the individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56, or the content of the first display 56, to the first individual 52 based upon detecting an absence of a discernible line of sight between the first display 56 and the first individual 52.
Figure 25 illustrates an operational flow 2500 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and ceasing to provide the display for the individual. Figure 25 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 2510, operation 2512, and/or operation 2514.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 2500 moves to operation 2510. Operation 2510 illustrates ceasing to provide the display for the individual based upon a change in at least one of the environment of the individual or the status of the individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56, or the content of the first display 56, to the first individual 52 based upon one or more of a change in the environment of the first individual 52 or a change in the status of the first individual 52.
Operation 2512 illustrates ceasing to provide the display for the first individual based upon automatically and remotely detecting at least one characteristic of a second individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56 to the first individual 52 based upon automatically and remotely detecting one or more characteristics of the second individual 80.
Operation 2514 illustrates ceasing to provide the display for the first individual based upon automatically and remotely detecting a second, higher-priority individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56 to the first individual 52 based upon automatically and remotely detecting the second individual 80, where the second individual 80 is an individual having a higher priority than the first individual 52 (according to any user-specified criteria).
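The preemption rule in operation 2514 reduces to a priority comparison under whatever criterion the user specifies. The priority table below is an invented example of one such criterion.

```python
# Hedged sketch of operation 2514: cease the current display when a newly
# detected individual outranks the current viewer. The priority values
# are invented; the specification leaves the criterion to the user.

PRIORITY = {"first_individual": 1, "second_individual": 5}

def should_cease(current_viewer, detected_viewer, priority=PRIORITY):
    """Return True when the detected individual has higher priority than
    the individual currently being provided the display."""
    return priority[detected_viewer] > priority[current_viewer]

cease = should_cease("first_individual", "second_individual")
```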
Figure 26 illustrates an operational flow 2600 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and ceasing to provide the display for the individual. Figure 26 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 2610 and/or operation 2612.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 2600 moves to operation 2610. Operation 2610 illustrates ceasing to provide the display for the individual based upon detecting at least one visibility characteristic of the display for the individual. For example, as shown in Figures 1 through 15, the first display module 54 may be utilized to cease providing the first display 56 for the first individual 52 based at least partially upon one or more detected visibility characteristics 40 of the first display 56 for the first individual 52.
Operation 2612 illustrates ceasing to provide the display for the individual based upon at least one of a viewing angle, a range, an angular size, or a perceived resolution of the display. For example, as shown in Figures 1 through 15, a visibility characteristic 40 of the first display 56 for the first individual 52 may include a viewing angle 42, a range 44, an angular size 46, or a perceived resolution 48 of the display.
Figure 27 illustrates an operational flow 2700 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and ceasing to provide the display for the individual. Figure 27 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 2710 and/or operation 2712.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 2700 moves to operation 2710. Operation 2710 illustrates ceasing to provide the display for the first individual based upon at least one of a presence or an absence of a second individual proximate to the first individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56, or the content of the first display 56, to the first individual 52 based upon the presence or absence of the second individual 80.
Operation 2712 illustrates ceasing to provide the display for the first individual based upon at least one of a presence or an absence of a third individual proximate to the first individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56, or the content of the first display 56, to the first individual 52 based upon the presence or absence of a third individual 86 proximate to the first individual 52.
Figure 28 illustrates an operational flow 2800 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and ceasing to provide the display for the individual. Figure 28 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 2810.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 2800 moves to operation 2810. Operation 2810 illustrates ceasing to provide the display for the first individual based upon a position of the second individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56, or the content of the first display 56, to the first individual 52 based upon a position of the second individual 80.
Figure 29 illustrates an operational flow 2900 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and ceasing to provide the display for the individual. Figure 29 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 2910.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 2900 moves to operation 2910. Operation 2910 illustrates recording a cessation of the provision of the display for the individual. For example, as shown in Figures 1 through 15, a cessation of the provision of the first display 56 for the first individual 52 may be recorded.
Figure 30 illustrates an operational flow 3000 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and selecting the content for the display. Figure 30 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 3010.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 3000 moves to operation 3010. Operation 3010 illustrates selecting content for the individual based at least partially upon detecting an object associated with a gaze orientation of the individual. For example, as shown in Figures 1 through 15, content may be selected for the first individual 52 based upon an action of the individual 62. An action of the individual 62 may include a gaze orientation 64. For instance, the gaze orientation 64 may include looking at an article without moving toward that article.
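Operation 3010 could be realized by associating repeated gaze samples with catalog articles and selecting content for the article the gaze dwells on. The catalog entries, sample stream, and dwell threshold below are assumptions for illustration.

```python
# Illustrative sketch of operation 3010: choose content associated with
# the article the gaze orientation dwells on. Articles, samples, and the
# minimum-dwell rule are invented examples.

def select_content(gaze_samples, catalog, min_dwell=3):
    """Pick content for the most-gazed-at article, provided the dwell
    count reaches a minimum; otherwise fall back to a default."""
    counts = {}
    for article in gaze_samples:
        counts[article] = counts.get(article, 0) + 1
    article, dwell = max(counts.items(), key=lambda kv: kv[1])
    return catalog[article] if dwell >= min_dwell else catalog["default"]

catalog = {"shoes": "shoe advertisement", "default": "generic advertisement"}
samples = ["shoes", "shoes", "shoes", "shoes"]  # looking at, not walking to
content = select_content(samples, catalog)
```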
Figure 31 illustrates an operational flow 3100 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, detecting a discernible line of sight between the display and the individual, and selecting the content for the display. Figure 31 illustrates an example embodiment where the example operational flow 1600 of Figure 16 may include at least one additional operation. Additional operations may include operation 3110, operation 3112, and/or operation 3114.
After the start operation, operation 1610, operation 1620, and operation 1630, the operational flow 3100 moves to operation 3110. Operation 3110 illustrates selecting content for the first individual based at least partially upon at least one characteristic of a second individual at least one of occupying a general area with the first individual or traveling with the first individual. For example, as shown in Figures 1 through 15, the content of the first display may be directed to the first individual 52 based upon characteristics of one or more other individuals who share some type of relationship (e.g., a spatial relationship) or connection (e.g., a social connection) with the first individual 52. The content of the first display 56 for the first individual 52 may be selected based at least partially upon a characteristic (e.g., a facial characteristic, a vocal characteristic, or an identity) of the second individual 80. In various embodiments, the second individual 80 may occupy a general area proximate to the first individual 52. Additionally, the second individual 80 may be traveling with the first individual 52.
Operation 3112 illustrates selecting content for the first individual based at least partially upon a vocal characteristic of the second individual. For example, as shown in Figures 1 through 15, the content of the first display 56 may be selected for the first individual 52 based at least partially upon a vocal characteristic of the second individual 80.
Operation 3114 illustrates selecting content for the first individual based at least partially upon a facial characteristic of the second individual. For example, as shown in Figures 1 through 15, the content of the first display 56 may be selected for the first individual 52 based at least partially upon a facial characteristic of the second individual 80.
Figure 32 illustrates alternative embodiments of the example operational flow 3100 of Figure 31. Figure 32 illustrates example embodiments where operation 3110 may include at least one additional operation. Additional operations may include operation 3202.
Operation 3202 illustrates selecting content for the first individual based at least partially upon an identity of the second individual. For example, as shown in Figures 1 through 15, the content of the first display 56 may be selected for the first individual 52 based at least partially upon the identity of the second individual 80. For instance, the facial recognition module 50 may be utilized to identify the second individual 80.
Figure 33 illustrates an operational flow 3300 representing example operations related to utilizing facial recognition to automatically and remotely detect one or more characteristics of an individual, providing the individual with a display having content based at least partially upon the one or more characteristics of the individual, and ceasing to provide at least one of the display or the content for the individual. In Figure 33, and in the following figures that include various examples of operational flows, discussion and explanation may be provided with respect to the above-described examples of Figures 1 through 15, and/or with respect to other examples and contexts. However, it should be understood that the operational flows may be executed in a number of other environments and contexts, and/or in modified versions of Figures 1 through 15. Also, although the various operational flows are presented in the sequence(s) illustrated, it should be understood that the various operations may be performed in orders other than those illustrated, or may be performed concurrently.
After a start operation, the operational flow 3300 moves to operation 1610. Operation 1610 depicts automatically and remotely detecting at least one characteristic of an individual via facial recognition.
Then, operation 1620 depicts providing a display for the individual, the display having content based at least partially upon the at least one detected characteristic of the individual.
Then, operation 3330 depicts ceasing to provide at least one of the display or the content for the individual based upon a change in at least one of the environment of the individual or the status of the individual. For example, as shown in Figures 1 through 15, the first display module 54 may cease providing the first display 56, or the content of the first display 56, to the first individual 52 based upon one or more of a change in the environment of the first individual 52 or a change in the status of the first individual 52 (e.g., when the first individual 52 moves from a first zone 58, from which the first individual 52 can see the first display 56, to a second zone 60, from which the first individual 52 cannot see the first display 56).
Figure 34 illustrates alternative embodiments of the example operational flow 3300 of Figure 33. Figure 34 illustrates example embodiments where operation 1610 may include at least one additional operation. Additional operations may include operation 1702, operation 1704, and/or operation 1706.
Operation 1702 illustrates identifying the individual based at least partially upon the at least one detected characteristic of the individual. Further, operation 1704 illustrates identifying the individual utilizing a database including the at least one detected characteristic of the individual. Further, operation 1706 illustrates identifying the individual utilizing a database including at least one facial characteristic of the individual.
Figure 35 illustrates alternative embodiments of the example operational flow 3300 of Figure 33. Figure 35 illustrates example embodiments where operation 1610 may include at least one additional operation. Additional operations may include operation 1802 and/or operation 1804. Operation 1802 illustrates identifying the individual utilizing at least one facial characteristic of the individual provided via a data transfer.
Operation 1804 illustrates identifying the individual based at least partially upon an orientation of the individual's face with respect to the display.
Figure 36 illustrates alternative embodiments of the example operational flow 3300 of Figure 33. Figure 36 illustrates example embodiments where operation 1610 may include at least one additional operation. Additional operations may include operation 1902.
Operation 1902 illustrates identifying the individual based at least partially upon an orientation of the individual's eyes with respect to the display.
Figure 37 illustrates alternative embodiments of the example operational flow 3300 of Figure 33. Figure 37 illustrates example embodiments where operation 1620 may include at least one additional operation. Additional operations may include operation 2002 and/or operation 2004.
Operation 2002 illustrates providing the display for the individual based upon detecting at least one visibility characteristic of the display for the individual. Further, operation 2004 illustrates providing the display for the individual based upon at least one of a viewing angle, a range, an angular size, or a perceived resolution of the display.
Figure 38 illustrates alternative embodiments of the example operational flow 3300 of Figure 33, where operation 1620 may include at least one additional operation. Additional operations may include operation 2102, operation 2104, and/or operation 2106.
Operation 2102 illustrates providing the display for the individual based upon at least one of a presence or an absence of a second individual proximate to the first individual. Further, operation 2104 illustrates providing the display for the individual based upon at least one of a presence or an absence of a third individual proximate to the first individual.
Operation 2106 illustrates providing the display for the individual based upon a position of the second individual.
FIG. 39 illustrates alternative embodiments of the example operational flow 3300 of FIG. 33. FIG. 39 illustrates example embodiments where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 2202 and/or an operation 2204.
The operation 2202 illustrates recording a duration of the providing of the display perceivable by the individual. Further, the operation 2204 illustrates assigning a monetary value to the providing of the display perceivable by the individual based upon the recorded duration of the providing of the display.
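Operations 2202 and 2204 amount to metering and then pricing perceivable viewing time. A minimal sketch follows; the flat per-second rate and the rounding are illustrative assumptions, not part of the disclosure.

```python
class ImpressionMeter:
    """Accumulates the time a display was perceivable by an individual
    and assigns a monetary value proportional to that duration."""

    def __init__(self, rate_per_second=0.01):
        self.rate = rate_per_second
        self.total_seconds = 0.0

    def record(self, start_ts, end_ts):
        # Record one interval during which the display was perceivable;
        # ignore malformed intervals that end before they start.
        if end_ts > start_ts:
            self.total_seconds += end_ts - start_ts

    def monetary_value(self):
        # Value assigned based on the recorded duration (operation 2204).
        return round(self.total_seconds * self.rate, 2)
```

For example, two perceivable intervals of thirty seconds each would be valued at sixty seconds times the assumed rate.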
FIG. 40 illustrates an operational flow 4000 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, terminating providing at least one of the display or the content for the individual, and determining a clear line of sight between the display and the individual. FIG. 40 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 4010, an operation 2302, an operation 2304, and/or an operation 2306.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4000 moves to an operation 4010. The operation 4010 illustrates determining a clear line of sight between the display and the individual. For example, as shown in FIGS. 1 through 15, the first display unit 54 may determine a clear line of sight to the first individual 52 utilizing various techniques.
The operation 2302 illustrates identifying at least one characteristic of the individual via facial recognition from a position proximate to the display.
The operation 2304 illustrates directing a light source at the individual and detecting a reflection of light from the light source from a position proximate to the display.
The operation 2306 illustrates predicting at least one line-of-sight characteristic based upon a position of at least one of the display, the individual, a proximate second individual, or a proximate object.
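The position-based prediction of operation 2306 can be sketched geometrically. The fragment below is a simplified illustration, not the patented method: it models each proximate individual or object as a circle in the plane and tests whether the straight sight-line segment between the display and the individual clears all of them. The 0.5-meter default radius is an assumption.

```python
def clear_line_of_sight(display_pos, individual_pos, obstructions, radius=0.5):
    """Predict whether the straight sight line from the individual to the
    display is free of obstructions, each modeled as a circle."""
    (x1, y1), (x2, y2) = display_pos, individual_pos
    dx, dy = x2 - x1, y2 - y1
    seg_len_sq = dx * dx + dy * dy
    for (ox, oy) in obstructions:
        if seg_len_sq == 0:
            t = 0.0
        else:
            # Parameter of the closest point on the segment to the obstruction.
            t = max(0.0, min(1.0, ((ox - x1) * dx + (oy - y1) * dy) / seg_len_sq))
        cx, cy = x1 + t * dx, y1 + t * dy
        if (ox - cx) ** 2 + (oy - cy) ** 2 <= radius ** 2:
            return False  # obstruction intersects the sight line
    return True
```

An obstruction 0.2 meters off the segment blocks the line under these assumptions, while one three meters away does not.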
FIG. 41 illustrates an operational flow 4100 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, terminating providing at least one of the display or the content for the individual, and determining a clear line of sight between the display and the individual. FIG. 41 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 2410.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4100 moves to an operation 2410. The operation 2410 illustrates terminating providing the display for the individual based upon determining an absence of a clear line of sight between the display and the individual.
FIG. 42 illustrates alternative embodiments of the example operational flow 3300 of FIG. 33. FIG. 42 illustrates example embodiments where the operation 3330 may include at least one additional operation. Additional operations may include an operation 2512 and/or an operation 2514.
The operation 2512 illustrates terminating providing the display for the first individual based upon automatically and remotely identifying at least one characteristic of a second individual.
The operation 2514 illustrates terminating providing the display for the first individual based upon automatically and remotely identifying a second, higher-priority individual.
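The preemption of operation 2514 reduces to choosing the highest-priority identified individual as the display target. A minimal sketch; the (name, priority) pair representation is an assumption made for this example, not part of the disclosure.

```python
def select_display_target(individuals):
    """Given (name, priority) pairs for identified individuals, return the
    name with the highest priority; providing the display for any
    lower-priority individual would be terminated in its favor."""
    if not individuals:
        return None
    return max(individuals, key=lambda pair: pair[1])[0]
```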
FIG. 43 illustrates an operational flow 4300 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, and terminating providing at least one of the display or the content for the individual. FIG. 43 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 2610 and/or an operation 2612.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4300 moves to an operation 2610. The operation 2610 illustrates terminating providing the display for the individual based upon recognizing at least one visibility characteristic of the display with respect to the individual.
The operation 2612 illustrates terminating providing the display for the individual based upon at least one of a viewing angle, a range, an angular size, or a perceived resolution of the display.
FIG. 44 illustrates an operational flow 4400 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, and terminating providing at least one of the display or the content for the individual. FIG. 44 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 2710 and/or an operation 2712.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4400 moves to an operation 2710. The operation 2710 illustrates terminating providing the display for the first individual based upon at least one of a presence or an absence of a second individual proximate to the first individual.
The operation 2712 illustrates terminating providing the display for the first individual based upon at least one of a presence or an absence of a third individual proximate to the first individual.
FIG. 45 illustrates an operational flow 4500 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, and terminating providing at least one of the display or the content for the individual. FIG. 45 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 2810.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4500 moves to an operation 2810. The operation 2810 illustrates terminating providing the display for the first individual based upon a position of the second individual.
FIG. 46 illustrates an operational flow 4600 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, and terminating providing at least one of the display or the content for the individual. FIG. 46 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 2910.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4600 moves to an operation 2910. The operation 2910 illustrates recording the providing of the display for the individual.
FIG. 47 illustrates an operational flow 4700 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, and terminating providing at least one of the display or the content for the individual. FIG. 47 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 3010.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4700 moves to an operation 3010. The operation 3010 illustrates selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual.
FIG. 48 illustrates an operational flow 4800 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, and terminating providing at least one of the display or the content for the individual. FIG. 48 illustrates an example embodiment where the example operational flow 3300 of FIG. 33 may include at least one additional operation. Additional operations may include an operation 3110, an operation 3112, and/or an operation 3114.
After a start operation, an operation 1610, an operation 1620, and an operation 3330, the operational flow 4800 moves to an operation 3110. The operation 3110 illustrates selecting the content for the first individual based at least in part upon at least one characteristic of a second individual who at least one of occupies a general area with the first individual or walks with the first individual.
The operation 3112 illustrates selecting the content for the first individual based at least in part upon an acoustic characteristic of the second individual.
The operation 3114 illustrates selecting the content for the first individual based at least in part upon a facial characteristic of the second individual.
FIG. 49 illustrates alternative embodiments of the example operational flow 4800 of FIG. 48. FIG. 49 illustrates example embodiments where the operation 3110 may include at least one additional operation. Additional operations may include an operation 3202.
The operation 3202 illustrates selecting the content for the first individual based at least in part upon an identity of the second individual.
FIG. 50 illustrates an operational flow 5000 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, and selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual. In FIG. 50 and in the following figures that include various examples of operational flows, discussion and explanation may be provided with respect to the above-described examples of FIGS. 1 through 15, and/or with respect to other examples and contexts. However, it should be understood that the operational flows may be executed in a number of other environments and contexts, and/or in modified versions of FIGS. 1 through 15. Also, although the various operational flows are presented in the sequence(s) illustrated, it should be understood that the various operations may be performed in orders other than those which are illustrated, or may be performed concurrently.
After a start operation, the operational flow 5000 moves to an operation 1610. The operation 1610 depicts identifying at least one characteristic of an individual automatically and remotely via facial recognition.
Then, the operation 1620 depicts providing a display for the individual, the display having content based at least in part upon the identified at least one characteristic of the individual.
Then, the operation 5030 illustrates selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual. For example, as shown in FIGS. 1 through 15, the content selected for the first individual 52 may be selected based upon an action of an individual 62. The action of the individual 62 may include one or more of a gaze orientation 64, a gesture 66, an audible sound 68, a spoken sound 70, a motion 72 of at least a portion of the body, or an orientation 74 of at least a portion of the body. In one embodiment, the gaze orientation 64 may include, for example, looking at an article but not moving toward the article.
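The gaze-to-object association of operation 5030 can be sketched as a bearing comparison. The Python fragment below is an illustrative simplification, not the claimed method; the two-dimensional geometry and the ten-degree tolerance are assumptions made for this example.

```python
import math

def object_in_gaze(eye_pos, gaze_angle_deg, objects, tolerance_deg=10.0):
    """Return the named object, if any, lying closest to the individual's
    gaze orientation within an angular tolerance; content associated with
    that object could then be selected for the individual."""
    best_name, best_err = None, tolerance_deg
    for name, (ox, oy) in objects.items():
        bearing = math.degrees(math.atan2(oy - eye_pos[1], ox - eye_pos[0]))
        # Smallest signed angular difference, wrapped to [-180, 180).
        err = abs((bearing - gaze_angle_deg + 180) % 360 - 180)
        if err <= best_err:
            best_name, best_err = name, err
    return best_name
```

With two hypothetical articles placed east and north of the viewer, a gaze due east selects the first, a gaze due north the second, and a gaze between them selects neither.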
FIG. 51 illustrates alternative embodiments of the example operational flow 5000 of FIG. 50. FIG. 51 illustrates example embodiments where the operation 1610 may include at least one additional operation. Additional operations may include an operation 1702, an operation 1704, and/or an operation 1706.
The operation 1702 illustrates identifying the individual based at least in part upon the identified at least one characteristic of the individual. Further, the operation 1704 illustrates utilizing a database including the identified at least one characteristic of the individual to identify the individual. Further, the operation 1706 illustrates utilizing a database including at least one facial characteristic of the individual to identify the individual.
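The database lookups of operations 1704 and 1706 can be illustrated as a nearest-neighbor match over facial-feature vectors. This is a sketch under stated assumptions, not the disclosed implementation: the Euclidean metric, the 0.6 acceptance threshold, and the three-component feature vectors are all hypothetical.

```python
import math

def identify_individual(features, database, threshold=0.6):
    """Match an observed facial-feature vector against a database of
    enrolled vectors; return the best-matching identity, or None when no
    enrolled vector is within the acceptance threshold."""
    best_id, best_dist = None, threshold
    for identity, enrolled in database.items():
        dist = math.dist(features, enrolled)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id
```

An observation close to an enrolled vector resolves to that identity; an observation far from every enrolled vector resolves to no identity, in which case the individual would remain unidentified.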
FIG. 52 illustrates alternative embodiments of the example operational flow 5000 of FIG. 50. FIG. 52 illustrates example embodiments where the operation 1610 may include at least one additional operation. Additional operations may include an operation 1802 and/or an operation 1804. The operation 1802 illustrates utilizing at least one facial characteristic of the individual, provided via a data transmission, to identify the individual.
The operation 1804 illustrates identifying the individual based at least in part upon an orientation of the individual's face with respect to the display.
FIG. 53 illustrates alternative embodiments of the example operational flow 5000 of FIG. 50. FIG. 53 illustrates example embodiments where the operation 1610 may include at least one additional operation. Additional operations may include an operation 1902.
The operation 1902 illustrates identifying the individual based at least in part upon an orientation of the individual's eyes with respect to the display.
FIG. 54 illustrates alternative embodiments of the example operational flow 5000 of FIG. 50. FIG. 54 illustrates example embodiments where the operation 1620 may include at least one additional operation. Additional operations may include an operation 2002 and/or an operation 2004.
The operation 2002 illustrates providing the display for the individual based upon recognizing at least one visibility characteristic of the display with respect to the individual. Further, the operation 2004 illustrates providing the display for the individual based upon at least one of a viewing angle, a range, an angular size, or a perceived resolution of the display.
FIG. 55 illustrates alternative embodiments of the example operational flow 5000 of FIG. 50. FIG. 55 illustrates example embodiments where the operation 1620 may include at least one additional operation. Additional operations may include an operation 2102, an operation 2104, and/or an operation 2106.
The operation 2102 illustrates providing the display for the first individual based upon at least one of a presence or an absence of a second individual proximate to the first individual. Further, the operation 2104 illustrates providing the display for the first individual based upon at least one of a presence or an absence of a third individual proximate to the first individual.
The operation 2106 illustrates providing the display for the individual based upon a position of the second individual.
FIG. 56 illustrates alternative embodiments of the example operational flow 5000 of FIG. 50. FIG. 56 illustrates example embodiments where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 2202 and/or an operation 2204.
The operation 2202 illustrates recording a duration of the providing of the display perceivable by the individual. Further, the operation 2204 illustrates assigning a monetary value to the providing of the display perceivable by the individual based upon the recorded duration of the providing of the display.
FIG. 57 illustrates an operational flow 5700 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and determining a clear line of sight between the display and the individual. FIG. 57 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 1630, an operation 2302, an operation 2304, and/or an operation 2306.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 5700 moves to an operation 1630. The operation 1630 illustrates determining a clear line of sight between the display and the individual.
The operation 2302 illustrates identifying at least one characteristic of the individual via facial recognition from a position proximate to the display.
The operation 2304 illustrates directing a light source at the individual and detecting a reflection of light from the light source from a position proximate to the display.
The operation 2306 illustrates predicting at least one line-of-sight characteristic based upon a position of at least one of the display, the individual, a proximate second individual, or a proximate object.
FIG. 58 illustrates an operational flow 5800 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and terminating providing the display for the individual. FIG. 58 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 2410.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 5800 moves to an operation 2410. The operation 2410 illustrates terminating providing the display for the individual based upon determining an absence of a clear line of sight between the display and the individual.
FIG. 59 illustrates an operational flow 5900 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and terminating providing the display for the individual. FIG. 59 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 2510, an operation 2512, and/or an operation 2514.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 5900 moves to an operation 2510. The operation 2510 illustrates terminating providing the display for the individual based upon a change in at least one of an environment of the individual or a state of the individual.
The operation 2512 illustrates terminating providing the display for the first individual based upon automatically and remotely identifying at least one characteristic of a second individual.
The operation 2514 illustrates terminating providing the display for the first individual based upon automatically and remotely identifying a second, higher-priority individual.
FIG. 60 illustrates an operational flow 6000 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and terminating providing the display for the individual. FIG. 60 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 2610 and/or an operation 2612.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 6000 moves to an operation 2610. The operation 2610 illustrates terminating providing the display for the individual based upon recognizing at least one visibility characteristic of the display with respect to the individual.
The operation 2612 illustrates terminating providing the display for the individual based upon at least one of a viewing angle, a range, an angular size, or a perceived resolution of the display.
FIG. 61 illustrates an operational flow 6100 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and terminating providing the display for the individual. FIG. 61 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 2710 and/or an operation 2712.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 6100 moves to an operation 2710. The operation 2710 illustrates terminating providing the display for the first individual based upon at least one of a presence or an absence of a second individual proximate to the first individual.
The operation 2712 illustrates terminating providing the display for the first individual based upon at least one of a presence or an absence of a third individual proximate to the first individual.
FIG. 62 illustrates an operational flow 6200 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and terminating providing the display for the individual. FIG. 62 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 2810.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 6200 moves to an operation 2810. The operation 2810 illustrates terminating providing the display for the first individual based upon a position of the second individual.
FIG. 63 illustrates an operational flow 6300 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and terminating providing the display for the individual. FIG. 63 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 2910.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 6300 moves to an operation 2910. The operation 2910 illustrates recording the providing of the display for the individual.
FIG. 64 illustrates an operational flow 6400 representing example operations related to identifying one or more characteristics of an individual automatically and remotely utilizing facial recognition, providing a display for the individual having content based at least in part upon the one or more characteristics of the individual, selecting the content for the individual based at least in part upon identifying an object associated with a gaze orientation of the individual, and selecting the content for the first individual based at least in part upon at least one characteristic of a second individual. FIG. 64 illustrates an example embodiment where the example operational flow 5000 of FIG. 50 may include at least one additional operation. Additional operations may include an operation 3110, an operation 3112, and/or an operation 3114.
After a start operation, an operation 1610, an operation 1620, and an operation 5030, the operational flow 6400 moves to an operation 3110. The operation 3110 illustrates selecting the content for the first individual based at least in part upon at least one characteristic of a second individual who at least one of occupies a general area with the first individual or walks with the first individual.
The operation 3112 illustrates selecting the content for the first individual based at least in part upon an acoustic characteristic of the second individual.
The operation 3114 illustrates selecting the content for the first individual based at least in part upon a facial characteristic of the second individual.
FIG. 65 illustrates alternative embodiments of the example operational flow 6400 of FIG. 64. FIG. 65 illustrates example embodiments where the operation 3110 may include at least one additional operation. Additional operations may include an operation 3202.
The operation 3202 illustrates selecting the content for the first individual based at least in part upon an identity of the second individual.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein can be equivalently implemented, in whole or in part, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof in integrated circuits, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal-bearing medium used to actually carry out the distribution. Examples of a signal-bearing medium include, but are not limited to, the following: a recordable-type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission-type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).
In general sense, person of skill in the art will appreciate that, described here can by widely hardware, software, firmware and/or its any combination comes individually and/or the various aspects jointly implemented can be regarded as being comprised of various types of " circuit ".Therefore, as used herein, " circuit " includes but not limited to: the circuit with at least one discrete circuit, circuit with at least one integrated circuit, circuit with at least one special IC, the circuit of formation by the general-purpose calculating appts of computer program configuration (for example, the multi-purpose computer of the computer program configuration by at least part of enforcement process described here and/or device, the perhaps microprocessor of the configuration of the computer program by at least part of enforcement process described here and/or device), (for example form storage arrangement, the storer of various ways (for example, random access, quickflashing, read-only etc.)) circuit, and/or circuit (for example, the modulator-demodular unit of formation communicator, communication switchboard, optoelectronic device etc.).Person of skill in the art will appreciate that, subject matter described here can be implemented with simulation or digital form or its a certain combination.
Those skilled in the art will recognize that at least a portion of the devices and/or processes described herein can be integrated into a data processing system. Those skilled in the art will recognize that a data processing system generally includes one or more of: a system unit housing; a video display device; memory, such as volatile or non-volatile memory; a processor, such as a microprocessor or a digital signal processor; computational entities, such as operating systems, drivers, graphical user interfaces, and application programs; one or more interaction devices (e.g., a touch pad, a touch screen, an antenna, etc.); and/or a control system including a feedback loop and a control motor (e.g., feedback for sensing position and/or velocity; a control motor for moving and/or adjusting components and/or quantities). A data processing system may be implemented utilizing suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
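The control system mentioned above — a feedback loop that senses position and/or velocity and a control motor that moves or adjusts a component — can be illustrated with a minimal proportional controller. This sketch is generic and is not drawn from the specification; the function name, gain, and tolerance are purely illustrative.

```python
def feedback_loop(target: float, position: float,
                  gain: float = 0.5, steps: int = 50, tol: float = 1e-3) -> float:
    """Minimal proportional feedback loop: sense the position, command the
    'control motor' with a correction proportional to the error, repeat."""
    for _ in range(steps):
        error = target - position      # feedback: sensed position vs. target
        if abs(error) < tol:
            break                      # close enough; stop adjusting
        position += gain * error       # motor moves/adjusts the component
    return position
```

With a gain of 0.5 the error halves on each iteration, so the sensed position converges geometrically toward the target.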
Those skilled in the art will recognize that the components (e.g., operations), devices, and objects described herein, and the discussion accompanying them, are used as examples for the sake of conceptual clarity, and that various configuration modifications are contemplated. Consequently, as used herein, the specific exemplars set forth and the accompanying discussion are intended to be representative of their more general classes. In general, use of any specific exemplar is intended to be representative of its class, and the non-inclusion of specific components (e.g., operations), devices, and objects should not be taken as limiting.
With respect to the use of substantially any plural and/or singular terms herein, those skilled in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations are not expressly set forth herein for the sake of clarity.
The subject matter described herein sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures may be implemented that achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermediary components. Likewise, any two components so associated can also be viewed as being "operably connected" or "operably coupled" to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable" to each other to achieve the desired functionality. Specific examples of operably couplable include, but are not limited to, physically mateable and/or physically interacting components, and/or wirelessly interactable and/or wirelessly interacting components, and/or logically interacting and/or logically interactable components.
In some instances, one or more components may be referred to herein as "configured to," "configured by," "configurable to," "operable/operative to," "adapted/adaptable," "able to," "conformable/conformed to," etc. Those skilled in the art will recognize that such terms (e.g., "configured to") can generally encompass active-state components, and/or inactive-state components, and/or standby-state components, unless context requires otherwise.
While particular aspects of the subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects, and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims), are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite article "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense in which one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense in which one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms, unless context dictates otherwise. For example, the phrase "A or B" will typically be understood to include the possibilities of "A" or "B" or "A and B."
With respect to the appended claims, those skilled in the art will appreciate that the operations recited therein may generally be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in orders other than those illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the appended claims.

Claims (16)

1. A method, comprising:
automatically remotely identifying at least one characteristic of an individual via facial recognition;
providing a display for the individual, the display having content based at least in part on the at least one identified characteristic of the individual; and
selecting the content for the individual based at least in part on identifying an object associated with a gaze orientation of the individual.
2. the method for claim 1, wherein coming automatically remotely to identify one by one via facial identification, at least one characteristic of body comprises:
Identify this individuality based on this at least one characteristic of identifying of this individuality at least in part.
3. the method for claim 1, wherein for this individuality provides a demonstration, this demonstration has at least in part a kind of content based on this at least one characteristic of identifying of this individuality, and this comprises:
Based near the existence of second individuality of the first individuality or in not existing at least one and provide this demonstration for this individuality.
4. the method for claim 1 further comprises:
Know sight line for one that is identified between this demonstration and this individuality.
5. The method of claim 4, wherein identifying a clear line of sight between the display and the individual comprises:
directing light from a light source at a position proximate to the display toward the individual and detecting a reflection of the light from the light source.
6. the method for claim 1 further comprises:
Based on this individual environment or should the state of individuality at least one change and stop to provide this demonstration for this individuality.
7. the method for claim 1 further comprises:
At least in part based on occupying with a general areas of this first individuality or at least one characteristic of second individuality at least a situation in walking together with this first individuality and be this this content of the first individual choice.
8. A system, comprising:
means for automatically remotely identifying at least one characteristic of an individual via facial recognition;
means for providing a display for the individual, the display having content based at least in part on the at least one identified characteristic of the individual; and
means for selecting the content for the individual based at least in part on identifying an object associated with a gaze orientation of the individual.
9. The system of claim 8, wherein the means for automatically remotely identifying at least one characteristic of an individual via facial recognition comprises:
means for identifying the individual based at least in part on the at least one identified characteristic of the individual.
10. The system of claim 8, wherein the means for automatically remotely identifying at least one characteristic of an individual via facial recognition comprises:
means for identifying the individual based at least in part on an orientation of a face of the individual relative to the display.
11. The system of claim 8, wherein the means for providing a display for the individual, the display having content based at least in part on the at least one identified characteristic of the individual, comprises:
means for providing the display for the individual based on identifying at least one visibility characteristic of the display to the individual.
12. The system of claim 8, wherein the means for providing a display for the individual, the display having content based at least in part on the at least one identified characteristic of the individual, comprises:
means for providing the display for the individual based on at least one of a presence or an absence of a second individual proximate to the first individual.
13. The system of claim 8, wherein the content comprises at least one of advertising, entertainment, or information.
14. The system of claim 8, further comprising:
means for identifying a clear line of sight between the display and the individual.
15. The system of claim 14, wherein the means for identifying a clear line of sight between the display and the individual comprises:
means for directing light from a light source at a position proximate to the display toward the individual and detecting a reflection of the light from the light source.
16. The system of claim 8, further comprising:
means for ceasing to provide the display for the individual based on a change in at least one of an environment of the individual or a state of the individual.
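Independent claims 1 and 8 describe a pipeline: remotely identify a characteristic of an individual via facial recognition, select content (advertisement, entertainment, or information) weighted toward the object of the individual's gaze orientation, and provide or withhold a display accordingly. The Python sketch below is purely illustrative and not part of the claims or the specification: all names (`Individual`, `CATALOG`, `select_content`, `provide_display`) and the scoring policy are hypothetical, and the facial-recognition and gaze-tracking stages are replaced by precomputed inputs.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Individual:
    # Characteristics that a real system would infer remotely via facial
    # recognition (claim 1); here they are supplied directly.
    characteristics: Set[str]
    # Object associated with the individual's gaze orientation (claim 1).
    gaze_target: Optional[str] = None
    # Second individuals occupying the same general area or traveling
    # with the first individual (claim 7).
    companions: List["Individual"] = field(default_factory=list)

# Hypothetical content catalog; claim 13 names advertising,
# entertainment, and information as kinds of content.
CATALOG = [
    {"id": "ad-coffee", "kind": "advertisement", "wants": {"adult"}, "gaze": "coffee-shop"},
    {"id": "info-map", "kind": "information", "wants": set(), "gaze": "kiosk"},
    {"id": "ent-trailer", "kind": "entertainment", "wants": {"teen"}, "gaze": None},
]

def select_content(individual: Individual) -> Optional[dict]:
    """Select content based at least in part on identified characteristics
    (pooling companions', per claim 7) and on the gazed-at object."""
    pool = set(individual.characteristics)
    for companion in individual.companions:
        pool |= companion.characteristics
    best, best_score = None, -1
    for item in CATALOG:
        if item["wants"] and not item["wants"] <= pool:
            continue  # required characteristics not identified
        score = len(item["wants"] & pool)
        if item["gaze"] is not None and item["gaze"] == individual.gaze_target:
            score += 2  # weight the object of the gaze orientation heavily
        if score > best_score:
            best, best_score = item, score
    return best

def provide_display(individual: Individual, second_present: bool = False) -> Optional[dict]:
    """Provide (or withhold) the display; as one possible reading of
    claim 3, the presence of a second nearby individual suppresses it."""
    return None if second_present else select_content(individual)
```

Under this hypothetical scoring, `provide_display(Individual({"adult"}, gaze_target="coffee-shop"))` returns the coffee advertisement, while the same call with `second_present=True` withholds the display.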
CN201280006179.0A 2011-01-25 2012-01-24 Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual Expired - Fee Related CN103329146B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US12/931,157 2011-01-25
US12/931,145 US20110211738A1 (en) 2009-12-23 2011-01-25 Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual
US12/931,156 US20110211739A1 (en) 2009-12-23 2011-01-25 Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual
US12/931,157 US20110206245A1 (en) 2009-12-23 2011-01-25 Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual
US12/931,156 2011-01-25
US12/931,145 2011-01-25
PCT/US2012/000043 WO2012102828A1 (en) 2011-01-25 2012-01-24 Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual

Publications (2)

Publication Number Publication Date
CN103329146A true CN103329146A (en) 2013-09-25
CN103329146B CN103329146B (en) 2018-02-06

Family

ID=46581107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280006179.0A Expired - Fee Related CN103329146B (en) Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual

Country Status (3)

Country Link
EP (1) EP2668616A4 (en)
CN (1) CN103329146B (en)
WO (1) WO2012102828A1 (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050175218A1 (en) * 2003-11-14 2005-08-11 Roel Vertegaal Method and apparatus for calibration-free eye tracking using multiple glints or surface reflections
CN1866270A (en) * 2004-05-17 2006-11-22 香港中文大学 Face recognition method based on video frequency
US7305108B2 (en) * 2001-11-08 2007-12-04 Pelco Security identification system
US20090146779A1 (en) * 2007-12-07 2009-06-11 Cisco Technology, Inc. Home entertainment system providing presence and mobility via remote control authentication
US7634662B2 (en) * 2002-11-21 2009-12-15 Monroe David A Method for incorporating facial recognition technology in a multimedia surveillance system
CN101604382A (en) * 2009-06-26 2009-12-16 华中师范大学 A kind of learning fatigue recognition interference method based on human facial expression recognition
US7636456B2 (en) * 2004-01-23 2009-12-22 Sony United Kingdom Limited Selectively displaying information based on face detection
US7676150B2 (en) * 2005-07-11 2010-03-09 Fujifilm Corporation Image pickup apparatus, image pickup method and image pickup program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2033175A4 (en) * 2006-05-04 2011-07-06 Nat Ict Australia Ltd An electronic media system
US20090019472A1 (en) * 2007-07-09 2009-01-15 Cleland Todd A Systems and methods for pricing advertising


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018112820A1 (en) * 2016-12-22 2018-06-28 Motorola Solutions, Inc. Method and system for tracking an object of interest in a talkgroup
US10580146B2 (en) 2016-12-22 2020-03-03 Motorola Solutions, Inc. Method and system for tracking an object of interest in a talkgroup

Also Published As

Publication number Publication date
CN103329146B (en) 2018-02-06
WO2012102828A1 (en) 2012-08-02
EP2668616A1 (en) 2013-12-04
EP2668616A4 (en) 2017-02-08

Similar Documents

Publication Publication Date Title
Aggarwal et al. Augmented Reality and its effect on our life
US11656677B2 (en) Planar waveguide apparatus with diffraction element(s) and system employing same
CN102591016B (en) Optimized focal area for augmented reality displays
CN102566756B (en) Comprehension and intent-based content for augmented reality displays
CN109564620B (en) Augmented reality identity verification
AU2015274283B2 (en) Methods and systems for creating virtual and augmented reality
US10203762B2 (en) Methods and systems for creating virtual and augmented reality
CN105075246B (en) The method that Tele-immersion formula is experienced is provided using mirror metaphor
US9165381B2 (en) Augmented books in a mixed reality environment
US10235693B2 (en) Method and system for providing advertisement based on gaze of user
CN102419631B (en) Fusing virtual content into real content
CN102591449A (en) Low-latency fusing of virtual and real content
Coppens Merging real and virtual worlds: An analysis of the state of the art and practical evaluation of Microsoft Hololens
CN106462233A (en) Display device viewer gaze attraction
CN106104423A (en) Pose parameter is regulated
CN106576156B (en) Wearable mediation reality system and method
CN109478096A (en) Display communication
KR20230026503A (en) Augmented reality experiences using social distancing
CN112789543B (en) Electronic equipment
CN108475114B (en) Feedback for object pose tracker
CN103329146A (en) Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual
KR20200029291A (en) Device for providing content and processing method thereof
Krasyuk et al. Ar/vr technologies and their applications in procurement
US20230004214A1 (en) Electronic apparatus and controlling method thereof
US11042345B2 (en) Systems, devices, and methods for interactive visual displays

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180109

Address after: Washington State

Applicant after: Gearbox LLC

Address before: Washington State

Applicant before: Searete LLC

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180206

Termination date: 20210124