WO2017071733A1 - Augmented reality stand for items to be picked-up - Google Patents

Augmented reality stand for items to be picked-up

Info

Publication number
WO2017071733A1
Authority
WO
WIPO (PCT)
Prior art keywords
shelving
hand
stand according
control unit
multimedia content
Application number
PCT/EP2015/074776
Other languages
French (fr)
Inventor
Carlo Filippo Ratti
Original Assignee
Carlorattiassociati S.R.L.
Application filed by Carlorattiassociati S.R.L. filed Critical Carlorattiassociati S.R.L.
Priority to PCT/EP2015/074776 priority Critical patent/WO2017071733A1/en
Priority to CN201580084194.0A priority patent/CN108292163A/en
Publication of WO2017071733A1 publication Critical patent/WO2017071733A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304: Detection arrangements using opto-electronic means

Abstract

An augmented reality stand assembly comprises a shelving (3) to support items (2) displayed for picking up; a contactless position detecting device (5) placed on top of the shelving (3) and configured to recognize a hand (7) indicating or pointing at a physical target position on the shelving (3) corresponding to a target item (2) and to determine a position of the hand (7); and a control unit (8) configured to select a multimedia content associated with the physical target position on the shelving (3) associated with the target item (2), on the basis of the position of the hand (7) detected by the contactless position detecting device (5).

Description

"AUGMENTED REALITY STAND FOR ITEMS TO BE PICKED-UP"
The present invention relates to a stand to be used in a selling area, such as a store, a supermarket, a shop or the like, where a consumer picks up from the stand one or more items for sale.
BACKGROUND OF THE INVENTION
Items are normally labelled with required information that is prescribed according to national or regional laws, for example instructions for medicaments, ingredients for food, or the like.
The need is felt to provide the consumer with more detailed information about an item for sale, in order to assist the consumer in choosing the item that best fits his/her needs. For example, the consumer may want additional information relating to the use of the product, such as examples, recipes or the like; after-use additional information, for example relating to recycling; or pre-use additional information such as manufacturing technology, carbon dioxide equivalent of the production process, origin of the product, price, nutritional values, ingredients or the like.
All the above required and non-required information cannot be shown on the packaging of all items, either because, in some cases, such packaging has a limited area for printing additional information, or because packaging is absent, as happens with stands for fresh food such as vegetables and fruit, or in self-service areas where a consumer chooses his/her own portion of an item. It is known to have staff provide additional information on items for sale.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an alternative way, in particular using multimedia contents and augmented reality, to provide required and/or non-required information about an item to be picked up by a person, in particular a consumer buying the item.
The object of the present invention is achieved by a stand according to claim 1.
Additional features of the invention are comprised in the dependent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention, it will now be further disclosed with reference to the accompanying figures, in which:
- figure 1a is a left view of a stand assembly according to a first embodiment of the present invention;
- figure 1b is a right view of a stand assembly according to a second embodiment of the present invention;
- figure 2 is a front view of figures 1a and 1b; and
- figure 3 is a schematic view of a contactless position detecting device provided in the stand assembly of figures 1a and 1b.
DETAILED DESCRIPTION OF THE DRAWINGS
In figure 1a, numeral 1 refers to a stand assembly in a supermarket where items 2 for sale are placed, waiting to be picked up by a consumer.
Stand 1 comprises a shelving 3 where items 2 are placed, and a support, e.g. legs 4, to keep shelving 3 off the ground G.
Preferably shelving 3 does not have lateral walls, so that items 2 can also be viewed from a lateral orthogonal view (see figures 1a, 1b).
Stand assembly 1 also comprises a contactless position detector 5 placed in a top position with respect to shelving 3. According to a first embodiment (figure 1a), it monitors a picking-up space S to intercept a picker, for example a person's hand 7, in particular a consumer's hand, indicating a target item 2, e.g. by stopping hand 7 in a physical target position, e.g. on top of target item 2, before picking it up to put it e.g. in a shopping trolley. In order to fully cover picking-up space S, an optical axis O of position detector 5 intercepts shelving 3.

As an alternative (figure 1b), position detector 5 is also configured to detect a pointing configuration of hand 7. For example, a pointing configuration detected by position detector 5 is when hand 7 points at a physical target position on shelving 3 with an index finger. In order to detect the pointing hand, optical axis O is inclined with respect to ground G and does not intersect shelving 3.

Picking-up space S (figure 1a) is where items 2 are placed and shall be accessed by hand 7 in order to pick up an item 2. In particular, picking-up space S is laterally bound by the external perimeter, in a plan view, of shelving 3. Along the vertical direction, picking-up space S is delimited by the highest edge of all items 2 on shelving 3 or, as an alternative, by the vertical position of detector 5.

Contactless position detector 5 senses the access of hand 7 inside picking-up space S, recognizes hand 7 and produces a signal that is processed by a control unit 8, also to establish the position of hand 7 in a predefined three-dimensional or two-dimensional reference frame. When the user stops the hand on top of a target item 2, the signal from position detector 5 is processed by control unit 8 to be associated with a pre-stored 3D mapping of shelving 3, where a 3D point is associated with an item. As an alternative (figure 1b), the signal from position detector 5 is further processed by control unit 8 in order to identify the position of hand 7 and also to identify a pointing direction, e.g. a direction that is parallel to an extended index finger of the consumer's hand 7. The direction is then associated with the position of hand 7 in order to identify the physical target position on shelving 3 and, thus, the target item 2 that the finger is pointing at. Preferably the signal from position detector 5 is such as to enable control unit 8 to distinguish between a hand or a pointing hand and another object.

In both embodiments, control unit 8 processes pre-stored 3D mapping data relating to a predefined position of items 2 on shelving 3 in the same reference frame as that of the position of hand 7. In this way control unit 8 is able to match the physical target position indicated or pointed at by hand 7 with the predefined positions associated with items 2 placed on shelving 3.
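By way of illustration, the matching just described can be sketched in a few lines of code. The following Python fragment is a minimal sketch, not part of the original disclosure: the plane model of shelving 3, the item table and all names are assumptions made for the example. It intersects a pointing ray (position of hand 7 plus pointing direction, as in figure 1b) with the shelving plane and matches the resulting physical target position against a pre-stored mapping.

    import numpy as np

    # Assumed pre-stored 3D mapping of shelving 3: item -> 3D position in
    # the same reference frame as the hand position (illustrative values).
    ITEM_POSITIONS = {
        "apples":  np.array([0.30, 0.95, 0.40]),
        "oranges": np.array([0.70, 0.95, 0.40]),
    }

    # Shelving 3 modelled as a plane: a point P0 on it and a unit normal N.
    SHELF_P0 = np.array([0.0, 0.95, 0.0])
    SHELF_N = np.array([0.0, 1.0, 0.0])

    def target_on_shelf(hand_pos, pointing_dir):
        # Intersect the pointing ray with the shelving plane (figure 1b case).
        d = pointing_dir / np.linalg.norm(pointing_dir)
        denom = SHELF_N @ d
        if abs(denom) < 1e-9:
            return None  # ray parallel to the shelf: no target position
        t = SHELF_N @ (SHELF_P0 - hand_pos) / denom
        return hand_pos + t * d if t > 0 else None

    def match_item(target, max_dist=0.25):
        # Match the physical target position with the pre-stored positions.
        if target is None:
            return None
        name, pos = min(ITEM_POSITIONS.items(),
                        key=lambda kv: np.linalg.norm(kv[1] - target))
        return name if np.linalg.norm(pos - target) <= max_dist else None

In the figure 1a case, the same match_item lookup can be applied directly to the detected position of hand 7, without the ray intersection.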
In particular, control unit 8 is configured to compare the position of hand 7, or the physical target position pointed at by hand 7, both sensed by the contactless position detector 5, with the stored positions of items 2 within the 3D mapping of shelving 3, and to select from a database an information or content that is associated with each physical target position. Such information is displayed on a display 10 that is preferably located on top of shelving 3, above detector 5.

The association of information may be one-to-one (biunivocal), i.e. to each predefined position of an item 2 corresponds one and only one piece of information associated with that position. As an alternative, identical items 2 are placed close to each other within a sub-area A of shelving 3, and a specific piece of information tagged for sub-area A is selected by control unit 8 to be displayed whenever the physical target position indicated or pointed at by hand 7 falls within the sub-area A. For example, when hand 7 is detected in, or points at, a sub-area A where apples are standing on shelving 3, and information or another multimedia content about apples is associated with or tagged for that sub-area A, then the information or multimedia content about apples appears on display 10. When the user moves his/her hand over another sub-area A, or points at another sub-area A where items 2 of another homogeneous food kind or product are placed, information or multimedia content about the other food product is displayed. A sub-area A may be a portion of shelving 3 or a shelf as a whole.

Preferably shelving 3 descends towards the consumer, so that monitoring and recognition of pointing hand 7 and of the pointing direction by detector 5 and control unit 8 is more precise, without interferences or hidden areas.
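The sub-area selection reduces to a point-in-region test. Below is a minimal sketch, assuming sub-areas A are stored as axis-aligned rectangles in the plan view of shelving 3 and tagged with content identifiers; all names and values are illustrative, not from the patent.

    # Assumed sub-areas A in plan view: (xmin, xmax, ymin, ymax) -> tag.
    SUB_AREAS = [
        ((0.0, 0.5, 0.0, 0.8), "multimedia_apples"),
        ((0.5, 1.0, 0.0, 0.8), "multimedia_oranges"),
    ]

    def content_for(x, y):
        # Return the tag of the sub-area A containing the target position.
        for (xmin, xmax, ymin, ymax), tag in SUB_AREAS:
            if xmin <= x <= xmax and ymin <= y <= ymax:
                return tag
        return None  # target position outside every sub-area A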
In an alternative layout, where the position detector is oriented in a similar way to that of figure 1a, shelving 3 may comprise a single large horizontal platform or shelf where homogeneous groups of items 2 are placed in respective sub-areas A. According to a non-illustrated embodiment of the present invention, where the position detector is oriented in a similar way to that of figure 1a, stand assembly 1 is basket-like, for example a horizontal freezer having a horizontal access opening or door. In such a case, the picking-up space is laterally bound by the basket-like structure, and contactless position detector 5 intercepts hand 7 when it accesses the picking-up space through the main opening of the basket-like stand. Furthermore, the position of hand 7 is calculated by control unit 8 on the basis of data from contactless position detector 5, in order to select the information to be displayed on the basis of the position of hand 7 when accessing the picking-up space.
According to a preferred embodiment of the present invention, applicable to both layouts of figures 1a and 1b, contactless position detector 5 comprises a first sensor unit S1 and a second sensor unit S2 different from first sensor unit S1. Sensor units S1, S2 are different in the sense that they respectively detect the same physical parameter, e.g. an electromagnetic radiation, in different wavelength bands, e.g. visible light and infrared light. As an alternative, sensor units S1, S2 respectively detect different physical parameters, e.g. sound or other air pressure waves and electromagnetic radiation respectively. It is also possible that stand 1 comprises an emitter to emit an energy wave that propagates over the picking-up space S (figure 1a) or towards the consumer (figure 1b), and sensor units S1, S2 detect the reflected energy in order to recognize hand 7 and/or pointing hand 7. Alternatively or in combination, sensor units S1, S2 may detect an energy wave emitted by hand 7. The provision of two different sensor units S1, S2 increases the reliability of detection, because the sets of data coming from the respective different fields, when matched, may be processed by known algorithms to enhance the detection of errors or to implement recognition strategies for hand 7 and/or the position of hand 7 and/or the direction pointed at by hand 7.
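The cross-checking between the two sensor units can be pictured with a toy example; the agreement threshold and the idea that each unit reports an independent 2D hand estimate are assumptions for illustration, not the patent's algorithm.

    def fuse_detections(hit_s1, hit_s2, max_px=10.0):
        # Accept a hand detection only when the two independent sensor
        # units agree; a large disagreement is treated as a detection error.
        if hit_s1 is None or hit_s2 is None:
            return None
        dx, dy = hit_s1[0] - hit_s2[0], hit_s1[1] - hit_s2[1]
        return hit_s1 if (dx * dx + dy * dy) ** 0.5 <= max_px else None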
It is also preferable that contactless position detector 5 is on the same side of shelving 3 as a consumer in a picking-up position. In particular, stand 1 comprises a consumer-facing side 12 opposite to a back face that, for example, contacts the wall of a shop or, as shown in figures 1a and 1b, another stand assembly. Consumer-facing front side 12 provides a furthermost front wall or portion or edge 13, which is the frontal projection proximal, along the horizontal direction, to the consumer in a picking-up position (figure 1a). It is preferable that contactless position detector 5 is on the same side of shelving 3 and items 2 with respect to furthermost wall or portion or edge 13. According to such a layout, the designer of stand 1 has greater freedom to obtain a minimal layout that brings out items 2 for sale, in order to capture the consumer's attention. Furthermore, when stand 1 has the layout of figure 1a, at the vertical level of furthermost wall or portion or edge 13, optical axis O of detector 5 is on the same side of shelving 3 with respect to furthermost wall or portion or edge 13. As an alternative, according to the layout of figure 1b, optical axis O is on the opposite side of shelving 3 with respect to furthermost wall or portion or edge 13 and is inclined towards ground G in order to detect pointing hand 7. When detector 5 comprises more than one optical axis, each optical axis shall comply with the respective configurations discussed above.
According to a preferred embodiment of the present invention, sensor unit S1 is a 3D depth sensor and sensor unit S2 is a color sensor.
In particular, sensor unit S2 captures 2D color images of the user, so that contactless position detector 5 registers and synchronizes the depth maps with the color images and generates a data stream that includes the depth maps and image data for output to control unit 8. In some embodiments, as described hereinbelow, the depth maps and color images are output to control unit 8 via a single port, for example a Universal Serial Bus (USB) port. In particular, the color images are useful to recognize, with the required level of precision, hand 7 and the pointing finger, i.e. the index finger, of hand 7.
Control unit 8 processes the data generated by contactless position detector 5 in order to extract 3D image information. For example, control unit 8 may segment the depth map in order to recognize the parts of the body of the consumer, in particular hand 7, and find their 3D locations. Control unit 8 uses this information to select from a database a multimedia content to be displayed, on the basis of the position of hand 7 within the picking-up space S (figure 1a) or on the basis of a physical target position pointed at by hand 7 (figure 1b) on shelving 3.
Generally, control unit 8 comprises a general-purpose computer processor, which is programmed in software to carry out these functions. The software may be downloaded to the processor in electronic form, over a network, for example, or it may alternatively be provided on tangible media, such as optical, magnetic, or electronic memory media.
For 3D mapping, sensor unit S1 comprises an illumination subassembly 15 that illuminates the object, e.g. hand 7, with an appropriate pattern, such as a speckle pattern. In this way the depth-related image data include an image of the pattern on the object, i.e. hand 7, and the processing circuitry is configured to generate the depth map by measuring shifts in the pattern relative to a reference image. An example of this is discussed in greater detail in US8456517 and is implemented in Kinect™ devices by Microsoft®. In particular, for this purpose, subassembly 15 typically comprises a suitable radiation source 16, such as a diode laser, LED or other light source, and optics, such as a diffuser 17 or a diffractive optical element, for creating the pattern on the object, i.e. hand 7, items 2, shelving 3 and other objects within the working range of illumination subassembly 15. Sensor unit S1 also comprises a depth image capture subassembly 18 that captures an image of the pattern on the object surface. Subassembly 18 typically comprises objective optics 19, which image the object surface onto a detector 20, such as a CMOS image sensor.
Radiation source 16 typically emits IR radiation, although other radiation bands, in the visible or ultraviolet range, for example, may also be used. Detector 20 may comprise a monochrome image sensor, without an IR-cutoff filter, in order to detect the image of the projected pattern with high sensitivity. To enhance the contrast of the image captured by detector 20, optics 19 or the detector itself may comprise a bandpass filter, which passes the wavelength of radiation source 16 while blocking ambient radiation in other bands.
Sensor unit S2 comprises a color image capture subassembly 25 that captures color images of the object. Subassembly 25 typically comprises objective optics 26, which image the object surface onto a detector 27, such as a CMOS color mosaic image sensor. Optics 26 or detector 27 may comprise a filter, such as an IR-cutoff filter, so that the pattern projected by illumination subassembly 15 does not appear in the color images captured by detector 27.
A processing device 28 receives and processes image inputs from subassemblies 18 and 25. Details of these processing functions are presented, for example, in US8456517 and are implemented in Kinect™ devices by Microsoft®. Briefly put, processing device 28 compares the image provided by subassembly 18 to a reference image of the pattern projected by subassembly 15 onto a plane 30, at a known distance D1 from contactless position detector 5. The reference image may be captured as part of a calibration procedure and stored in a memory 31, such as a flash memory, for example. Processing device 28 matches the local patterns in the captured image to those in the reference image and thus finds the transverse shift for each pixel 32, or group of pixels, within plane 30. Based on these transverse shifts and on the known distance D2 between the optical axes of subassemblies 15 and 18, the processing device computes a depth (Z) coordinate for each pixel. The known distance D3 between the optical axes of subassemblies 18 and 25 is used by the processing device to compute a shift between the color and depth images.
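The exact depth computation is given in US8456517; a commonly published approximation for such reference-plane triangulation, shown here only as a sketch, relates the measured shift to depth through the baseline D2 and an assumed focal length f expressed in pixels.

    def depth_from_shift(shift_px, d1, d2, f_px):
        # Reference-plane triangulation sketch (not the exact formula of
        # US8456517): d1 = distance of reference plane 30, d2 = baseline
        # between subassemblies 15 and 18, f_px = focal length in pixels
        # (an assumed calibration parameter). A zero shift yields Z = d1,
        # i.e. the object lies on the reference plane.
        return 1.0 / (1.0 / d1 + shift_px / (f_px * d2))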
Processing device 28 synchronizes and registers the depth coordinates in each such 3D map with the appropriate pixels in the color images captured by subassembly 25. The registration typically involves a shift of the coordinates associated with each depth value in the 3D map. The shift includes a static component, based on the known distance D3 between the optical axes of subassemblies 18 and 25 and on any misalignment between the detectors, as well as a dynamic component that is dependent on the depth coordinates themselves. An example of the registration process is also described in US8456517 and is implemented in Kinect™ devices by Microsoft®.
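The static-plus-dynamic nature of the registration shift can likewise be sketched; rectified, horizontally displaced cameras are assumed, with du0 and dv0 modelling the fixed misalignment (all names are illustrative assumptions).

    def register_to_color(u, v, z, f_px, d3, du0=0.0, dv0=0.0):
        # Map a depth pixel (u, v) with depth z onto the color image.
        # du0, dv0: static component (detector misalignment, assumed known);
        # f_px * d3 / z: dynamic component depending on the depth itself.
        return u + du0 + f_px * d3 / z, v + dv0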
After registering the depth maps and color images, processing device 28 outputs the depth and color data via a port, such as a USB port, to control unit 8. The output data may be compressed in order to conserve bandwidth.
According to the present invention, the consumer visually interacts with shelving 3 in order to select a target item 2. Display 10 is above shelving 3, in order to avoid any interference with the visual interaction of the consumer with shelving 3 during selection of the target item 2. In particular, the consumer shall move his/her eyes away from shelving 3 in order to watch multimedia contents on display 10. The latter further shows a visual feedback after control unit 8 has processed the position of hand 7 and has matched such position with the pre-loaded 3D mapping of shelving 3, in order to select the physical target position and, thus, the target item 2. Augmented reality combines multimedia contents with position detectors that interact with a consumer located in a physical environment. Multimedia contents are therefore added to the physical environment, in the present case stand 1 with its items 2, and the consumer is not exposed to a completely virtual environment, as happens in virtual reality systems.
In particular, hand 7 and/or pointing hand 7 is recognized with respect to other objects, and control unit 8 selects information from a pre-defined database where contents are stored and associated with a respective tag corresponding to either a punctual position on shelving 3 or a range of positions corresponding to sub-areas A. When hand 7 is located by detector 5 in the punctual position or the sub-area A, the related content in the database appears on display 10, so that the user can read, in a larger font, information on the item 2 placed in the punctual position or in sub-area A on shelving 3.
Preferably, the information or multimedia content for each kind of item 2 is divided into two or more sub-groups. Control unit 8 is programmed to select and show a visual feedback on display 10 when a physical target position on shelving 3 is identified. The visual feedback preferably comprises an image representing the item 2 associated with the physical target position identified by control unit 8. The visual feedback is kept for a relatively short time on display 10, e.g. for no more than 4 seconds, in order to give the consumer a chance to adjust the pointing direction in case the item desired by the consumer is not the one detected by position detector 5.
Control unit 8 further recognizes that the physical target position has not been changed by the consumer within a predefined time span, e.g. 3 seconds, and, in such a case, a first sub-group of multimedia contents is shown on display 10. Preferably, after a further predefined time span during which the physical target position does not change, a second sub-group of multimedia contents is shown. In this case it is preferred that, when showing the first sub-group of multimedia contents, display 10 also shows a visual feedback of the time remaining before switching to the second sub-group of multimedia contents.
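This timed behaviour reads naturally as a small dwell-timer. In the sketch below, the 4-second feedback cap and the 3-second dwell threshold come from the description above; the class structure, the duration of the first sub-group and all names are assumptions made for the example.

    import time

    DWELL_S = 3.0       # predefined time span before the first sub-group
    SUBGROUP1_S = 8.0   # assumed duration of the first sub-group

    class DisplayLogic:
        # Dwell-timer sketch for display 10: the feedback image stays on
        # screen for DWELL_S seconds, within the 4-second cap.
        def __init__(self):
            self.target, self.since = None, 0.0

        def update(self, target, now=None):
            # Return (content to show on display 10, countdown or None).
            now = time.monotonic() if now is None else now
            if target != self.target:      # consumer adjusted the pointing
                self.target, self.since = target, now
            if target is None:
                return "idle", None
            dwell = now - self.since
            if dwell < DWELL_S:
                return "feedback", None    # image representing target item 2
            if dwell < DWELL_S + SUBGROUP1_S:
                # first sub-group, with time remaining before switching
                return "subgroup1", DWELL_S + SUBGROUP1_S - dwell
            return "subgroup2", None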
Items 2 shall be placed on stand assembly 1 in the correct positions, i.e. matching the pre-stored database and tags, by the staff of the shop, supermarket or store. As an alternative, after the staff has placed items 2 on stand assembly 1, the database is updated so that the tags correspond to the positions of items 2 on stand assembly 1.
Finally, it is clear that modifications may be made to the stand assembly 1 disclosed and shown herein without departing from the scope of protection defined by the appended claims.
For example, it is possible that control unit 8 is programmed to recognize a pointing hand 7 and a pointing direction also when hand 7 enters the picking-up space S.

Claims

1. An augmented reality stand assembly comprising a shelving (3) to support items (2) displayed for picking up; a contactless position detecting device (5) placed on top of the shelving (3) and configured to recognize a hand (7) indicating or pointing at a physical target position on the shelving (3) corresponding to a target item (2) and to determine a position of the hand (7); and a control unit (8) configured to select a multimedia content associated with the physical target position on the shelving (3) associated with the target item (2), on the basis of the position of the hand (7) detected by the contactless position detecting device (5).
2. The stand according to claim 1, characterized in that the control unit (8) is configured to elaborate a direction pointed at by the hand (7) and to determine the physical target position on the shelving (3) on the basis of the position of the hand (7) and of the pointing direction.
3. The stand according to claim 1 or 2, characterized by comprising a consumer-facing front side (12) having a furthermost edge (13) proximal in the horizontal direction to the consumer, and in that the position detector (5) is on the same side of the shelving (3) with respect to the furthermost edge (13).
4. The stand according to claims 2 and 3, characterized in that the position detector (5) has at least one optical axis (O) and in that, at the horizontal level of the furthermost edge (13), the optical axis (O) is opposite to the shelving (3) with respect to the furthermost edge (13) and is inclined towards the ground (G).
5. The stand according to any of the preceding claims, characterized in that the shelving (3) is descending.
6. The stand according to any of the preceding claims, characterized by comprising at least a display (10) placed above the shelving (3), and in that the control unit (8) is programmed to show a feedback multimedia content on the display (10) after the physical target position on the shelving (3) is identified, the feedback multimedia content being associated with the physical target position on the shelving (3).
7. The stand according to claim 6, characterized in that the feedback multimedia content is kept for no more than 4 seconds.
8. The stand according to either claim 6 or 7, characterized in that the control unit (8) is programmed to change the multimedia content on the basis of the time elapsed during which the hand (7) is detected to remain in the same position.
9. The stand according to claim 8, characterized in that the control unit (8) is programmed to visualize on the display (10) a visual feedback of the time remaining before changing to the next multimedia content.
10. The stand according to any of the preceding claims, characterized in that the position detector (5) comprises a first sensor unit (S1) for 3D mapping and a second sensor unit (S2) for color 2D mapping.
11. The stand according to any of the preceding claims, characterized in that the physical target position is identified by the control unit (8) on the basis of a 3D mapping of the shelving (3).
12. The stand according to any of the preceding claims, characterized in that it is a shop or supermarket stand.
PCT/EP2015/074776 2015-10-26 2015-10-26 Augmented reality stand for items to be picked-up WO2017071733A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/EP2015/074776 WO2017071733A1 (en) 2015-10-26 2015-10-26 Augmented reality stand for items to be picked-up
CN201580084194.0A CN108292163A (en) 2015-10-26 2015-10-26 Augmented reality exhibition booth for article to be selected

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/074776 WO2017071733A1 (en) 2015-10-26 2015-10-26 Augmented reality stand for items to be picked-up

Publications (1)

Publication Number Publication Date
WO2017071733A1 (en)

Family

ID=54366199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/074776 WO2017071733A1 (en) 2015-10-26 2015-10-26 Augmented reality stand for items to be picked-up

Country Status (2)

Country Link
CN (1) CN108292163A (en)
WO (1) WO2017071733A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10085571B2 (en) 2016-07-26 2018-10-02 Perch Interactive, Inc. Interactive display case
US11488235B2 (en) 2019-10-07 2022-11-01 Oculogx Inc. Systems, methods, and devices for utilizing wearable technology to facilitate fulfilling customer orders

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448612A (en) * 2018-12-21 2019-03-08 广东美的白色家电技术创新中心有限公司 Product display device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5355399B2 (en) * 2006-07-28 2013-11-27 コーニンクレッカ フィリップス エヌ ヴェ Gaze interaction for displaying information on the gazeed product
US20110069869A1 (en) * 2008-05-14 2011-03-24 Koninklijke Philips Electronics N.V. System and method for defining an activation area within a representation scenery of a viewer interface
US20110141011A1 (en) * 2008-09-03 2011-06-16 Koninklijke Philips Electronics N.V. Method of performing a gaze-based interaction between a user and an interactive display system
US20130316767A1 (en) * 2012-05-23 2013-11-28 Hon Hai Precision Industry Co., Ltd. Electronic display structure

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6141034A (en) * 1995-12-15 2000-10-31 Immersive Media Co. Immersive imaging method and apparatus
WO2014124612A2 (en) * 2013-02-18 2014-08-21 Valencia Zapata Pablo Andrés Intelligent shelves for points of sale
US20150002388A1 (en) * 2013-06-26 2015-01-01 Float Hybrid Entertainment Inc Gesture and touch-based interactivity with objects using 3d zones in an interactive system
US20150102047A1 (en) * 2013-10-15 2015-04-16 Utechzone Co., Ltd. Vending apparatus and product vending method

Also Published As

Publication number Publication date
CN108292163A (en) 2018-07-17

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15788366

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15788366

Country of ref document: EP

Kind code of ref document: A1