WO2000005639A2 - Method and system for providing an avatar interactive computer guide system - Google Patents


Info

Publication number
WO2000005639A2
WO2000005639A2 · PCT/US1999/016808
Authority
WO
WIPO (PCT)
Prior art keywords
displayed
screen element
avatar
screen
user
Prior art date
Application number
PCT/US1999/016808
Other languages
French (fr)
Inventor
Terry A. Thompson
James W. Osborne
Petra F. Varney
Helene R. Dahlander
Original Assignee
Originet
Priority date
Filing date
Publication date
Application filed by Originet filed Critical Originet
Priority to AU57705/99A priority Critical patent/AU5770599A/en
Publication of WO2000005639A2 publication Critical patent/WO2000005639A2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0489Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
    • G06F3/04895Guidance during keyboard input operation, e.g. prompting
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/453Help systems

Definitions

  • the present invention relates generally to computer help systems, and more particularly to providing an avatar guide system that is integrated with an interactive computer program.
  • help systems are typically separate from the computer program, requiring the user to leave the normal UI for accessing functionality of the computer program and to interact with a separate UI for the help system.
  • help systems include a tutorial program which a new user can consult for basic information about how to operate the computer program.
  • help systems make various documentation (e.g., a user's manual) available electronically, including features such as indexes, tables of contents, and facilities for searching for particular terms.
  • Such help systems suffer from a variety of problems.
  • An avatar (i.e., a graphical representation of an entity) can visually represent a user. For example, when multiple users interact in a networked multi-user game, it is common for each user to choose an avatar to visually represent them when their character is present on a screen.
  • Such avatars are typically still images (e.g., a celebrity's face or an animal) or animated characters which can move about the screen.
  • avatars can also be displayed to represent the computer or an executing program, including the operating system or an application program.
  • an executing program could include an animated avatar face that is displayed on the computer screen, and if the program is executing on a sound-capable computer, the avatar could include an audio greeting to the user with the audio component synchronized with the moving animated face.
  • Such avatars encourage users to anthropomorphize the computer or computer program, thus making the user less leery about interacting with the computer.
  • Such avatars have a variety of drawbacks.
  • animated avatars it can require a significant amount of time for software developers to create the avatar, particularly if the avatar has details such as moving lips that are to be synchronized with audio.
  • some users will find such animations childish and tiresome.
  • rendering the series of frames necessary for realistic animation requires significant processing power from the computer, because typical animations include only a few predefined still images and use tweening software to calculate the intermediary images between the predefined images at run time.
  • the requirement for significant processing power limits the use of such avatars to high-powered computers and restricts the processing power available to execute other computer programs.
  • still-image avatars their use requires less development time and less computer processing power, but such avatars are also less effective at conveying information to the user (e.g., they cannot demonstrate human-like movements) and are less effective at providing an anthropomorphic facade to the computer or program.
  • informational kiosks with touch-sensitive screens can be used to convey information.
  • Such systems have become common in public places such as airports and visitor information centers.
  • Such kiosks typically remove most of the computer system from the view of the user, leaving only the display accessible.
  • the kiosk can then display various screens of information and instructions, with each screen including at least one user-activatable button screen element. The user can then select such a button (i.e., make it the current focus) and activate it (i.e., invoke the corresponding action) by touching the portion of the screen on which the button is displayed.
  • buttons e.g., a picture of a bus to access bus schedules
  • instructional text on the screen
  • kiosk systems can be effective at presenting limited amounts of information to users, these systems have various drawbacks. Since such kiosks must be useful to first-time users, the UIs for such kiosks must be extremely limited. For example, kiosk UIs typically include a few user-activatable buttons with intuitive icons. If the user wishes to locate information that is not accessible from a button on the current screen, it can be very difficult to locate the information. In addition, such kiosks typically require the user to select and activate a UI button to begin a session. For novice users leery of computers, the need to immediately interact with the kiosk in order to use it can be daunting.
  • Some embodiments of the present invention provide a method and system for providing an avatar computer guide system that is integrated with an interactive computer program.
  • the system provides a human or anthropomorphic avatar that moves about the UI screens of a computer program and interacts with various displayed screen elements.
  • the user-activatable screen elements with which the avatar interacts are the same screen elements with which the user can interact, and these screen elements are selectable by the user at any time.
  • the avatar can act as a guide to operating the functioning interactive computer program. This type of guidance from the avatar can supplement a separate help system for the computer program or eliminate the need for such a help system.
  • the avatar can attract new users and interest them in using the computer program.
  • the system plays an audio segment synchronized with the lips of the displayed human avatar to assist the user. The audio audibly explains how to use the input device to activate the displayed product screen element (in order to obtain information about the physical product that the element represents) and how to use the input device to activate the displayed control screen element (in order to show on the display the screen that the element represents).
  • the human avatar is displayed in such a manner that the current screen resembles a television program.
  • the avatar can be implemented in a variety of ways.
  • a human avatar is implemented as a pre-recorded full-body, full- motion video image of a live person, and in an alternate embodiment a human avatar is implemented as a computer-generated representation of a human.
  • an avatar can be a recorded animation of an anthropomorphic figure that is played at the time of execution.
  • the system can be used in a variety of settings.
  • the system is used as part of a kiosk for consumers in a retail store, providing product and store information as well as displaying various advertisements.
  • an avatar may interact with and demonstrate how to use various products in the store and how to move about the store to find products.
  • the system can be used as a guide to a physical location, as a guide to goods or services not located in a single place, as a virtual shopping cart, as a guide to a virtual location, in movie theaters, on interactive airplane screens, etc.
  • the avatar can be customized in a variety of ways, such as to reflect the current user or the current geographic location of the system.
  • the customization can include both alterations to the appearance of the avatar as well as to auditory capabilities of the avatar.
  • the system includes one or more input devices which can receive information from or related to a user and then vary the avatar or computer program to reflect the information, such as a bar code scanner so that a user can scan the UPC code on a product to immediately obtain information about that item.
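The scanner integration described above can be sketched as a simple lookup from a scanned code to a product screen. This is an illustrative sketch only; the class and method names (`ProductCatalog`, `onScan`) are assumptions, not names from the patent.

```typescript
// Hypothetical sketch: a UPC read by the bar code scanner is looked up
// and the UI jumps straight to that product's information screen.
interface ProductInfo {
  name: string;
  screenId: string; // identifier of the product information screen
}

class ProductCatalog {
  private byUpc = new Map<string, ProductInfo>();

  add(upc: string, info: ProductInfo): void {
    this.byUpc.set(upc, info);
  }

  // Called when the scanner reports a UPC; returns the screen to show,
  // or null if the code is unknown to the catalog.
  onScan(upc: string): string | null {
    const info = this.byUpc.get(upc);
    return info ? info.screenId : null;
  }
}
```

A kiosk would register its products at startup and route the returned screen identifier to its display logic.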
  • the system provides one or more wait screens which can be displayed when the computer system is not currently interacting with a user.
  • FIGS 1 through 5 are examples of user interface screens of a computer program containing an Integrated Avatar Guide (IAG) system.
  • Figure 6 is a block diagram illustrating an embodiment of the IAG system of the present invention.
  • Figure 7 is a flow diagram of an embodiment of the IAG Displayer routine 700.
  • Figure 8 is a flow diagram of an embodiment of the IAG Constructor routine 800.
  • Figure 9 is a flow diagram of an embodiment of the Construct Main Screen subroutine 820.
  • Figure 10 is a flow diagram of an embodiment of the Construct Wait Screen subroutine.
  • Figure 11 is a flow diagram of an embodiment of the Construct Other Screen subroutine 835.
  • An embodiment of the present invention provides a method and system for providing an avatar computer guide system that is integrated with an interactive computer program.
  • an Integrated Avatar Guide (IAG) system provides a human or anthropomorphic avatar that moves about the UI screens of a computer program and interacts with (i.e., selects and/or activates) various displayed screen elements.
  • the user-activatable screen elements with which the avatar interacts are the same screen elements with which the user can interact, and these screen elements are selectable by the user at any time.
  • the avatar acts as a guide to operating the functioning interactive computer program.
  • This guidance from the avatar of the IAG system can supplement a separate help system or eliminate the need for such a system.
  • the IAG system is used as part of a kiosk for consumers in a retail store, providing product (i.e., goods or services) and store information as well as displaying various advertisements.
  • the avatar may interact with and demonstrate how to use various goods or services available in or through the store, as well as how to move about the store to find products.
  • the avatar is a full-body, full-motion video clip of a live person, presented in a manner so that the human avatar seamlessly interacts with screen elements that are not part of the video clip and provides a computer program UI that resembles a television program.
  • a computer program using the IAG system to supplement its UI has a main screen that is initially displayed to the user, and typically has a variety of other screens accessible by user interaction with the program.
  • a consumer kiosk at a retail store would typically have informational screens about the various departments in the store (e.g., music and home electronics) as well as informational screens for each product which is available at or through the store.
  • Each screen has a background (e.g., a solid color or an image) and a variety of screen elements displayed on top of the background.
  • most screens include instructional screen elements that provide information about using the program, control screen elements that can be used to access program functionality, and information screen elements related to the type of information the program is providing.
  • a screen element can be displayed using any type of media which can be presented to the user, such as an icon, an image, a video clip, an audio clip, an animation, a display screen of an executing software program, etc.
  • Some screen elements, such as control screen elements are user- activatable, allowing a user to select and activate them and thus affect the execution of the program (e.g., changing screens).
  • a user-activatable screen element will "flare" (i.e., change appearance) to indicate when it is selected. For example, in some UIs a single-click on a screen element will select it, and a double-click or an additional click on the screen element will activate it. For touch-sensitive screens, placing a finger on the screen over a screen element will typically select it and removing the finger will typically activate it.
  • Instructional screen elements are not user-activatable, instead merely providing an informational source (e.g., "Touch A Displayed Product Below").
  • Information screen elements such as representations of products or store departments in a retail store, can be either user-activatable or non-user-activatable. When a user- activatable screen element is activated, the execution of the program is accordingly modified.
  • the main screen of a computer program typically includes a variety of instructional, information, and control screen elements.
  • the IAG avatar moves about the screen, often accompanied by appropriate corresponding audio, explaining and demonstrating how to interact with the user-activatable screen elements. For example, the avatar may walk over to an information screen element and explain that activating the screen element will display particular information. The avatar can also demonstrate how to select the screen element, with it flaring in response, and how to activate the screen element to display the information. If the IAG system is implemented on a computer system with a touch-sensitive screen, the avatar can demonstrate activating a displayed screen element by touching the edge or the interior of the screen element. The avatar could also approach and select a control screen element, such as a Main Screen button that returns the UI to the main screen. At any time, the user can select any user-activatable screen element to exert control over the UI.
  • a control screen element such as a Main Screen button that returns the UI to the main screen.
  • the avatar can interact with screen elements in other ways.
  • the avatar could approach a screen element that represents a physical product and that currently displays the front of the product.
  • the avatar could interact with the product screen element to cause the display of the product to rotate, thus displaying the sides or the back of the product.
  • the IAG system also supports layering in which a screen element can be placed on top of another screen element, thus allowing the avatar to move in front of or behind other screen elements.
  • the avatar can build a screen by carrying various screen elements (e.g., control and product buttons) and placing them about the screen. If the screen element carried by the avatar is a user- activatable screen element, the user can select and activate the screen element as soon as it is present on the screen.
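The layering described above amounts to keeping screen elements in a z-order so the avatar can pass in front of or behind them. The following is a minimal sketch under that assumption; `LayerStack` and its methods are invented names for illustration.

```typescript
// Sketch: screen elements held in back-to-front z-order. Moving the
// avatar's layer in this list makes it appear in front of or behind
// other screen elements.
class LayerStack {
  private order: string[] = []; // back-to-front

  add(id: string): void {
    this.order.push(id); // new elements start frontmost
  }

  // Reposition `id` immediately in front of `other` in the z-order.
  moveInFrontOf(id: string, other: string): void {
    const from = this.order.indexOf(id);
    if (from < 0) return;
    this.order.splice(from, 1);
    const to = this.order.indexOf(other);
    if (to < 0) {
      this.order.push(id);
      return;
    }
    this.order.splice(to + 1, 0, id);
  }

  isInFrontOf(a: string, b: string): boolean {
    return this.order.indexOf(a) > this.order.indexOf(b);
  }
}
```

In a web-page implementation the same effect would typically be achieved with CSS stacking order on the element containers.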
  • the avatar can also demonstrate how to interact with the type of information being presented.
  • the avatar could carry and use a product such as a cellular phone, thus demonstrating the benefits of the product.
  • the avatar could select a product that can be previewed via the UI, such as a music selection, a video tape, a video game, or a software program. In this situation, the avatar could play or use the corresponding product, thus demonstrating not only how to use the product but also how the product functions.
  • the avatar could select locations on a map, change the background of the screen to display that location, and demonstrate how and where to accomplish a goal at that location (e.g., buying theater tickets at a downtown ticket window).
  • the behavior of the avatar is pre-defined, such as by pre-recording a live human actor who walks about and makes various planned gestures or by pre-generating the frames of an anthropomorphic animation.
  • the IAG system can implement the avatar by playing a pre-recorded video clip, such as with streaming media or by pre-loading the clip.
  • the video can be displayed as full-motion video (i.e., at least 30 frames a second) or as freeze-frame video (i.e., more than one frame a second but less than 30 frames a second).
  • Video clips can be stored in a variety of video formats, such as MPEG, Motion JPEG, QuickTime and AVI.
  • video images are displayed using a codec that converts digital information to an analog signal.
  • the IAG system When the avatar is implemented using a pre-recorded or pre-generated video clip, the IAG system will place appropriate screen elements so as to correspond to the gestures of the avatar. For example, if it is known that the avatar will walk to the upper right corner of the screen and gesture to its right 15 seconds into the video clip, the IAG system can place a user-activatable button at that location and cause it to flare at that time. In this embodiment, the interaction of the avatar with other screen elements that are not part of the video clip can be performed sufficiently seamlessly so that the UI resembles a television program.
  • the avatar may not be present at all, or may be present in a manner other than full-body, full-motion video (e.g., audio only, head only, or hand only).
  • the avatar could also provide non-audible explanations such as through the use of sign language.
  • one or more avatars can be displayed in a variety of ways, and can interact with other screen elements in a variety of ways.
  • the IAG system also provides one or more wait screens which can be displayed when the computer system is not currently interacting with a user.
  • the computer program can be executing and displaying a screen before a user arrives, thus eliminating the need for the user to locate and execute the program.
  • the wait screens can be used for a variety of purposes such as to display default information (e.g., a map of the current location), to maintain a particular ambience (e.g., a soothing background image and corresponding music), or to attract a user to the computer (e.g., display advertising information).
  • the program can be configured to display either the main screen or a wait screen.
  • the wait screen When a user begins to interact with the computer (e.g., by touching a touch-sensitive computer screen when a wait screen is displayed), the wait screen will be replaced with an appropriate non-wait screen.
  • This non-wait-screen will typically be determined by previous user interactions with the computer. For example, if the program has displayed a screen other than the wait screen longer than a predetermined amount of time without user input, the IAG system may assume that no user is present and thus change the display from the current screen to a wait screen. However, if the user promptly indicated that they wished to continue using the program (e.g., by touching a touch-sensitive screen), the previously displayed screen would be redisplayed.
  • the IAG system would instead assume that the next user input would correspond to a new user and would thus display the main screen when the user input was received. If the IAG system is able to sense whether a user is present or not, then the appropriate screens can be displayed accordingly, such as always returning to the previously displayed screen or never switching to the wait screen while a user remains at or near the computer.
  • the IAG system can also provide various types of customization for different executing copies of the same computer program, or for different users of a particular copy. For example, the same program can be customized for different sublocations at a particular location (e.g., different departments within a retail store).
  • the avatar can also be customized to reflect different situations, such as using different races and genders for the avatar at different locations, or varying the audio portion of the avatar only to reflect different languages or regional accents. If the avatar demonstrates use of a product specific to a certain type of user, such as children's toys or clothes, the avatar may morph to a child of the appropriate age. The persona of the avatar can also be modified to suit different situations.
  • the avatar can take on the persona of an expert (e.g., Bob Vila for home improvement tools), a celebrity (e.g., Michael Jordan to endorse shoes), or a friend (e.g., advising you on relative advantages of different products based on their features).
  • the avatar may be varied based on the user, such as presenting a child avatar for a child user.
  • User information can be gathered in a variety of ways, such as determining demographic information about the user based on their selections.
  • the computer system may be able to sense the identity of the user (e.g., via a magnetic-strip card or information entered by the user).
  • the avatar can change to best demonstrate the information being conveyed and to best assist the user in retrieving the information they desire.
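The customization described above reduces to choosing an avatar variant (clip plus audio) that matches the location and, when known, the user. This is an illustrative sketch; the variant fields and fallback order are assumptions, not the patent's design.

```typescript
// Hypothetical sketch: pick the avatar variant best matching the
// location's language and the user's age group, with graceful fallback.
interface AvatarVariant {
  clip: string;                    // video/audio asset for this variant
  language: string;                // e.g., "en", "fr"
  ageGroup: "child" | "adult";
}

// Prefer a variant matching both language and age group, then language
// alone, then fall back to the first (default) variant.
function chooseAvatar(
  variants: AvatarVariant[],
  language: string,
  ageGroup: "child" | "adult",
): AvatarVariant {
  return (
    variants.find(v => v.language === language && v.ageGroup === ageGroup) ??
    variants.find(v => v.language === language) ??
    variants[0]
  );
}
```

The same selection scheme could key on other attributes mentioned in the text, such as regional accent or persona.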
  • the IAG system can be used in a variety of other ways. These include the following non-exhaustive list of computer displays using the IAG system.
  • Units on shopping carts that allow a consumer to receive information about products, including their location and availability. These units could also track purchases.
  • IAG system can be used in a variety of other applications.
  • FIGs 1 through 5 are examples of user interface screens of a computer program containing an Integrated Avatar Guide (IAG) system.
  • Figure 1 illustrates an IAG Main Screen 100 for a consumer kiosk with a touch-sensitive screen at a retail store.
  • These screens are presented for illustrative purposes only, and those skilled in the art will appreciate that elements of the screens can be varied in a variety of ways, including location and appearance.
  • the use of the IAG system is not limited to the various elements of this exemplary embodiment, including use as a consumer kiosk, at a retail store, or with a touch-sensitive screen.
  • the main screen includes a background 105 and various screen elements superimposed on top of the background.
  • a human avatar screen element 110 serves as an interactive guide to the operation of the computer program.
  • the human avatar is a full-body full- motion video clip that was filmed with a live actor, and the main screen is displayed as a World Wide Web page including containers for each of the screen elements.
  • the human avatar can be displayed in a variety of ways in this embodiment, such as with a pre-loaded video file in a format such as MPEG or as streaming video.
  • the same video clip is always used as the video clip for the main screen.
  • different video clips could be selected from a set, or multiple video clips could be used simultaneously, sequentially, at random from a fixed or unbounded set, or according to any other algorithmic process.
  • the human avatar moves about a portion of the screen consisting of most of the left half of the screen.
  • a web page template for the main screen can implement the human avatar by defining a container for this left portion of the screen and associating a corresponding video file with that container.
  • the human avatar moves about her portion of the screen and audibly explains how to touch user-activatable screen elements in order to retrieve additional information about the retail store and the products it contains.
  • the avatar can also walk out of the video and back into it. If the edge of the container is near the edge of the screen, the avatar will thus appear to be walking on and off the screen.
  • Most of the right portion of the screen consists of four containers for user-activatable product information screen elements 125.
  • the IAG system or the program can specify any four products for the product information screen elements each time the main screen is generated.
  • the current products include a videotape, a television, a phone, and a CD.
  • other types of products such as goods not physically present in the store or various services available to the user, can be displayed.
  • information other than products can also be made available to users, such as a directory of store contents.
  • the product information screen element can de-flare when the human avatar no longer points to it. This interaction can be based upon a predetermined time in the video clip in which the human avatar points toward a particular container, or the flaring can be coordinated with the video clip in a variety of other ways. Moreover, even if the particular products are not known at the time of filming, it is still possible for the avatar to interact with screen elements in other ways, such as carrying them and placing them about the screen. For example, the human actor at the time of filming can carry a white circle, and the IAG system at program execution time can layer a particular screen element on top of the circle. This can be performed in a variety of ways, such as pre-calculating the position of the white circle over time as the avatar moves, or by tracking the white circle in real-time as it is displayed.
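The pre-calculated-position approach described above (sampling the path of the white placeholder carried by the filmed actor, then layering the product image at that position during playback) can be sketched with simple keyframe interpolation. The data shape and function name here are assumptions for illustration.

```typescript
// Sketch: the marker's on-screen path is sampled ahead of time; at
// playback, the layered screen element is placed at the interpolated
// marker position for the current video time.
interface Sample { t: number; x: number; y: number } // keyframed marker path

// Linearly interpolate the marker position at video time t from the
// pre-calculated samples (assumed sorted by t), clamping at both ends.
function markerAt(path: Sample[], t: number): { x: number; y: number } {
  if (t <= path[0].t) return { x: path[0].x, y: path[0].y };
  const last = path[path.length - 1];
  if (t >= last.t) return { x: last.x, y: last.y };
  for (let i = 1; i < path.length; i++) {
    const a = path[i - 1];
    const b = path[i];
    if (t <= b.t) {
      const f = (t - a.t) / (b.t - a.t);
      return { x: a.x + f * (b.x - a.x), y: a.y + f * (b.y - a.y) };
    }
  }
  return { x: last.x, y: last.y };
}
```

The real-time alternative mentioned in the text would replace the keyframed path with per-frame detection of the white circle in the video.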
  • the main screen includes various non-user-activatable information screen elements 115.
  • These screen elements provide information to the user but cannot be activated to affect the operation of the program.
  • the EnterVision information screen element can provide a trade name for this specific computer program or IAG system.
  • the credit card icon information screen element can indicate that the store accepts credit cards or that a particular form of electronic commerce is available using the computer.
  • screen elements which are not user- activatable on one screen may be user-activatable on another screen, and that the screen to be displayed upon selection of a particular user-activatable screen element can vary at different times. For example, from one screen a user-activatable credit card icon screen element could display a printable credit card application, while the same icon screen element may not be user-activatable from other screens.
  • the template for the main screen also includes a series of user- activatable product type information screen elements 130 that are placed near the bottom of the main screen.
  • these product type information screen elements correspond to different departments within the retail store.
  • a user can select any of these product type information screen elements by pressing the touch-sensitive screen where the corresponding screen element is displayed.
  • the program will replace the main screen with a product type information screen corresponding to the selected product type.
  • the human avatar can discuss and interact with the displayed product type information screen elements.
  • the exemplary embodiment also includes a screen of advertisements for daily specials at the store.
  • the user can access this screen by selecting the user-activatable advertisement information screen element 135 labeled "Today's Specials."
  • the program replaces the main screen with a daily special screen including product information and advertisements for the various products on special.
  • Each screen of the exemplary program also includes at least one user- activatable control screen element 140.
  • the Main Screen control screen element is the only such screen element.
  • These control screen elements allow the user to control the operation of the program. For example, selecting the Main Screen control screen element will always return the UI of the program to the main screen, regardless of the screen displayed when the control screen element is selected. As with other user-activatable screen elements, the human avatar moves near the screen element and demonstrates how to select and activate the screen element. Selecting the Main Screen control screen element 140 as it is currently displayed in the exemplary embodiment will have no effect since the main screen is already displayed. In an alternate embodiment, however, selecting the Main Screen control screen element could cause the main screen to be regenerated from the main screen template, thus causing new products to be selected for the product information screen elements 125.
  • the main screen could include a map of the retail store, thus showing the user their current physical location and having the avatar demonstrate how to move to a desired department.
  • the consumer kiosk could be located at the entrance of the store, with the main screen serving as a greeter to customers as they enter.
  • any screen element could be located at any location, could be user-activatable or not, and could vary in appearance.
  • the container for the human avatar could fill the entire screen, with other screen elements displayed on top of the video clip for the avatar.
  • the IAG product type information screen 200 for the Home Video department is displayed. This screen is displayed when the home video product type information screen element 130 is activated by either the user or the human avatar. As is apparent, some screen elements remain constant between screens 100 and 200, while other screen elements change. Thus, a product type information screen could be generated completely anew from a product type information screen template, or the screen could instead be created by modifying portions of the main screen as necessary. While the human avatar screen element 210 appears similar to the human avatar on the main screen, the exemplary embodiment uses a different video clip that is associated with this screen template. Thus, a completely different human could be used as the human avatar on different screens.
  • the human avatar can instead discuss the particular products shown, such as how to use the products or the benefits of a particular product.
  • the human avatar 210 is shown holding a videotape which she is discussing. This videotape could be one of the four videotapes shown in the product information screen elements, or it could be a different videotape.
  • the human avatar can acquire a product to discuss in a variety of ways. For example, the human avatar could walk to the edge of the screen and appear to reach off the screen to pick up a product. Alternately, the human avatar could walk over to a displayed product information screen element, reach out, and appear to retrieve the product.
  • the retrieved video could disappear from its corresponding product information screen element while the human avatar holds the product.
  • the human avatar could populate the product information screen elements with products by retrieving a video from off screen, carrying the product over to a product information screen element, and placing the product into the screen element.
  • information for the product could appear in that product information screen element.
  • the product type information screen also displays a series of user-activatable product category information screen elements 230. These screen elements correspond to different categories of products within the home video type of product. In response to a selection of a particular product category information screen element, products of the selected category can be displayed on either a new screen or in the product information screen elements 225 on the current screen.
  • the current screen includes a new user-activatable control screen element 240 that is labeled "back." Selection of this control screen element removes the current screen and displays the previously displayed screen.
  • FIG. 3 illustrates an example of IAG product category information screen 300. While a video image of the human avatar is not displayed on this screen, an audio-only human avatar screen element 310 allows the human avatar to audibly provide information about products and instructions on how to use the program.
  • the left-most column of the screen includes several user-activatable product category information screen elements 330. These screen elements indicate various product categories for the music type of product. In the exemplary embodiment, product categories consist of different musical artists. Alternately, music could be categorized into a variety of groupings such as types of music (e.g., jazz or pop) or musical eras (e.g., the '60s or disco). Since more artists are available than there is room for product category information screen elements, the user-activatable control screen elements 340 of an up arrow and a down arrow are displayed. These arrow control screen elements allow the user to scroll through the entire list of artists.
  • the bottom-most artist is removed from the screen and each other displayed artist is moved to the next lower product category information screen element. A new artist is then added to the top-most product category information screen element. Selection of the down arrow would then return the screen to the previous display. If the user instead selects a particular artist, such as Peter Gabriel, a list of products for that artist is then displayed in the user-activatable product information screen elements 325 in the next column to the right. User selection of a particular product results in the display of additional information about the product.
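For illustration only, the scroll behavior described in the preceding bullet can be sketched as a movable window over the full artist list; the class and method names below are invented and do not appear in the patent.

```python
# Illustrative sketch (not from the patent): the up/down arrow control screen
# elements move a fixed-size window over the full list of artists.
class ScrollingList:
    def __init__(self, items, window_size):
        self.items = items
        self.window_size = window_size
        self.top = 0  # index of the artist shown in the top-most screen element

    def visible(self):
        """Artists currently shown in the product category screen elements."""
        return self.items[self.top:self.top + self.window_size]

    def scroll_up(self):
        # Up arrow: the bottom-most artist is removed, each displayed artist
        # moves to the next lower element, and a new artist appears at the top.
        if self.top > 0:
            self.top -= 1

    def scroll_down(self):
        # Down arrow: the inverse movement, returning toward later artists.
        if self.top + self.window_size < len(self.items):
            self.top += 1
```

Selecting the down arrow after an up arrow returns the window to its previous position, matching the behavior described above.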
  • a product category information screen could group products other than music, and could group music in ways other than by artist.
  • product types may have different numbers and types of product category levels within that product type, including none and more than one.
  • Product category screens could also include information for goods and services not currently available in the store.
  • any screen element could be located at any location, could be user-activatable or not, and could vary in appearance.
  • IAG product information screen 400 is displayed.
  • a particular entertainment software product has been selected, and various information about the product is displayed.
  • product information screen element 425 on the right middle side of the screen can include information such as a video clip of a user interacting with the software, or the UI for the software product as it actually executes. If no additional information is available about the product, the screen element will not be user-activatable.
  • Various other types of product-related information can be displayed in other screen elements.
  • user-activatable screen elements can include product and advertisement information for other related products (e.g., related entertainment software products, related non-entertainment software by the same manufacturer, related non-software products such as hardware or clothing with a software logo, etc.), or even for unrelated products.
  • the IAG system does not display a human avatar on this screen. This can be predefined as a part of the screen, or it can be determined on a dynamic basis.
  • the screen could display the avatar using the product, could include warranty information for the product or reviews of the product, etc.
  • Product screens could similarly include information such as the number of items in stock for a product or an expected delivery date if back-ordered, as well as pricing and a physical store location of the product.
  • any screen element could be located at any location, could be user-activatable or not, and could vary in appearance.
  • IAG product comparison screen 500 is shown. This screen may be displayed if the user explicitly selects multiple products and indicates a desire to see a feature comparison. If so, the features of the various products can be compared by the program. Alternately, if the program detects that the user has selected product information for multiple related products in succession, the program could ask whether the user wishes to view such a screen or could display the screen of its own accord.
  • the consumer kiosk could be located at the entrance of the store, serving as a greeter to customers as they enter.
  • the human avatar can move about the screens and demonstrate to the user how to use the program to retrieve information and how to use products.
  • the similarity of the full-body full-motion video image of a human to a television program will ease the concerns of users leery of computers, and the integrated human avatar guide system can remove the need for a separate help system for the program.
  • the IAG Constructor also adds control and instructional screen elements that correspond to the retrieved template, and selects an appropriate instance of the avatar for this template. Finally, the IAG Constructor combines the retrieved information to create the screen, including links for each user-activatable screen element to other appropriate screen templates in the interface tables. The IAG Constructor then sends the completed screen information to the IAG Displayer for display on the display 667.
  • FIG. 7 is a flow diagram of an embodiment of the IAG Displayer routine 700.
  • the IAG Displayer routine executes on an IAG client and retrieves information from the IAG server to create an appropriate main screen for a computer program, display the main screen to a user, use an avatar guide system to explain the operation of the program to a user, respond to user requests for additional information, and, after a period of nonuse, switch to a wait screen designed to attract users to the display.
  • the avatar is a human avatar that is implemented using pre-recorded video of a live person.
  • the IAG Displayer routine begins at step 705 where the routine contacts an IAG server to retrieve the main screen for this client. In one embodiment, each screen is a web page.
  • the routine continues to step 710 where it receives the main screen from the IAG server.
  • In step 715 the routine stores the main screen and then continues to step 720 to display the screen.
  • In step 725 the routine sets a timer and continues to step 730 to wait for a user selection or the expiration of the timer. After one of these events occurs, the routine continues to step 735 to determine if there was a user selection.
  • If not, the routine continues to step 740 to retrieve a current wait screen from the IAG server.
  • the wait screen displays a series of video clips beginning with a promotional video for the store and then a promotional video for the creators of the IAG system.
  • the wait screen then continues through a series of advertisements which can vary with each construction of the wait screen.
  • the wait screen can include an unlimited number of separate advertisements which can be shown sequentially or concurrently, and that the particular advertisements shown for a constructed wait screen can be chosen in a variety of ways.
  • In step 745 the routine displays the received wait screen and then continues to step 750 to set a second timer.
  • In step 755 the routine waits for either a user selection or the expiration of the second timer.
  • When the wait screen is first constructed, it includes a link so that any selection by the user will return the display to the previously displayed screen.
  • If the second timer expires, the routine continues to step 765 to change all links on the wait screen to point to the stored main screen. Alternately, the main screen could be regenerated each time it is displayed rather than storing and using a single main screen.
  • After step 765, the routine continues to step 767 to wait for a user selection.
  • In step 770 the routine retrieves from the IAG server the screen indicated by the current link on the wait screen. For example, if the screens are web pages, the link could indicate the appropriate web page or an executable that will construct the page.
  • the routine then continues to step 790 to display the retrieved screen, and then returns to step 725. If it was instead determined in step 735 that a user selection had occurred, the routine continues to step 775 to determine if the user requested the program to exit. If not, the routine continues to step 785 to retrieve the screen indicated by the link associated with the selection made by the user, and the routine continues to step 790. If the routine instead determined in step 775 that the user had requested the program to exit, the routine ends at step 780.
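The steps of the IAG Displayer routine can be summarized as a small event loop. The following is a loose, hypothetical simulation rather than the patent's implementation: `fetch` stands in for requests to the IAG server, and the scripted `events` queue stands in for user selections and inactivity-timer expirations.

```python
from collections import deque

def iag_displayer(fetch, events):
    shown = []
    main = fetch("main")                 # steps 705-710: retrieve the main screen
    shown.append(main)                   # steps 715-720: store and display it
    while events:
        event = events.popleft()         # steps 725-735: selection or timeout
        if event == "timeout":           # inactivity: switch to a wait screen
            previous = shown[-1]
            shown.append(fetch("wait"))  # steps 740-745
            nxt = events.popleft() if events else "timeout"  # steps 750-755
            if nxt == "timeout":         # second timer also expired
                if events:
                    events.popleft()     # step 767: wait for any selection...
                shown.append(main)       # step 765: links point to stored main
            else:
                # an early selection follows the wait screen's initial link
                # back to the previously displayed screen (steps 770, 790)
                shown.append(previous)
        elif event == "exit":            # steps 775-780
            return shown
        else:
            shown.append(fetch(event))   # steps 785, 790: follow the selection
    return shown
```

The scripted queue makes the flow testable: a timeout produces the wait screen, a second timeout retargets every link at the stored main screen, and the next touch returns the display to it.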
  • FIG. 8 is a flow diagram of an embodiment of the IAG Constructor routine 800.
  • the IAG Constructor routine executes on an IAG server and receives requests from an IAG client to create a new screen, determines whether the screen is a main screen, a wait screen, or another screen, constructs the appropriate screen, and sends the screen to the client.
  • the routine begins at step 805 where it receives a request for a screen from an IAG client.
  • the routine then continues to step 810 to identify the particular IAG client. Identifying the client can be done in a variety of ways, such as by an explicit identification along with the screen request, by a cryptographic identifier, or by analyzing the network source of the request.
  • the routine determines whether the screen request is for a main screen.
  • If so, the routine continues to step 820 to invoke the Construct Main Screen subroutine to construct the screen. If not, the routine continues to step 825 to determine if the screen request is for a wait screen. If so, the routine continues to step 830 to invoke the Construct Wait Screen subroutine to construct the screen. If not, the routine continues to step 835 to invoke the Construct Other Screen subroutine to construct the screen. After constructing the requested screen in one of steps 820, 830, or 835, the routine continues to step 840 to send the constructed screen to the IAG client. The routine then stores the constructed screen in step 845 as the last used screen for the IAG client. In step 850, the routine determines whether there are more client requests to receive. If so, the routine returns to step 805, and if not the routine ends at step 855.
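The request-handling loop of the IAG Constructor might be sketched as follows; the builder and `send` callables are hypothetical stand-ins for the subroutines of Figures 9 through 11 and for transmission of the screen to the client.

```python
# Illustrative dispatch loop for the IAG Constructor (steps 805-855).
def iag_constructor(requests, build_main, build_wait, build_other, send):
    last_used = {}                           # step 845: last used screen per client
    for client_id, kind in requests:         # steps 805-810: receive and identify
        if kind == "main":                   # steps 815-820
            screen = build_main(client_id)
        elif kind == "wait":                 # steps 825-830
            screen = build_wait(client_id)
        else:                                # step 835: any other screen
            screen = build_other(client_id, kind)
        send(client_id, screen)              # step 840: send to the IAG client
        last_used[client_id] = screen        # step 845: remember for wait-screen links
    return last_used                         # steps 850-855: no more requests
```

Storing the last used screen per client is what later lets a freshly constructed wait screen link back to the previously displayed screen.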
  • FIG. 9 is a flow diagram of an embodiment of the Construct Main Screen subroutine 820.
  • the Construct Main Screen subroutine is invoked for a particular client, and it retrieves the appropriate screen template for that client, retrieves the appropriate media files to fill the various containers defined in the screen template, includes links for user-activatable screen elements to the appropriate screen templates, and combines the retrieved information to create a main screen.
  • the subroutine begins at step 905 where it retrieves the main screen template for the identified IAG client.
  • the subroutine then continues to step 907 to determine if the template has been customized for this client. If so, the subroutine substitutes the appropriate customized information in each succeeding step of the subroutine.
  • the subroutine retrieves the background image for the template.
  • identifying media files that do not change can be accomplished in a variety of ways, such as by including identifiers or pathnames in the template for such media files.
  • the specific types may be explicitly defined in the template. Alternately, a current list of product types may be retrieved from a database during each generation of the main screen.
  • one or more specific media files for the human avatar may be identified in the template. If specific media files are identified in the template, customization information can indicate alternate files to be used in place of these pre-specified default files.
  • the subroutine then continues to step 930 to determine if there are product containers on the main screen whose contents vary with each main screen generation. If so, the subroutine continues to step 935 to select products for each such container, and then continues to step 940 to retrieve the appropriate media file for each selected product.
  • identifying media files that change with each screen generation can be accomplished in a variety of ways. For example, a group of media files may be pre-selected, and for each generation a particular media file may be randomly chosen from the group. Alternately, a particular container could indicate a category or type of product, and a particular product from that category or type could be selected for each generation. Yet another alternative is to perform calculations to determine appropriate contents of containers based on the context of the contents of other displayed screen elements (e.g., displaying advertising information for products related to the primary product displayed on a screen).
  • if the human avatar includes product-specific information (e.g., demonstrates a product), this product will need to be pre-specified to ensure that the contents of the appropriate container match the actions of the avatar.
  • media files which are actually displayed can be tracked so that during later screen generations other media files are displayed before the previously displayed media files are re-displayed.
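The tracking described in the preceding bullet could be sketched as a rotating selector that displays every media file in a group before any file is re-displayed; the class and method names are invented for illustration.

```python
import random

class RotatingSelector:
    """Chooses a media file for a variable product container at each screen
    generation, tracking what has been shown so that every file in the group
    is displayed before any previously displayed file is re-displayed."""

    def __init__(self, media_files, rng=random):
        self.pool = list(media_files)
        self.unshown = list(media_files)
        self.rng = rng

    def next_file(self):
        if not self.unshown:                 # all files shown: start a new pass
            self.unshown = list(self.pool)
        choice = self.rng.choice(self.unshown)
        self.unshown.remove(choice)
        return choice
```

Over any run of generations equal to the group size, each media file appears exactly once, which is one simple way to realize the "display others before re-displaying" behavior.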
  • After step 940, or if the subroutine determined in step 930 that there were not any variable product containers, the subroutine continues to step 945 to determine if there are containers for the main screen template that include advertisements which vary with each generation of the main screen. If so, the subroutine continues to step 950 to select advertisements for each such container, and then continues to step 955 to retrieve appropriate media files for each selected advertisement.
  • After step 955, the subroutine continues to step 960 to combine the template and the retrieved material to create the main screen.
  • the subroutine then includes links for each user-activatable screen element on the main screen, with the links pointing to the IAG server template for the corresponding product, product type, or advertisement. Links to the appropriate template can be determined in a variety of ways, such as maintaining reference information (e.g., in a database) for each product or advertisement that indicates one or more corresponding media files and the appropriate template, or by always indicating a single executable program that can generate different screens.
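The reference-information approach mentioned above might look like the following sketch; the table keys, media file names, and template names are invented examples, not data from the patent.

```python
# Hypothetical reference table: each product or advertisement maps to its
# media files and to the IAG server template used when its user-activatable
# screen element is selected (one way to implement the links of step 965).
REFERENCES = {
    "prod_42":    {"media": ["prod_42.mpg"],    "template": "product_screen"},
    "type_video": {"media": ["video_icon.jpg"], "template": "product_type_screen"},
    "ad_7":       {"media": ["ad_7.mpg"],       "template": "advertisement_screen"},
}

def link_for(element_id, references=REFERENCES):
    """Return the template link to attach to a user-activatable screen element."""
    return references[element_id]["template"]
```

The alternative noted in the text, always linking to a single executable that generates different screens, would replace the per-entry template names with one fixed target plus the element identifier as a parameter.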
  • In step 970 the subroutine returns.
  • FIG. 10 is a flow diagram of an embodiment of the Construct Wait Screen subroutine 830.
  • the Construct Wait Screen subroutine retrieves the wait screen template for the identified IAG client, retrieves the media files for the non-changing containers, determines a series of media files which are always to be displayed on the wait screen and retrieves those files, selects a series of advertisements that vary with each generation of the wait screen and retrieves corresponding media files, and combines the information to create a wait screen in which the various always displayed and variably displayed media files are played in succession.
  • the subroutine begins at step 1005 where it retrieves the template for the wait screen for the identified IAG client.
  • the subroutine then continues to step 1007 to determine if the template has been customized for this client. If so, the subroutine substitutes the appropriate customized information in each succeeding step of the subroutine.
  • In step 1025 the subroutine retrieves a series of media files that are always displayed when the wait screen is displayed.
  • the subroutine selects in step 1030 a series of advertisements which vary with each generation of the wait screen.
  • identifying media files which are always displayed and media files that change with each screen generation can be accomplished in a variety of ways. For example, various media files can be pre-selected and assigned different priorities and orders, with some media files to always be displayed if the succession of displayed media files reaches their place in the order before a user selection causes the wait screen to be removed.
  • the subroutine then continues to step 1035 to retrieve the media files for the selected advertisements.
  • In step 1040 the subroutine combines the template and the other various retrieved material to create a screen, and in step 1045 the subroutine includes a link to the last used screen for this client for each user-activatable container on the screen.
  • the entire screen is filled with a user-activatable screen element and the various advertisements and other media files are displayed in a portion of the screen on top of the user-activatable container.
  • the subroutine returns in step 1050.
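The wait-screen playlist assembly could be sketched as below, with the always-displayed promotional clips placed ahead of a selection of advertisements that varies with each generation; the function and its parameters are hypothetical.

```python
import random

def build_wait_playlist(always_shown, ad_pool, ad_count, rng=random):
    # steps 1025-1035: media files that always play (e.g., the store and IAG
    # promotional videos) come first, followed by a per-generation selection
    # of advertisements; the combined list is played in succession.
    ads = rng.sample(ad_pool, min(ad_count, len(ad_pool)))
    return list(always_shown) + ads
```

Ordering the always-shown clips first matches the text's priority scheme: those files play if the succession reaches their place in the order before a user selection removes the wait screen.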
  • FIG. 11 is a flow diagram of an embodiment of the Construct Other Screen subroutine 835.
  • the Construct Other Screen subroutine retrieves the template indicated by the link for the selected screen element, retrieves the various media files for containers on the screen which do not change, retrieves a video clip for the human avatar if appropriate, selects and retrieves media files for any containers whose contents vary with each generation of the screen, and combines all of the material to create a current instantiation of the screen.
  • the subroutine begins at step 1105 where it retrieves the template indicated by the link for the selected screen element.
  • the subroutine then continues to step 1107 to determine if the template has been customized for this client. If so, the subroutine substitutes the appropriate customized information in each succeeding step of the subroutine.
  • In step 1110 the subroutine retrieves a background for the template, and continues to step 1115 to retrieve button icons for control buttons on the screen.
  • In step 1120 the subroutine retrieves other media files for containers in the template whose contents do not change with generation of different screens.
  • In step 1125 the subroutine determines if the template includes a container for the human avatar UI guide. If so, the subroutine continues to step 1130 to retrieve an appropriate video clip for the current template. In alternate embodiments, audio clips or other types of media could instead be retrieved. After step 1130, or if the subroutine determined in step 1125 that there was not a human avatar UI guide, the subroutine continues to step 1135 to select products, advertisements, and product categories for all other containers in the template.
  • The subroutine then continues to step 1140 to retrieve media files for the selections.
  • In step 1145 the subroutine combines the template and the other retrieved material to create the screen.
  • The subroutine then includes links in step 1150 for each container with a user-activatable screen element, with the links to the appropriate IAG server templates for the corresponding product, product type, or advertisement displayed in the screen element.
  • In step 1155 the subroutine returns.
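The overall assembly performed by the Construct Other Screen subroutine could be sketched as follows. The template and media structures, the key names, and the `pick` selection function are all invented for illustration; the patent does not define these data formats.

```python
# Hypothetical sketch of screen assembly (Figure 11): a template drives which
# media files are retrieved and combined into the finished screen.
def construct_other_screen(template, customizations, media, pick):
    tmpl = dict(template)
    tmpl.update(customizations.get(tmpl["name"], {}))        # step 1107
    screen = {
        "background": media[tmpl["background"]],             # step 1110
        "buttons": [media[b] for b in tmpl["buttons"]],      # step 1115
        "fixed": [media[c] for c in tmpl["fixed"]],          # step 1120
    }
    if tmpl.get("avatar"):                                   # steps 1125-1130
        screen["avatar"] = media[tmpl["avatar"]]
    # steps 1135-1140: select and retrieve media for the variable containers
    screen["variable"] = [media[pick(c)] for c in tmpl["variable"]]
    screen["links"] = dict(tmpl.get("links", {}))            # step 1150
    return screen                                            # step 1155
```

Passing the selection policy in as `pick` keeps the template-driven assembly separate from the per-generation product and advertisement choices described for steps 1135 and 1140.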

Abstract

A system for providing an avatar computer guide system that is integrated with an interactive computer program. The system provides a human or anthropomorphic avatar that moves about the UI screens of a computer program and interacts with various displayed screen elements. As the avatar interacts with the screen elements, it explains and demonstrates how to use the user-activatable screen elements to perform various operations with the computer program. The user-activatable screen elements with which the avatar interacts are the same screen elements with which the user can interact, and these screen elements are selectable by the user at any time. Thus, the avatar acts as a guide to operating the functioning interactive computer program. This guidance from the avatar of the system can supplement a separate help system or eliminate the need for such a system.

Description

METHOD AND SYSTEM FOR PROVIDING AN AVATAR INTERACTIVE COMPUTER GUIDE SYSTEM
TECHNICAL FIELD
The present invention relates generally to computer help systems, and more particularly to providing an avatar guide system that is integrated with an interactive computer program.
BACKGROUND OF THE INVENTION
While computer system user interfaces (UIs) have improved in recent years, personal computer systems remain difficult to use in many circumstances. Determining how to efficiently interact with an unfamiliar computer program can be difficult for even an experienced computer user. For those people generally unfamiliar with personal computer systems, and particularly for those people leery of computers, interacting with an unfamiliar computer program can be a daunting task.
Many computer programs on personal computer systems provide access to an electronic computer help system in order to assist users in using the computer program. These help systems are typically separate from the computer program, requiring the user to leave the normal UI for accessing functionality of the computer program and to interact with a separate UI for the help system. In addition, the capabilities and design of such help facilities can vary widely. Some help systems include a tutorial program which a new user can consult for basic information about how to operate the computer program. Other types of help systems make various documentation (e.g., a user's manual) available electronically, including features such as indexes, tables of contents, and facilities for searching for particular terms. Unfortunately, such help systems suffer from a variety of problems. For example, a user must locate and execute the computer program, locate the separate help system, and invoke the appropriate help system functionality to obtain the information desired. A novice user may not even be able to navigate the operating system of the computer sufficiently well to locate and execute the program initially, not to mention invoking and using a separate help system. In addition, the need to switch back and forth between a computer program and its separate help system can be confusing and time consuming. Finally, even if the help system can instruct a user how to perform a series of steps in order to accomplish a particular goal, it may be difficult for users to remember these steps after they leave the help system and return to the computer program.
One method of assisting users who are leery of computer systems involves the use of an avatar (i.e., a graphical representation of an entity). For example, when multiple users interact in a networked multi-user game, it is common for each user to choose an avatar to visually represent them when their character is present on a screen. Such avatars are typically still images (e.g., a celebrity's face or an animal) or animated characters which can move about the screen. In addition to representing users, avatars can also be displayed to represent the computer or an executing program, including the operating system or an application program. For example, an executing program could include an animated avatar face that is displayed on the computer screen, and if the program is executing on a sound-capable computer, the avatar could include an audio greeting to the user with the audio component synchronized with the moving animated face. Such avatars encourage users to anthropomorphize the computer or computer program, thus making the user less leery about interacting with the computer. However, such avatars have a variety of drawbacks. For animated avatars, it can require a significant amount of time for software developers to create the avatar, particularly if the avatar has details such as moving lips that are to be synchronized with audio. In addition, some users will find such animations childish and tiresome. Moreover, rendering the series of frames necessary for realistic animations requires significant processing power by the computer because typical animations include only a few predefined still images, and use tween software to calculate intermediary images between predefined images. The requirement for significant processing power limits the use of such avatars to high-powered computers and restricts the processing power available to execute other computer programs. 
As for still-image avatars, their use requires less development time and less computer processing power, but such avatars are also less effective at conveying information to the user (e.g., they cannot demonstrate human-like movements) and are less effective at providing an anthropomorphic facade to the computer or program.
For environments where information is to be presented to passers-by, informational kiosks with touch-sensitive screens can be used to convey information. Such systems have become common in public places such as airports and visitor information centers. Such kiosks typically remove most of the computer system from the view of the user, leaving only the display accessible. The kiosk can then display various screens of information and instructions, with each screen including at least one user-activatable button screen element. The user can then select such a button (i.e., make it the current focus) and activate it (i.e., invoke the corresponding action) by touching the portion of the screen on which the button is displayed. In order to provide a rudimentary guide system for assisting users in navigating the screens, such kiosks typically use either self-explanatory icons for buttons (e.g., a picture of a bus to access bus schedules) or a variety of instructional text on the screen to indicate the functionality of the various buttons.
While such kiosk systems can be effective at presenting limited amounts of information to users, these systems have various drawbacks. Since such kiosks must be useful to first-time users, the UIs for such kiosks must be extremely limited. For example, kiosk UIs typically include a few user-activatable buttons with intuitive icons. If the user wishes to locate information that is not accessible from a button on the current screen, it can be very difficult to locate the information. In addition, such kiosks typically require the user to select and activate a UI button to begin a session. For novice users leery of computers, the need to immediately interact with the kiosk in order to use it can be daunting.
SUMMARY OF THE INVENTION
Some embodiments of the present invention provide a method and system for providing an avatar computer guide system that is integrated with an interactive computer program. The system provides a human or anthropomorphic avatar that moves about the UI screens of a computer program and interacts with various displayed screen elements. As the avatar interacts with the screen elements, it explains and demonstrates how to use the user-activatable screen elements to perform various operations with the computer program. The user-activatable screen elements with which the avatar interacts are the same screen elements with which the user can interact, and these screen elements are selectable by the user at any time. Thus, the avatar can act as a guide to operating the functioning interactive computer program. This type of guidance from the avatar can supplement a separate help system for the computer program or eliminate the need for such a help system. In addition, the avatar can attract new users and interest them in using the computer program.
In one embodiment, the system is integrated with a computer program displaying a current screen. In this embodiment, the system displays at least one product screen element on the current screen, with each product screen element representing a physical product. The system then displays at least one control screen element on the current screen, with each control screen element representing a screen other than the current screen. After the displaying of the product screen and control screen elements, the system displays a full-motion video image of a human avatar on the current screen such that the displayed human avatar interacts with the displayed product screen and control screen elements. In addition, the system plays an audio segment synchronized with lips on the displayed human avatar to assist the user by audibly explaining to the user how to use the input device to activate the displayed product screen element in order to obtain information about the physical product represented by the displayed product screen element and how to use the input device to activate the displayed control screen element in order to show on the display the screen represented by the displayed control screen element. In this embodiment, the human avatar is displayed in such a manner that the current screen resembles a television program.
The avatar can be implemented in a variety of ways. For example, in one embodiment, a human avatar is implemented as a pre-recorded full-body, full- motion video image of a live person, and in an alternate embodiment a human avatar is implemented as a computer-generated representation of a human. In other embodiments, an avatar can be a recorded animation of an anthropomorphic figure that is played at the time of execution.
In addition, the system can be used in a variety of settings. In one embodiment, the system is used as part of a kiosk for consumers in a retail store, providing product and store information as well as displaying various advertisements. In this embodiment, an avatar may interact with and demonstrate how to use various products in the store and how to move about the store to find products. Alternately, the system can be used as a guide to a physical location, as a guide to goods or services not located in a single place, as a virtual shopping cart, as a guide to a virtual location, in movie theaters, on interactive airplane screens, etc.
In another embodiment, the avatar can be customized in a variety of ways, such as to reflect the current user or the current geographic location of the system. The customization can include both alterations to the appearance of the avatar as well as to auditory capabilities of the avatar. In still another embodiment, the system includes one or more input devices which can receive information from or related to a user and then vary the avatar or computer program to reflect the information, such as a bar code scanner so that a user can scan the UPC code on a product to immediately obtain information about that item. In yet another embodiment, the system provides one or more wait screens which can be displayed when the computer system is not currently interacting with a user.
BRIEF DESCRIPTION OF THE DRAWINGS
Figures 1 through 5 are examples of user interface screens of a computer program containing an Integrated Avatar Guide (IAG) system.
Figure 6 is a block diagram illustrating an embodiment of the IAG system of the present invention.
Figure 7 is a flow diagram of an embodiment of the IAG Displayer routine 700.
Figure 8 is a flow diagram of an embodiment of the IAG Constructor routine 800.
Figure 9 is a flow diagram of an embodiment of the Construct Main Screen subroutine 820.
Figure 10 is a flow diagram of an embodiment of the Construct Wait Screen subroutine 830.
Figure 11 is a flow diagram of an embodiment of the Construct Other Screen subroutine 835.
DETAILED DESCRIPTION OF THE INVENTION

An embodiment of the present invention provides a method and system for providing an avatar computer guide system that is integrated with an interactive computer program. In particular, an Integrated Avatar Guide (IAG) system provides a human or anthropomorphic avatar that moves about the UI screens of a computer program and interacts with (i.e., selects and/or activates) various displayed screen elements. As the avatar interacts with the screen elements, it explains and demonstrates how to use the user-activatable screen elements to perform various operations with the computer program. The user-activatable screen elements with which the avatar interacts are the same screen elements with which the user can interact, and these screen elements are selectable by the user at any time. Thus, the avatar acts as a guide to operating the functioning interactive computer program. This guidance from the avatar of the IAG system can supplement a separate help system or eliminate the need for such a system.

In one embodiment, the IAG system is used as part of a kiosk for consumers in a retail store, providing product (i.e., goods or services) and store information as well as displaying various advertisements. In this embodiment, the avatar may interact with and demonstrate how to use various goods or services available in or through the store, as well as how to move about the store to find products. In one embodiment, the avatar is a full-body, full-motion video clip of a live person, presented in a manner so that the human avatar seamlessly interacts with screen elements that are not part of the video clip and provides a computer program UI that resembles a television program. A computer program using the IAG system to supplement its UI has a main screen that is initially displayed to the user, and typically has a variety of other screens accessible by user interaction with the program.
For example, a consumer kiosk at a retail store would typically have informational screens about the various departments in the store (e.g., music and home electronics) as well as informational screens for each product which is available at or through the store. Each screen has a background (e.g., a solid color or an image) and a variety of screen elements displayed on top of the background. In addition to the avatar screen element, most screens include instructional screen elements that provide information about using the program, control screen elements that can be used to access program functionality, and information screen elements related to the type of information the program is providing. A screen element can be displayed using any type of media which can be presented to the user, such as an icon, an image, a video clip, an audio clip, an animation, a display screen of an executing software program, etc. Some screen elements, such as control screen elements, are user-activatable, allowing a user to select and activate them and thus affect the execution of the program (e.g., changing screens). Typically, a user-activatable screen element will "flare" (i.e., change appearance) to indicate when it is selected. For example, in some UIs a single-click on a screen element will select it, and a double-click or an additional click on the screen element will activate it. For touch-sensitive screens, placing a finger on the screen over a screen element will typically select it and removing the finger will typically activate it. Instructional screen elements are not user-activatable, instead merely providing an informational source (e.g., "Touch A Displayed Product Below"). Information screen elements, such as representations of products or store departments in a retail store, can be either user-activatable or non-user-activatable. When a user-activatable screen element is activated, the execution of the program is accordingly modified.
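The select/activate semantics just described for a touch-sensitive screen can be sketched as follows. This is an illustrative sketch only; the class and method names are assumptions, not part of the specification.

```python
# Illustrative sketch of the touch-screen select/activate semantics: touching a
# user-activatable element selects it (it "flares"); lifting the finger
# activates it. Instructional elements never flare. All names are hypothetical.

class ScreenElement:
    def __init__(self, name, user_activatable, on_activate=None):
        self.name = name
        self.user_activatable = user_activatable
        self.on_activate = on_activate  # callback run when the element is activated
        self.flared = False             # True while the element is selected

    def touch_down(self):
        """Select the element; user-activatable elements flare in response."""
        if self.user_activatable:
            self.flared = True
        return self.flared

    def touch_up(self):
        """Deselect and, for a selected user-activatable element, activate."""
        if self.user_activatable and self.flared:
            self.flared = False
            if self.on_activate:
                self.on_activate()
        self.flared = False

events = []
button = ScreenElement("Main Screen", True, lambda: events.append("show main screen"))
label = ScreenElement("Touch A Displayed Product Below", False)

button.touch_down()   # the button flares to show it is selected
button.touch_up()     # lifting the finger activates it
label.touch_down()    # instructional elements do not flare
```

The same element model would serve mouse-based UIs, with single-click mapped to selection and double-click to activation.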
Like other screens, the main screen of a computer program typically includes a variety of instructional, information, and control screen elements. The IAG avatar moves about the screen, often accompanied by appropriate corresponding audio, explaining and demonstrating how to interact with the user-activatable screen elements. For example, the avatar may walk over to an information screen element and explain that activating the screen element will display particular information. The avatar can also demonstrate how to select the screen element, with it flaring in response, and how to activate the screen element to display the information. If the IAG system is implemented on a computer system with a touch-sensitive screen, the avatar can demonstrate activating a displayed screen element by touching the edge or the interior of the screen element. The avatar could also approach and select a control screen element, such as a Main Screen button that returns the UI to the main screen. At any time, the user can select any user-activatable screen element to exert control over the UI.
In addition to selecting and activating user-activatable screen elements, the avatar can interact with screen elements in other ways. For example, the avatar could approach a screen element that represents a physical product and that currently displays the front of the product. The avatar could interact with the product screen element to cause the display of the product to rotate, thus displaying the sides or the back of the product. The IAG system also supports layering in which a screen element can be placed on top of another screen element, thus allowing the avatar to move in front of or behind other screen elements. In one embodiment, the avatar can build a screen by carrying various screen elements (e.g., control and product buttons) and placing them about the screen. If the screen element carried by the avatar is a user-activatable screen element, the user can select and activate the screen element as soon as it is present on the screen.
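The layering behavior described above can be sketched as a simple z-order: each element carries a depth value, and the avatar is drawn in front of or behind other elements by adjusting its position in that order. The names and values below are illustrative assumptions.

```python
# Hypothetical sketch of screen-element layering: elements are composited
# back-to-front by z-order, so moving the avatar's z value places it in front
# of or behind other elements.

class Layer:
    def __init__(self, name, z):
        self.name, self.z = name, z

def paint_order(layers):
    """Return element names back-to-front, as they would be composited."""
    return [layer.name for layer in sorted(layers, key=lambda layer: layer.z)]

background = Layer("background", 0)
product = Layer("product button", 1)
avatar = Layer("avatar", 2)            # avatar drawn in front of the button

screen = [background, product, avatar]
front = paint_order(screen)            # avatar painted last, i.e., on top

avatar.z = 0.5                         # move the avatar behind the button
behind = paint_order(screen)           # avatar now painted before the button
```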
In addition to demonstrating how to interact with and operate the UI of the computer program, the avatar can also demonstrate how to interact with the type of information being presented. For example, in a consumer kiosk for a retail store, the avatar could carry and use a product such as a cellular phone, thus demonstrating the benefits of the product. Alternately, the avatar could select a product that can be previewed via the UI, such as a music selection, a video tape, a video game, or a software program. In this situation, the avatar could play or use the corresponding product, thus demonstrating not only how to use the product but also how the product functions. If the IAG system is instead a visitor kiosk in an airport, the avatar could select locations on a map, change the background of the screen to display that location, and demonstrate how and where to accomplish a goal at that location (e.g., buying theater tickets at a downtown ticket window).
In one embodiment, the behavior of the avatar is pre-defined, such as by pre-recording a live human actor who walks about and makes various planned gestures or by pre-generating the frames of an anthropomorphic animation. In this embodiment, the IAG system can implement the avatar by playing a pre-recorded video clip, such as with streaming media or by pre-loading the clip. When playing a video image (i.e., a series of predefined individual frames), the video can be displayed as full-motion video (i.e., at least 30 frames a second) or as freeze-frame video (i.e., more than one frame a second but less than 30 frames a second). Full-motion video, such as that typically seen on televisions, is sufficiently rapid so that individual frames are not distinguishable to the average viewer during normal playback. Video clips can be stored in a variety of video formats, such as MPEG, Motion JPEG, QuickTime and AVI. Typically, video images are displayed using a codec that decodes the stored, compressed digital information into displayable frames.
When the avatar is implemented using a pre-recorded or pre-generated video clip, the IAG system will place appropriate screen elements so as to correspond to the gestures of the avatar. For example, if it is known that the avatar will walk to the upper right corner of the screen and gesture to its right 15 seconds into the video clip, the IAG system can place a user-activatable button at that location and cause it to flare at that time. In this embodiment, the interaction of the avatar with other screen elements that are not part of the video clip can be performed sufficiently seamlessly so that the UI resembles a television program. Alternately, in other contexts or on certain screens the avatar may not be present at all, or may be present in a manner other than full-body, full-motion video (e.g., audio only, head only, or hand only). Instead of or in addition to audio explanations, the avatar could also provide non-audible explanations such as through the use of sign language. Those skilled in the art will appreciate that one or more avatars can be displayed in a variety of ways, and can interact with other screen elements in a variety of ways.
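One way to realize this coordination is a cue table keyed by clip time: at known timestamps the system places a button or makes it flare, so the avatar's recorded gesture appears to touch a live, selectable element. The sketch below is illustrative; the cue format and timestamps are assumptions.

```python
# Hypothetical sketch of coordinating screen elements with a pre-recorded clip:
# a cue table lists actions keyed to clip timestamps (e.g., flare a button 15
# seconds in, when the recorded avatar is known to gesture toward that corner).

cues = [
    (15.0, "place", "product button"),    # button appears in the upper right
    (15.5, "flare", "product button"),    # button flares as the avatar gestures
    (18.0, "deflare", "product button"),  # button de-flares afterward
]

def cues_between(cues, last_time, now):
    """Return the cues whose timestamps fall in (last_time, now]."""
    return [c for c in cues if last_time < c[0] <= now]

fired = []
clock = 0.0
for now in (10.0, 16.0, 20.0):   # simulated playback polling times
    fired.extend(cues_between(cues, clock, now))
    clock = now
```

Polling the playback clock and firing any cues that have elapsed since the last poll keeps the element behavior synchronized even if poll times do not land exactly on cue timestamps.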
The user can also interact with an IAG system in a variety of ways in addition to selecting and activating user-activatable screen elements. In one embodiment, the computer includes a bar code scanner and the user can scan the UPC code on a physical item to immediately obtain information about that item. Alternately, other input devices such as a keyboard can be used to enter text that can be displayed or searched for. If the computer system includes the appropriate input devices, an image of the user can be entered into the computer and displayed as the avatar (e.g., superimposing the user's face over that of the default avatar). The IAG system could then demonstrate how selected products would appear on that user (e.g., by superimposing sunglasses or a baseball cap over the user's image). Other input means (e.g., voice recognition, a numeric keypad, a mouse, a motion detector, an optical eye, a virtual movement detector, a card reader, handwriting recognition, etc.) can also be used to obtain information from or about the user.
In one embodiment, the IAG system also provides one or more wait screens which can be displayed when the computer system is not currently interacting with a user. In this way, the computer program can be executing and displaying a screen before a user arrives, thus eliminating the need for the user to locate and execute the program. The wait screens can be used for a variety of purposes such as to display default information (e.g., a map of the current location), to maintain a particular ambiance (e.g., a soothing background image and corresponding music), or to attract a user to the computer (e.g., display advertising information). When first executed, the program can be configured to display either the main screen or a wait screen. When a user begins to interact with the computer (e.g., by touching a touch-sensitive computer screen when a wait screen is displayed), the wait screen will be replaced with an appropriate non-wait screen. This non-wait screen will typically be determined by previous user interactions with the computer. For example, if the program has displayed a screen other than the wait screen longer than a predetermined amount of time without user input, the IAG system may assume that no user is present and thus change the display from the current screen to a wait screen. However, if the user promptly indicated that they wished to continue using the program (e.g., by touching a touch-sensitive screen), the previously displayed screen would be redisplayed. If no user input was received for another predetermined period of time, however, the IAG system would instead assume that the next user input would correspond to a new user and would thus display the main screen when the user input was received.
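The wait-screen behavior just described can be sketched as a small state machine. The timeout values, state names, and method names below are illustrative assumptions, not taken from the specification.

```python
# Hypothetical sketch of wait-screen handling: after a first idle period the
# current screen is swapped for a wait screen; prompt input restores the
# previous screen, while input after a second idle period is treated as a new
# user and shows the main screen. Timeouts are made-up values.

IDLE_TIMEOUT = 60.0       # seconds without input before showing the wait screen
NEW_USER_TIMEOUT = 120.0  # further idle time after which input means a new user

class ScreenManager:
    def __init__(self):
        self.current = "main"
        self.previous = None
        self.idle_since_wait = 0.0

    def tick(self, idle_seconds):
        """Called periodically with the time since the last user input."""
        if self.current != "wait" and idle_seconds >= IDLE_TIMEOUT:
            self.previous, self.current = self.current, "wait"
            self.idle_since_wait = 0.0
        elif self.current == "wait":
            self.idle_since_wait = idle_seconds - IDLE_TIMEOUT

    def user_input(self):
        """A touch while the wait screen is shown resumes or restarts."""
        if self.current == "wait":
            if self.idle_since_wait < NEW_USER_TIMEOUT and self.previous:
                self.current = self.previous  # same user: resume where they were
            else:
                self.current = "main"         # assume a new user
        self.previous = None

mgr = ScreenManager()
mgr.current = "product info"
mgr.tick(65.0)       # idle past the first timeout: wait screen appears
mgr.user_input()     # prompt touch: the previous screen is restored
```

If the system can instead sense the user's presence directly, the timeouts can be replaced by presence events driving the same transitions.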
If the IAG system is able to sense whether a user is present or not, then the appropriate screens can be displayed accordingly, such as always returning to the previously displayed screen or never switching to the wait screen while a user remains at or near the computer. The IAG system can also provide various types of customization for different executing copies of the same computer program, or for different users of a particular copy. For example, the same program can be customized for different sublocations at a particular location (e.g., different departments at a retail store or different retail stores in a chain of stores) in terms of the types of screen elements displayed or the background for any screen. The avatar can also be customized to reflect different situations, such as using different races and genders for the avatar at different locations, or varying only the audio portion of the avatar to reflect different languages or regional accents. If the avatar demonstrates use of a product specific to a certain type of user, such as children's toys or clothes, the avatar may morph to a child of the appropriate age. The persona of the avatar can also be modified to suit different situations. For example, the avatar can take on the persona of an expert (e.g., Bob Vila for home improvement tools), a celebrity (e.g., Michael Jordan to endorse shoes), or a friend (e.g., advising you on relative advantages of different products based on their features). In addition, the avatar may be varied based on the user, such as presenting a child avatar for a child user. User information can be gathered in a variety of ways, such as determining demographic information about the user based on their selections. Alternately, the computer system may be able to sense the identity of the user (e.g., via a magnetic-strip card or information entered by the user).
With these various customizations, the avatar can change to best demonstrate the information being conveyed and to best assist the user in retrieving the information they desire. In addition to using the IAG system as part of a kiosk for consumers in a retail store, the IAG system can be used in a variety of other ways. These include the following non-exhaustive list of computer displays using the IAG system.
• Airplane seat back or arm displays that replace the in-flight magazine or movie.
• Units on shopping carts that allow a consumer to receive information about products, including their location and availability. These units could also track purchases.
• Displays in movie theater lobbies highlighting upcoming movies and offering information, ads and promotions to people waiting.
• A dedicated kiosk for information at a college, civic center or any such venue.
• A kiosk that allows a company to make detailed technical information available to its dealers as well as technical manuals available to technicians and service people.
• An off-line ordering system for custom or large-ticket items.
• A membership kiosk in a bank or credit union.
• An information kiosk in a hotel lobby or room where patrons could look up services and room availability, make reservations, and find civic information about shows and other events in the area.
Those skilled in the art will appreciate that the IAG system can be used in a variety of other applications.
Figures 1 through 5 are examples of user interface screens of a computer program containing an Integrated Avatar Guide (IAG) system. Figure 1 illustrates an IAG Main Screen 100 for a consumer kiosk with a touch-sensitive screen at a retail store. These screens are presented for illustrative purposes only, and those skilled in the art will appreciate that elements of the screens can be varied in a variety of ways, including location and appearance. In addition, the use of the IAG system is not limited to the various elements of this exemplary embodiment, including use as a consumer kiosk, at a retail store, or with a touch-sensitive screen. The main screen includes a background 105 and various screen elements superimposed on top of the background. A human avatar screen element 110 serves as an interactive guide to the operation of the computer program. In this exemplary embodiment, the human avatar is a full-body, full-motion video clip that was filmed with a live actor, and the main screen is displayed as a World Wide Web page including containers for each of the screen elements. The human avatar can be displayed in a variety of ways in this embodiment, such as with a pre-loaded video file in a format such as MPEG or as streaming video. In this exemplary embodiment, the same video clip is always used as the video clip for the main screen. In other embodiments, different video clips could be used, either simultaneously, sequentially, selected randomly from a fixed or unlimited set, or based on any other algorithmic process. In the exemplary embodiment, the human avatar moves about a portion of the screen consisting of most of the left half of the screen. Thus, a web page template for the main screen can implement the human avatar by defining a container for this left portion of the screen and associating a corresponding video file with that container.
As the video file is played, the human avatar moves about her portion of the screen and audibly explains how to touch user-activatable screen elements in order to retrieve additional information about the retail store and the products it contains. The avatar can also walk out of the video and back into it. If the edge of the container is near the edge of the screen, the avatar will thus appear to be walking on and off the screen. Most of the right portion of the screen consists of four containers for user-activatable product information screen elements 125. Unlike the main screen container for the human avatar, these containers are not statically associated with particular content, so the same products are not necessarily displayed each time the main screen is generated from the main screen template. Instead, the IAG system or the program can specify any four products for the product information screen elements each time the main screen is generated. On main screen 100, the current products include a videotape, a television, a phone, and a CD. Those skilled in the art will appreciate that other types of products, such as goods not physically present in the store or various services available to the user, can be displayed. In addition, information other than products can also be made available to users, such as a directory of store contents.
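The contrast between the statically bound avatar container and the dynamically filled product containers can be sketched as follows. The template structure, container names, and media file name are illustrative assumptions only.

```python
# Hypothetical sketch of a main-screen template: the avatar container is
# statically bound to a video clip, while the product containers are filled
# with whichever products are chosen each time the screen is generated.

main_screen_template = {
    "avatar": {"static": True, "media": "main_avatar.mpg"},  # assumed file name
    "product_1": {"static": False},
    "product_2": {"static": False},
    "product_3": {"static": False},
    "product_4": {"static": False},
}

def generate_screen(template, products):
    """Bind static containers to their media and dynamic ones to the
    products chosen for this particular generation of the screen."""
    screen = {}
    dynamic = [name for name, c in template.items() if not c["static"]]
    for name, c in template.items():
        if c["static"]:
            screen[name] = c["media"]
    for name, product in zip(dynamic, products):
        screen[name] = product
    return screen

screen = generate_screen(main_screen_template,
                         ["videotape", "television", "phone", "CD"])
```

Regenerating the screen with a different product list would change only the dynamic containers, leaving the avatar binding untouched.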
Since the products to be displayed in the exemplary embodiment will be determined after the video clip for the human avatar is filmed, the specific products to be shown will not be known at the time of filming. However, the human avatar can walk to the center of the screen near the product information screen elements and explain how the user can touch the screen over a product information screen element to retrieve information about that product. The human avatar can even demonstrate such a selection by pointing to a product information screen element at the edge of the video clip and appearing to touch the edge of the product information screen element. In response, the product information screen element will flare as if it was selected by the user. In one embodiment, the human avatar will demonstrate how to activate a product information screen element, and in response, the main screen will then change to a product information screen for the selected product. Alternately, the product information screen element can de-flare when the human avatar no longer points to it. This interaction can be based upon a predetermined time in the video clip in which the human avatar points toward a particular container, or the flaring can be coordinated with the video clip in a variety of other ways. Moreover, even if the particular products are not known at the time of filming, it is still possible for the avatar to interact with screen elements in other ways, such as carrying them and placing them about the screen. For example, the human actor at the time of filming can carry a white circle, and the IAG system at program execution time can layer a particular screen element on top of the circle. This can be performed in a variety of ways, such as pre-calculating the position of the white circle over time as the avatar moves, or by tracking the white circle in real-time as it is displayed.
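The pre-calculated-position approach for the carried white circle can be sketched as follows, with made-up coordinates: the circle's (x, y) position is known at sampled clip times, and the system layers the chosen screen element at the interpolated position for the current playback time.

```python
# Illustrative sketch (coordinates are invented) of layering a screen element
# on top of a marker carried by the filmed actor, using positions
# pre-calculated for sampled clip times and interpolated at playback.

marker_path = [          # (clip time in seconds, x, y) of the carried marker
    (0.0, 100, 400),
    (2.0, 300, 380),
    (4.0, 500, 300),
]

def marker_position(path, t):
    """Linearly interpolate the marker position at playback time t."""
    if t <= path[0][0]:
        return path[0][1], path[0][2]
    for (t0, x0, y0), (t1, x1, y1) in zip(path, path[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return x0 + f * (x1 - x0), y0 + f * (y1 - y0)
    return path[-1][1], path[-1][2]

pos = marker_position(marker_path, 1.0)  # halfway between the first two samples
```

Real-time tracking of the circle would replace the pre-calculated table with per-frame detection, but the layering step at the returned position is the same.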
In addition to the user-activatable product information screen elements, the main screen includes various non-user-activatable information screen elements 115. These screen elements provide information to the user but cannot be activated to affect the operation of the program. For example, the EnterVision information screen element can provide a trade name for this specific computer program or IAG system, while the credit card icon information screen element can indicate that the store accepts credit cards or that a particular form of electronic commerce is available using the computer. Those skilled in the art will appreciate that screen elements which are not user-activatable on one screen may be user-activatable on another screen, and that the screen to be displayed upon selection of a particular user-activatable screen element can vary at different times. For example, from one screen a user-activatable credit card icon screen element could display a printable credit card application, while the same icon screen element may not be user-activatable from other screens.
The instructional screen elements 120 are also not selectable by the user, but do provide information to the user about the operation of the program. For example, the "touch one" instructional screen element indicates to the user that the four product information screen elements are activatable by the user to retrieve additional information. Similarly, the "scan it below" instructional screen element indicates to the user that a product's UPC bar code can be scanned by the scanner attached to the consumer kiosk, and that if a scan occurs the program will then display product information for the corresponding product. The human avatar can walk near these screen elements and indicate them to the user, but the instructional screen elements do not flare in response because they are not selectable by the user.
The template for the main screen also includes a series of user-activatable product type information screen elements 130 that are placed near the bottom of the main screen. In the exemplary embodiment, these product type information screen elements correspond to different departments within the retail store. A user can select any of these product type information screen elements by pressing the touch-sensitive screen where the corresponding screen element is displayed. In response, the program will replace the main screen with a product type information screen corresponding to the selected product type. In a manner similar to the product information screen elements, the human avatar can discuss and interact with the displayed product type information screen elements.
In addition to the other user-activatable information screen elements, the exemplary embodiment also includes a screen of advertisements for daily specials at the store. The user can access this screen by selecting the user-activatable advertisement information screen element 135 labeled "Today's Specials." In response, the program replaces the main screen with a daily special screen including product information and advertisements for the various products on special.
Each screen of the exemplary program also includes at least one user-activatable control screen element 140. On the main screen, the Main Screen control screen element is the only such screen element. These control screen elements allow the user to control the operation of the program. For example, selecting the Main Screen control screen element will always return the UI of the program to the main screen, regardless of the screen displayed when the control screen element is selected. As with other user-activatable screen elements, the human avatar moves near the screen element and demonstrates how to select and activate the screen element. Selecting the Main Screen control screen element 140 as it is currently displayed in the exemplary embodiment will have no effect since the main screen is already displayed. In an alternate embodiment, however, selecting the Main Screen control screen element could cause the main screen to be regenerated from the main screen template, thus causing new products to be selected for the product information screen elements 125.
Those skilled in the art will appreciate that a variety of other types of information can be displayed on a main screen. For example, the main screen could include a map of the retail store, thus showing the user their current physical location and having the avatar demonstrate how to move to a desired department. Alternately, the consumer kiosk could be located at the entrance of the store, with the main screen serving as a greeter to customers as they enter. In addition, any screen element could be located at any location, could be user-activatable or not, and could vary in appearance. For example, the container for the human avatar could fill the entire screen, with other screen elements displayed on top of the video clip for the avatar.
Referring now to Figure 2, the IAG product type information screen 200 for the Home Video department is displayed. This screen is displayed when the home video product type information screen element 130 is activated by either the user or the human avatar. As is apparent, some screen elements remain constant between screens 100 and 200, while other screen elements change. Thus, a product type information screen could be generated completely anew from a product type information screen template, or the screen could instead be created by modifying portions of the main screen as necessary. While the human avatar screen element 210 appears similar to the human avatar on the main screen, the exemplary embodiment uses a different video clip that is associated with this screen template. Thus, a completely different human could be used as the human avatar on different screens.
The human avatar on screen 200 interacts with the other screen elements in a manner similar to that of main screen 100. Since product information for the home video department is displayed, each of the user-activatable product information screen elements 225 includes information related to videos. On the current screen, a still image of the product is shown for each product information screen element. However, those skilled in the art will appreciate that any type of media file could be displayed for a product, such as a video clip (e.g., an advertisement or a portion of the displayed product), an audio clip, etc. Thus, the human avatar walks near the displayed videos and discusses how the user can select a displayed product information screen element to retrieve product information about the corresponding video.
Alternately, rather than discussing how to use the UI, the human avatar can instead discuss the particular products shown, such as how to use the products or the benefits of a particular product. The human avatar 210 is shown holding a videotape which she is discussing. This videotape could be one of the four videotapes shown in the product information screen elements, or it could be a different videotape. In addition, the human avatar can acquire a product to discuss in a variety of ways. For example, the human avatar could walk to the edge of the screen and appear to reach off the screen to pick up a product. Alternately, the human avatar could walk over to a displayed product information screen element, reach out, and appear to retrieve the product. In response, the retrieved video could disappear from its corresponding product information screen element while the human avatar holds the product. In a similar manner, the human avatar could populate the product information screen elements with products by retrieving a video from off screen, carrying the product over to a product information screen element, and placing the product into the screen element. In response, information for the product could appear in that product information screen element.
The product type information screen also displays a series of user-activatable product category information screen elements 230. These screen elements correspond to different categories of products within the home video type of product. In response to a selection of a particular product category information screen element, products of the selected category can be displayed on either a new screen or in the product information screen elements 225 on the current screen. In addition, the current screen includes a new user-activatable control screen element 240 that is labeled "back." Selection of this control screen element removes the current screen and displays the previously displayed screen.
Those skilled in the art will appreciate that a variety of other types of information can be displayed on a product type information screen. For example, the screen could group types of products other than by store department, such as by particular aisles, by manufacturer, by price, by in-store vs. out-of-store availability, by services vs. goods, or by any other conceptual grouping of items. In addition, any screen element could be located at any location, could be user-activatable or not, and could vary in appearance.

Figure 3 illustrates an example of IAG product category information screen 300. While a video image of the human avatar is not displayed on this screen, an audio-only human avatar screen element 310 allows the human avatar to audibly provide information about products and instructions on how to use the program. The left-most column of the screen includes several user-activatable product category information screen elements 330. These screen elements indicate various product categories for the music type of product. In the exemplary embodiment, product categories consist of different musical artists. Alternately, music could be categorized into a variety of groupings such as types of music (e.g., jazz or pop) or musical eras (e.g., the 60's or disco). Since more artists are available than there is room for product category information screen elements, the user-activatable control screen elements 340 of an up arrow and a down arrow are displayed. These arrow control screen elements allow the user to scroll through the entire list of artists. In response to selection of the up arrow, the bottom-most artist is removed from the screen and each other displayed artist is moved to the next lower product category information screen element. A new artist is then added to the top-most product category information screen element. Selection of the down arrow would then return the screen to the previous display.
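The arrow-scrolling behavior just described can be sketched as a sliding window over the full artist list. The artist names and window size below are illustrative assumptions only.

```python
# Hypothetical sketch of arrow scrolling: only a window of the artist list fits
# in the product category screen elements, and the up/down arrows slide that
# window through the full list.

artists = ["Aerosmith", "Beck", "Chicago", "Dylan", "Enya", "Peter Gabriel"]
WINDOW = 4               # assumed number of product category screen elements

def visible(artists, offset):
    """Return the slice of the list currently shown on screen."""
    return artists[offset:offset + WINDOW]

def press_up(offset):
    """Up arrow: drop the bottom-most artist, reveal a new one at the top."""
    return max(offset - 1, 0)

def press_down(offset):
    """Down arrow: the inverse, returning toward the previous display."""
    return min(offset + 1, len(artists) - WINDOW)

offset = 2
before = visible(artists, offset)    # last four artists shown
offset = press_up(offset)
after_up = visible(artists, offset)  # a new artist on top, bottom one dropped
offset = press_down(offset)
restored = visible(artists, offset)  # previous display restored
```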
If the user instead selects a particular artist, such as Peter Gabriel, a list of products for that artist is then displayed in the user-activatable product information screen elements 325 in the next column to the right. User selection of a particular product results in the display of additional information about the product.
Those skilled in the art will appreciate that a variety of other types of information can be displayed on a product category information screen. For example, the screen could group products other than music, and can group music in ways other than by artist. In addition, those skilled in the art will appreciate that different product types may have different numbers and types of product category levels within that product type, including none and more than one. Product category screens could also include information for goods and services not currently available in the store. In addition, any screen element could be located at any location, could be user-activatable or not, and could vary in appearance.
Referring now to Figure 4, IAG product information screen 400 is displayed. In this exemplary embodiment, a particular entertainment software product has been selected, and various information about the product is displayed. For example, product information screen element 425 on the right middle side of the screen can include information such as a video clip of a user interacting with the software, or the UI for the software product as it actually executes. If no additional information is available about the product, the screen element will not be user-activatable. Various other types of product-related information can be displayed in other screen elements. For example, as is shown along the bottom of the screen, user-activatable screen elements can include product and advertisement information for other related products (e.g., related entertainment software products, related non-entertainment software by the same manufacturer, related non-software products such as hardware or clothing with a software logo, etc.), or even for unrelated products. In this embodiment, the IAG system does not display a human avatar on this screen. This can be predefined as a part of the screen, or it can be determined on a dynamic basis. Those skilled in the art will appreciate that a variety of other types of information can be displayed on a product information screen. For example, the screen could display the avatar using the product, could include warranty information for the product or reviews of the product, etc. Product screens could similarly include information such as the number of items in stock for a product or an expected delivery date if back-ordered, as well as pricing and a physical store location of the product. In addition, any screen element could be located at any location, could be user-activatable or not, and could vary in appearance.
Referring now to Figure 5, IAG product comparison screen 500 is shown. This screen may be displayed if the user explicitly selects multiple products and indicates a desire to see a feature comparison. If so, the features of the various products can be compared by the program. Alternately, if the program detects that the user has selected product information for multiple related products in succession, the program could ask whether the user wishes to view such a screen or could display the screen of its own accord.
Those skilled in the art will appreciate that a variety of other types of information can be displayed on program screens. For example, the consumer kiosk could be located at the entrance of the store, serving as a greeter to customers as they enter. In addition, the human avatar can move about the screens and demonstrate to the user how to use the program to retrieve information and how to use products. The similarity of the full-body full-motion video image of a human to a television program will ease the concerns of users leery of computers, and the integrated human avatar guide system can remove the need for a separate help system for the program.
Figure 6 is a block diagram illustrating an embodiment of the IAG system of the present invention. Figure 6 includes an IAG server computer system 600 that provides information to IAG client kiosk computer systems 660, 670, and 680 over network 690. Each IAG client kiosk computer system includes a CPU 661, a memory 662, and input/output devices 666 including a touch-sensitive display 667 and a bar code scanner 668. In the illustrated embodiment, the IAG client kiosk computer systems are located at a retail location in order to provide information to consumers at that location about products offered for sale. The IAG Displayer program 664 is executed in memory, filling the display with a UI screen that includes an integrated IAG system. When an IAG Displayer program is first invoked, it contacts an IAG Constructor program 632 that is executing in the memory 630 of the IAG server computer system.
When the IAG Constructor is contacted by an IAG client, it retrieves various information for that client from the IAG Database 640 on the storage device 626. The IAG Constructor first retrieves various information from the administrative tables 642, such as the main screen template for this client and any available customization information. If the IAG Constructor is to create a screen for the IAG client, the IAG Constructor then retrieves from interface tables 641 the screen templates indicated by the client or the administrative tables. The IAG Constructor then analyzes the containers in the retrieved template and gathers appropriate information from the product tables 643 and advertisement tables 644 in order to populate the template with appropriate screen elements. The IAG Constructor also adds control and instructional screen elements that correspond to the retrieved template, and selects an appropriate instance of the avatar for this template. Finally, the IAG Constructor combines the retrieved information to create the screen, including links for each user-activatable screen element to other appropriate screen templates in the interface tables. The IAG Constructor then sends the completed screen information to the IAG Displayer for display on the display 667.
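The construction sequence described above can be sketched in outline form. The patent does not specify a concrete data layout, so every table, key, and field name in the sketch below is an assumption made for illustration:

```python
# Illustrative sketch of template-driven screen construction, loosely
# following the IAG Constructor steps described in the text. All table
# layouts and field names here are assumptions, not the actual design.

def construct_screen(client_id, template_name, database):
    """Populate a screen template with elements drawn from the database."""
    # Administrative tables may carry per-client customizations that
    # override parts of the stock template retrieved from interface tables.
    custom = database["administrative"][client_id].get("customizations", {})
    template = {**database["interface"][template_name],
                **custom.get(template_name, {})}

    screen = {"template": template_name, "elements": [], "links": {}}
    for container in template["containers"]:
        # Each container names its source table (e.g. products or
        # advertisements) and the item whose media file fills it.
        item = database[container["source"]][container["item"]]
        screen["elements"].append({"container": container["name"],
                                   "media": item["media_file"]})
        # User-activatable elements link onward to the item's own template.
        if container.get("activatable"):
            screen["links"][container["name"]] = item["detail_template"]
    # Attach the avatar clip chosen for this template, if one is defined.
    if "avatar_clip" in template:
        screen["elements"].append({"container": "avatar",
                                   "media": template["avatar_clip"]})
    return screen
```

The completed structure corresponds to the screen information sent to the IAG Displayer for display.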
Those skilled in the art will appreciate that an IAG Displayer and IAG Constructor can be implemented in a variety of ways, such as by using a web browser for the IAG Displayer. In addition, various components and their functionality could be combined or separated, such as allowing the IAG Displayer to retrieve information directly from the IAG Database and to combine the retrieved information to create a screen. Thus, computer systems 600 and 660 are merely illustrative and are not intended to limit the scope of the present invention. The present invention may be practiced with other computer system configurations. Figure 7 is a flow diagram of an embodiment of the IAG Displayer routine 700. The IAG Displayer routine executes on an IAG client and retrieves information from the IAG server to create an appropriate main screen for a computer program, display the main screen to a user, use an avatar guide system to explain the operation of the program to a user, respond to user requests for additional information, and after a period of nonuse switch to a wait screen designed to attract users to the display. In the illustrated embodiment, the avatar is a human avatar that is implemented using pre-recorded video of a live person. The IAG Displayer routine begins at step 705 where the routine contacts an IAG server to retrieve the main screen for this client. In one embodiment, each screen is a web page. The routine continues to step 710 where it receives the main screen from the IAG server. In step 715, the routine stores the main screen and then continues to step 720 to display the screen. In step 725, the routine then sets a timer and continues to step 730 to wait for a user selection or the expiration of the timer. After one of these events occurs, the routine continues to step 735 to determine if there was a user selection.
If no user selection occurs before the timer expires (e.g., after 3 minutes), the routine continues to step 740 to retrieve a current wait screen from the IAG server. In the illustrated embodiment, the wait screen displays a series of video clips beginning with a promotional video for the store and then a promotional video for the creators of the IAG system. The wait screen then continues through a series of advertisements which can vary with each construction of the wait screen. Those skilled in the art will appreciate that the wait screen can include an unlimited number of separate advertisements which can be shown sequentially or concurrently, and that the particular advertisements shown for a constructed wait screen can be chosen in a variety of ways. In step 745 the routine displays the received wait screen and then continues to step 750 to set a second timer. In step 755 the routine waits for either a user selection or the expiration of the second timer. When the wait screen is first constructed, it includes a link so that any selection by the user will return the display to the previously displayed screen. However, if it is determined in step 760 that the second timer has expired before a user selection is received (e.g., after 1 minute), the routine continues to step 765 to change all links on the wait screen to point to the stored main screen. Alternately, the main screen could be regenerated each time it is displayed rather than storing and using a single main screen. After step 765, the routine continues to step 767 to wait for a user selection. After a user selection in step 767 or if it was determined in step 760 that a user selection occurred, the routine continues to step 770 to retrieve from the IAG server the screen indicated by the current link on the wait screen. For example, if the screens are web pages, the link could indicate the appropriate web page or an executable that will construct the page.
The routine then continues to step 790 to display the retrieved screen, and then returns to step 725. If it was instead determined in step 735 that a user selection had occurred, the routine continues to step 775 to determine if the user requested the program to exit. If not, the routine continues to step 785 to retrieve the screen indicated by the link associated with the selection made by the user, and the routine continues to step 790. If the routine instead determined in step 775 that the user had requested the program to exit, the routine ends at step 780.
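The overall control flow of the Displayer routine can be sketched as an event loop. The sketch below simplifies the wait-screen handling (the second timer of steps 750 through 767 is collapsed into the same timeout event), and the function and event names are assumptions for illustration only:

```python
# A minimal sketch of the Displayer's main loop (steps 705-790). Timeouts,
# the event source, and screen retrieval are stubbed so the control flow
# can be exercised directly; none of these names come from the patent.

def run_displayer(fetch_screen, next_event):
    """Drive the screen loop until the user asks to exit.

    fetch_screen(name) returns a screen identifier; next_event() returns
    ("select", link), ("timeout", None), or ("exit", None).
    """
    shown = []
    current = fetch_screen("main")          # steps 705-715: get main screen
    while True:
        shown.append(current)               # step 720: display the screen
        event, link = next_event()          # steps 725-735: timer or input
        if event == "exit":                 # steps 775-780: user exit
            return shown
        if event == "timeout":              # steps 740-745: show wait screen
            current = fetch_screen("wait")
            continue
        current = fetch_screen(link)        # step 785: follow selected link
```

A usage run with a scripted event sequence shows the main screen, a product screen, the wait screen after a timeout, and the main screen again before exit.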
Figure 8 is a flow diagram of an embodiment of the IAG Constructor routine 800. The IAG Constructor routine executes on an IAG server and receives requests from an IAG client to create a new screen, determines whether the screen is a main screen, a wait screen, or another screen, constructs the appropriate screen, and sends the screen to the client. The routine begins at step 805 where it receives a request for a screen from an IAG client. The routine then continues to step 810 to identify the particular IAG client. Identifying the client can be done in a variety of ways, such as by an explicit identification along with the screen request, by a cryptographic identifier, or by analyzing the network source of the request. In step 815, the routine determines whether the screen request is for a main screen. If so, the routine continues to step 820 to invoke the Construct Main Screen subroutine to construct the screen. If not, the routine continues to step 825 to determine if the screen request is for a wait screen. If so, the routine continues to step 830 to invoke the Construct Wait Screen subroutine to construct the screen. If not, the routine continues to step 835 to invoke the Construct Other Screen subroutine to construct the screen. After constructing the requested screen in one of steps 820, 830, or 835, the routine continues to step 840 to send the constructed screen to the IAG client. The routine then stores the constructed screen in step 845 as the last used screen for the IAG client. In step 850, the routine determines whether there are more client requests to receive. If so, the routine returns to step 805, and if not the routine ends at step 855.
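The Constructor's request loop can be summarized as a dispatch over screen kinds plus a per-client record of the last screen served. The sketch below is a hedged outline of steps 805 through 855; the request shape and builder interface are illustrative assumptions:

```python
# Sketch of the IAG Constructor request loop: identify the client (step
# 810), dispatch to the matching builder (steps 815-835), send the screen
# (step 840), and remember it as the client's last-used screen (step 845).
# The request dict layout and builder signatures are assumptions.

def serve_requests(requests, builders, send):
    last_used = {}                               # step 845: per-client record
    for request in requests:                     # steps 805/850: each request
        client = request["client"]               # step 810: identify client
        kind = request.get("screen", "other")    # steps 815/825: classify
        build = builders.get(kind, builders["other"])
        screen = build(client)                   # steps 820/830/835: construct
        send(client, screen)                     # step 840: send to client
        last_used[client] = screen               # step 845: store last screen
    return last_used
```

The stored last-used screen is what the wait-screen links later point back to.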
Figure 9 is a flow diagram of an embodiment of the Construct Main Screen subroutine 820. The Construct Main Screen subroutine is invoked for a particular client, and it retrieves the appropriate screen template for that client, retrieves the appropriate media files to fill the various containers defined in the screen template, includes links for user-activatable screen elements to the appropriate screen templates, and combines the retrieved information to create a main screen. The subroutine begins at step 905 where it retrieves the main screen template for the identified IAG client. The subroutine then continues to step 907 to determine if the template has been customized for this client. If so, the subroutine substitutes the appropriate customized information in each succeeding step of the subroutine. In step 910 the subroutine then retrieves the background image for the template. The subroutine then in step 915 retrieves the video clip for the human avatar UI guide for the main screen. Button icons are then retrieved in step 920 for each product type at the current client location. In step 925, the subroutine then retrieves other media files for the main screen containers whose contents do not change with different generations of the screen (e.g., a Back control screen element).
Those skilled in the art will appreciate that identifying media files that do not change can be accomplished in a variety of ways, such as by including identifiers or pathnames in the template for such media files. In the case of product types, the specific types may be explicitly defined in the template. Alternately, a current list of product types may be retrieved from a database during each generation of the main screen. Similarly to product types, one or more specific media files for the human avatar may be identified in the template. If specific media files are identified in the template, customization information can indicate alternate files to be used in place of these pre-specified default files. The subroutine then continues to step 930 to determine if there are product containers on the main screen whose contents vary with each main screen generation. If so, the subroutine continues to step 935 to select products for each such container, and then continues to step 940 to retrieve the appropriate media file for each selected product. Those skilled in the art will appreciate that identifying media files that change with each screen generation can be accomplished in a variety of ways. For example, a group of media files may be pre-selected, and for each generation a particular media file may be randomly chosen from the group. Alternately, a particular container could indicate a category or type of product, and a particular product from that category or type could be selected for each generation. Yet another alternative is to perform calculations to determine appropriate contents of containers based on the context of the contents of other displayed screen elements (e.g., displaying advertising information for products related to the primary product displayed on a screen).
If the human avatar includes product-specific information (e.g., demonstrates a product), this product will need to be pre-specified to ensure that the contents of the appropriate container matches the actions of the avatar. In addition, those skilled in the art will appreciate that media files which are actually displayed can be tracked so that during later screen generations other media files are displayed before the previously displayed media files are re-displayed. After step 940, or if the subroutine determined in step 930 that there were not any variable product containers, the subroutine continues to step 945 to determine if there are containers for the main screen template that include advertisements which vary with each generation of the main screen. If so, the subroutine continues to step 950 to select advertisements for each such container, and then continues to step 955 to retrieve appropriate media files for each selected advertisement. After step 955, or if the subroutine determined in step 945 that there were not variable advertisement containers, the subroutine continues to step 960 to combine the template and the retrieved material to create the main screen. In step 965 the subroutine then includes links for each user-activatable screen element on the main screen, with the links pointing to the IAG server template for the corresponding product, product type, or advertisement. Links to the appropriate template can be determined in a variety of ways, such as maintaining reference information (e.g., in a database) for each product or advertisement that indicates one or more corresponding media files and the appropriate template, or by always indicating a single executable program that can generate different screens. In step 970 the subroutine returns.
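One way to realize the rotation described above is to track which media files have been shown and only repeat a file once every other candidate has had its turn. This is a sketch of one such strategy, not the patented mechanism, and the class and method names are invented for the example:

```python
import random

# Illustrative rotation over a group of pre-selected media files: each file
# is displayed once before any file is re-displayed, with random order
# within each full pass. All names here are assumptions for the sketch.

class MediaRotation:
    def __init__(self, files, rng=random):
        self.pool = list(files)      # the full pre-selected group
        self.unshown = list(files)   # files not yet shown this pass
        self.rng = rng

    def pick(self):
        """Choose a file not shown since the last full rotation."""
        if not self.unshown:         # every file has been displayed once,
            self.unshown = list(self.pool)  # so begin a new pass
        choice = self.rng.choice(self.unshown)
        self.unshown.remove(choice)
        return choice
```

Three successive picks from a three-file group thus always cover all three files before any repeat.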
Figure 10 is a flow diagram of an embodiment of the Construct Wait Screen subroutine 830. The Construct Wait Screen subroutine retrieves the wait screen template for the identified IAG client, retrieves the media files for the non-changing containers, determines a series of media files which are always to be displayed on the wait screen and retrieves those files, selects a series of advertisements that vary with each generation of the wait screen and retrieves corresponding media files, and combines the information to create a wait screen in which the various always displayed and variably displayed media files are played in succession. The subroutine begins at step 1005 where it retrieves the template for the wait screen for the identified IAG client. The subroutine then continues to step 1007 to determine if the template has been customized for this client. If so, the subroutine substitutes the appropriate customized information in each succeeding step of the subroutine. In step 1010, the subroutine retrieves the background for the wait template. In step 1015, the subroutine then retrieves a button icon that indicates to the user that user selection of the screen will return the user to a non-wait screen. The subroutine then continues to step 1020 to retrieve media files for wait screen containers which do not change with different wait screen generations.
In step 1025, the subroutine then retrieves a series of media files that are always displayed when the wait screen is displayed. The subroutine then selects in step 1030 a series of advertisements which vary with each generation of the wait screen. Those skilled in the art will appreciate that identifying media files which are always displayed and media files that change with each screen generation can be accomplished in a variety of ways. For example, various media files can be pre-selected and assigned different priorities and orders, with some media files to always be displayed if the succession of displayed media files reaches their place in the order before a user selection causes the wait screen to be removed. The subroutine then continues to step 1035 to retrieve the media files for the selected advertisements. In step 1040, the subroutine combines the template and the other various retrieved material to create a screen, and in step 1045 the subroutine includes a link to the last used screen for this client for each user-activatable container on the screen. In one embodiment, the entire screen is filled with a user-activatable screen element and the various advertisements and other media files are displayed in a portion of the screen on top of the user-activatable container. The subroutine returns in step 1050.
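The wait-screen playlist assembly of steps 1025 through 1035 can be sketched as a fixed prefix of always-displayed clips followed by a per-generation selection of advertisements. The function name and clip filenames below are assumptions for illustration:

```python
import random

# Sketch of wait-screen playlist assembly: the always-displayed series
# (e.g. store and system promotional clips) leads, and a selection of
# advertisements that varies with each generation follows. Illustrative
# only; the patent leaves the selection policy open.

def build_wait_playlist(always, ads, count, rng=random):
    """Return the fixed clips followed by `count` ads for this generation."""
    chosen = rng.sample(list(ads), min(count, len(ads)))
    return list(always) + chosen
```

In use, the playlist is played in succession until a user selection removes the wait screen, so low-priority advertisements near the end may never be reached.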
Figure 11 is a flow diagram of an embodiment of the Construct Other Screen subroutine 835. The Construct Other Screen subroutine retrieves the template indicated by the link for the selected screen element, retrieves the various media files for containers on the screen which do not change, retrieves a video clip for the human avatar if appropriate, selects and retrieves media files for any containers whose contents vary with each generation of the screen, and combines all of the material to create a current instantiation of the screen. The subroutine begins at step 1105 where it retrieves the template indicated by the link for the selected screen element. The subroutine then continues to step 1107 to determine if the template has been customized for this client. If so, the subroutine substitutes the appropriate customized information in each succeeding step of the subroutine. In step 1110 the subroutine then retrieves a background for the template, and continues to step 1115 to retrieve button icons for control buttons on the screen. In step 1120, the subroutine retrieves other media files for containers in the template whose contents do not change with generation of different screens. In step 1125, the subroutine determines if the template includes a container for the human avatar UI guide. If so, the subroutine continues to step 1130 to retrieve an appropriate video clip for the current template. In alternate embodiments, audio clips or other types of media could instead be retrieved. After step 1130, or if the subroutine determined in step 1125 that there was not a human avatar UI guide, the subroutine continues to step 1135 to select products, advertisements, and product categories for all other containers in the template. The subroutine then continues to step 1140 to retrieve media files for the selections. In step 1145 the subroutine combines the template and the other retrieved material to create the screen.
The subroutine then includes links in step 1150 for each container with a user-activatable screen element, with the links to the appropriate IAG server templates for the corresponding product, product type, or advertisement displayed in the screen element. In step 1155 the subroutine returns.
From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.

Claims

1. A method for implementing a help system on a computer having a display and an input device, the help system to assist a user in obtaining information about physical products, the display showing a current screen, the method comprising: displaying at least one product screen element on the current screen, each product screen element representing a physical product; displaying at least one control screen element on the current screen, each control screen element representing a screen other than the current screen; after the displaying of the product screen and control screen elements, displaying a full-motion video image of a human actor on the current screen, the displayed human actor interacting with the displayed product screen and control screen elements; and playing an audio segment synchronized with lips on the displayed human actor to assist the user by, as the displayed human actor interacts with a displayed product screen element, audibly explaining to the user how to use the input device to activate the displayed product screen element in order to obtain information about the physical product represented by the displayed product screen element; and as the displayed human actor interacts with a displayed control screen element, audibly explaining to the user how to use the input device to activate the displayed control screen element in order to show on the display the screen represented by the displayed control screen element, in such a manner that the current screen resembles a television program.
2. The method of claim 1 wherein the display is touch-sensitive, wherein the input device is the touch-sensitive display, wherein the user activates a displayed screen element by touching where the screen element is displayed on the display, and including changing information shown on the display in response to the touching.
3. The method of claim 2 wherein the display is located in a retail store to benefit customers and wherein the physical products represented by the displayed product screen elements are available for purchase or order.
4. The method of claim 3 including: displaying on the current screen the human actor interacting with a physical product available in the store; and as the displayed human actor interacts with the physical product, audibly providing information about the physical product to the user.
5. The method of claim 1 including: displaying on the current screen the human actor activating a displayed screen element; and in response, changing information shown on the display.
6. The method of claim 1 including: determining a current geographic location of the display; and in response, customizing the audio segment to reflect linguistic characteristics of the determined geographic location.
7. A computer-implemented method for assisting a user in using a computer program, the method comprising: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a video image of an avatar interacting with a displayed screen element; and as the displayed avatar interacts with the displayed screen element, explaining to the user information related to the displayed screen element.
8. The method of claim 7 wherein the computer program is interactive, wherein the explaining includes how to select the displayed screen element, wherein the user selects a displayed screen element as explained by the avatar, and including changing execution of the computer program in response to the selecting by the user.
9. The method of claim 7 wherein the computer program is displaying a current screen including the displayed screen elements and the displayed avatar, and wherein the explaining describes how to select the displayed screen element so as to display a distinct screen.
10. The method of claim 7 wherein the computer program is executed on a computer system including a touch-sensitive display, wherein the displayed screen elements are displayed on the display, and wherein a displayed screen element is selected by touching where the displayed screen element is displayed.
11. The method of claim 7 wherein a variety of types of screen elements are displayed, wherein the displayed avatar is human, and wherein the displayed human avatar interacts with a displayed screen element in such a manner as to resemble a television program.
12. The method of claim 7 wherein the interacting by the displayed avatar with the displayed screen element includes visibly indicating the displayed screen element.
13. The methods of claim 7, claim 11, or claim 12 wherein in response to the interacting by the displayed avatar with the displayed screen element, the displayed screen element changes.

14. The methods of claim 7, claim 12, or claim 13 wherein in response to the interacting by the displayed avatar with the displayed screen element, the displayed screen element flares.

15. The method of claim 7 wherein the interacting by the displayed avatar with the displayed screen element includes touching an edge of the displayed screen element.

16. The method of claim 7 wherein the interacting by the displayed avatar with the displayed screen element includes touching an interior of the displayed screen element.

17. The method of claim 7 wherein the interacting by the displayed avatar with the displayed screen element includes walking behind the displayed screen element.

18. The method of claim 7 wherein the interacting by the displayed avatar with the displayed screen element includes moving the displayed screen element.

19. The method of claim 7 wherein the interacting by the displayed avatar with the displayed screen element includes carrying the displayed screen element.

20. The method of claim 19 wherein the displayed avatar constructs a current screen by, for each of a plurality of displayed screen elements, carrying the displayed screen element to a destination position on the current screen and placing the carried displayed screen element at the destination position.
21. The method of claim 7 wherein the interacting by the displayed avatar with the displayed screen element includes demonstrating how to select the displayed screen element.
22. The method of claim 21 wherein the computer program responds to the demonstrated selecting as if the user had performed the selecting.
23. The method of claim 7 wherein the displayed avatar is a human avatar, wherein the displayed screen element is a control screen element, and wherein the interacting by the displayed avatar with the displayed screen element includes: selecting the displayed control screen element; as the displayed human avatar selects the displayed control screen element, audibly explaining to the user how the selecting affects execution of the computer program; and in response to the selecting, changing execution of the computer program as audibly explained.
24. The method of claim 7 wherein the computer program includes a World Wide Web page as a user interface screen, and wherein the displayed screen elements and the displayed avatar are part of the page.
25. The method of claim 7 wherein the computer program is executed on a computer system with a display and with audio capabilities, wherein the computer program displays a current screen on the display, wherein the displayed current screen includes the displayed screen elements and the displayed avatar, and wherein the explaining to the user is audible.
26. The method of claim 25 wherein the avatar moves about the current screen.
27. The method of claim 25 wherein in response to selection of a displayed screen element, the computer program displays a second screen on the display, the second screen distinct from the current screen.
28. The method of claim 27 wherein an audio representation only of the avatar is available on the second screen.
29. The method of claim 7 wherein the displaying of the video image of the avatar includes: selecting a pre-recorded video clip; loading the selected video clip when the avatar is first to be displayed; and after the selected video clip is loaded, playing the video clip to display the avatar.
30. The method of claim 29 wherein the selected video clip is an MPEG file.
31. The method of claim 7 wherein the displaying of the video image of the avatar uses streaming video.
32. The method of claim 7 wherein the displayed avatar is customized based on current conditions.
33. The method of claim 32 wherein the customizing is based on the user.
34. The method of claim 33 wherein the customizing is based on information supplied by the user.
35. The method of claim 34 wherein the computer program is executed on a computer system with an input device, wherein the supplied information is a request from the user via the input device, and wherein the customizing is in response to the request.
36. The method of claim 33 wherein the customizing is in response to information obtained about the user.
37. The method of claim 32 wherein the customizing is based on a geographic location of a display of the computer program.
38. The method of claim 32 wherein the customizing is based on current information available to be displayed.
39. The method of claim 32 wherein the customizing involves selecting a visual appearance for the avatar.
40. The method of claim 39 wherein the selected visual appearance is that of the user.
41. The method of claim 39 wherein the selected visual appearance is of a particular gender.
42. The method of claim 39 wherein the selected visual appearance is of a particular race.
43. The method of claim 39 wherein the selected visual appearance is of a particular age.
44. The method of claim 32 wherein the customizing involves selecting an audio element for the avatar.
45. The method of claim 44 wherein the selected audio element is of a particular language.
46. The method of claim 44 wherein the selected audio element is of a particular accent.
47. The method of claim 7 wherein the displayed screen element is a control screen element, wherein the explaining to the user relates to how to select the displayed control screen element to affect execution of the computer program, and including: displaying at least one object information screen element for the computer program, each object information screen element selectable by the user to display information about a corresponding object; displaying the avatar interacting with an object corresponding to a displayed object information screen element; and as the displayed avatar interacts with the displayed object, explaining to the user how to use the displayed object.
48. The method of claim 7 wherein the computer program is executed on a computer system with a scanner, wherein the avatar explains operation of the scanner, and including: receiving information from the scanner indicating a product; and in response, displaying information about the product.
49. The method of claim 7 wherein the displayed screen element is a control screen element, wherein the explaining to the user relates to how to select the displayed control screen element to affect execution of the computer program, and including: displaying at least one information screen element for the computer program, each information screen element selectable by the user to display corresponding information; displaying the avatar interacting with a displayed information screen element; and as the displayed avatar interacts with the displayed information screen element, explaining to the user how to select the displayed information screen element to display the corresponding information.
50. The method of claim 49 wherein the avatar is an expert in the information corresponding to the displayed information screen elements.
51. The method of claim 7 wherein the avatar is a celebrity.
52. The method of claim 7 wherein the displayed screen element is a control screen element, wherein the explaining to the user relates to how to select the displayed control screen element to affect execution of the computer program, and including: displaying at least one instructional screen element for the computer program, each instructional screen element providing instructions to the user but not selectable by the user; and as the displayed avatar indicates a displayed instructional screen element, explaining to the user how to follow the instructions for the displayed instructional screen element.
53. The method of claim 7 wherein when user interaction is not received within a specified period of time, the computer program displays a distinct wait screen designed to attract users to the computer program.
54. The method of claim 7 wherein the displayed video image is a full-motion, full-body image of the avatar.
55. The method of claim 7 wherein the avatar is displayed using a codec.
56. The method of claim 7 wherein the avatar is displayed using a pre-recorded video clip of a human.
57. The method of claim 7 wherein the avatar is displayed using a pre-generated video clip of an anthropomorphic figure, the video clip consisting of multiple frames to be displayed in rapid succession.
58. The method of claim 7 wherein the avatar is personified as a human.
59. The method of claim 7 wherein the displayed screen element represents a good.
60. The method of claim 7 wherein the displayed screen element represents a service.
61. The method of claim 7 wherein the avatar is part of a guide system that assists in using the computer program.
62. The method of claim 61 wherein the computer program includes a help facility separate from the guide system.
63. The method of claim 61 wherein the displayed avatar is integrated with operation of the computer program such that a separate help facility is not needed.
64. The method of claim 7 wherein the computer program is displayed in a retail store to assist customers.
65. The method of claim 7 wherein the computer program is displayed to assist users in obtaining goods or services.
66. The method of claim 7 wherein the computer program is displayed to users on personal airplane display screens.
67. The method of claim 7 wherein the computer program is displayed to a user at a kiosk to provide information about a nearby location.
68. The method of claim 7 wherein the explaining is via sign language performed by the displayed avatar.
69. A computer-implemented method for assisting a user in retrieving information about a location, the method comprising: displaying at least one location screen element, each location screen element representing a sub-location at the location; displaying an avatar interacting with a displayed location screen element; as the displayed avatar interacts with the displayed location screen element, explaining to the user information about selecting the displayed location screen element; and in response to selection of the displayed location screen element, presenting to the user information about the sub-location represented by the selected location screen element.
70. The method of claim 69 wherein the computer displays a current screen including the displayed screen elements and the displayed avatar, and wherein a map of the location is displayed as a background filling the current screen.
71. The method of claim 69 wherein a display for the computer is located in a store, and wherein the location screen elements are displayed to provide information about the store.
72. The method of claim 71 wherein the store is a retail store with a plurality of departments, and wherein the displayed location screen elements correspond to the departments.
73. The method of claim 71 wherein the displayed location screen elements correspond to aisles in the store.
74. The method of claim 71 wherein the store provides a plurality of goods, and including: displaying at least one screen element selectable by the user to display information about a corresponding provided good; displaying the avatar interacting with the good corresponding to a displayed screen element; and as the displayed avatar interacts with the displayed good, explaining to the user information about the displayed good.
75. The method of claim 71 wherein the store provides a plurality of services, and including: displaying at least one service screen element, each service screen element selectable by the user to display information about a corresponding provided service; displaying the avatar using the service corresponding to a displayed service screen element; and as the displayed avatar uses the service, explaining to the user information about the service.
76. The method of claim 71 wherein the display for the computer is located near an entrance of the store, and wherein the avatar displayed on the display acts as a greeter of people entering the store.
77. The method of claim 71 wherein the explaining provides directions for locating within the store the sub-location represented by the selected location screen element.
78. The method of claim 69 wherein the location is a virtual location.
79. The method of claim 69 wherein the explanation audibly provides information about a product available at the sub-location represented by the selected location screen element.
80. The method of claim 69 wherein the method is performed on a computer including a touch-sensitive display, wherein the displayed screen elements are displayed on the display, and wherein a displayed screen element is selected by touching where the screen element is displayed.
81. The method of claim 69 wherein a variety of types of screen elements are displayed, wherein the avatar is human, and wherein the displayed human avatar interacts with a displayed screen element in such a manner that the displayed human avatar interacting with the displayed screen element resembles a television program.
82. The method of claim 81 wherein the interacting by the displayed human avatar with the displayed screen element includes demonstrating how to select the displayed screen element.
83. The method of claim 82 wherein the computer program responds to the demonstrated selecting as if the user had performed the selecting.
84. The method of claim 69 wherein in response to the interacting by the displayed avatar with the displayed screen element, the displayed screen element flares.
85. The method of claim 69 wherein the displaying of the avatar includes: selecting a pre-recorded video clip; loading the selected video clip when the avatar is first to be displayed; and after the selected video clip is loaded, playing the video clip to display the avatar.
86. The method of claim 69 wherein the displayed avatar is customized based on current conditions.
87. The method of claim 86 wherein the customizing involves selecting a visual appearance for the avatar.
88. The method of claim 86 wherein the customizing involves selecting an audio element for the avatar, the audio element used for the explaining.
89. The method of claim 86 wherein the customizing is based on the user.
90. The method of claim 89 wherein the customizing based on the user is from information supplied by the user.
91. The method of claim 90 wherein the supplied information is a request from the user, and wherein the customizing is in response to the request.
92. The method of claim 89 wherein the customizing based on the user is in response to information obtained about the user.
93. The method of claim 69 wherein when user interaction is not received within a specified period of time, the guide system displays a wait screen designed to attract users.
94. A computer-implemented method for displaying on a computer an interactive guide system for physical objects, the guide system to assist a user in obtaining information about the physical objects, the method comprising: displaying at least one physical object screen element, each physical object screen element representing a distinct physical object; displaying a human avatar interacting with a displayed physical object screen element; as the displayed human avatar interacts with the displayed physical object screen element, audibly explaining to the user information about selecting the displayed physical object screen element; and in response to selecting of the displayed physical object screen element by the user, presenting to the user information about the physical object represented by the selected physical object screen element.
95. The method of claim 94 wherein the physical objects are products available to customers.
96. The method of claim 95 wherein the computer includes a touch-sensitive display, wherein the displayed screen elements are displayed on the display, and wherein a displayed screen element is selected by touching where the screen element is displayed.
97. The method of claim 94 wherein a variety of types of screen elements are displayed, and wherein the displayed human avatar interacts with a displayed screen element in such a manner that the displayed human avatar interacting with the displayed screen element resembles a television program.
98. The method of claim 97 wherein the interacting by the displayed human avatar with the displayed screen element includes demonstrating how to select the displayed screen element.
99. The method of claim 97 wherein in response to the interacting by the displayed human avatar with the displayed screen element, the displayed screen element flares.
100. The method of claim 94 wherein the human avatar is a full-motion video image, and wherein the displaying of the full-motion video image of the human avatar includes: selecting a pre-recorded video clip; loading the selected video clip when the human avatar is to be displayed; and after the selected video clip is loaded, playing the video clip to display the human avatar.
101. The method of claim 94 wherein the displayed human avatar is customized based on current conditions.
102. The method of claim 101 wherein the customizing involves selecting a visual appearance for the human avatar.
103. The method of claim 101 wherein the customizing involves modifying audio used for the audible explanations.
104. The method of claim 101 wherein the customizing is based on the user.
105. The method of claim 104 wherein the customizing based on the user is from information supplied by the user.
106. The method of claim 104 wherein the customizing based on the user is in response to information obtained about the user.
107. The method of claim 94 including: displaying the human avatar interacting with the physical object represented by the displayed physical object information screen element; and as the displayed human avatar interacts with the displayed physical object, audibly explaining to the user information about the displayed physical object.
108. The method of claim 94 wherein the computer includes a scanner, wherein the human avatar explains operation of the scanner, and including: receiving information from the scanner indicating a product specified by a customer; and in response, displaying information about the product.
109. The method of claim 94 wherein the human avatar is a recognized expert regarding the physical objects corresponding to the displayed screen elements.
110. The method of claim 94 wherein the human avatar is a celebrity.
111. The method of claim 94 wherein when user interaction is not received within a pre-selected period of time, the guide system displays a distinct wait screen.
112. A method for implementing a computer program with an interactive guide system to assist a user in using the computer program, the computer program displaying a current screen including control screen elements and information screen elements, each control screen element selectable by the user to display a different screen, each information screen element selectable by the user to retrieve information, the method comprising: displaying at least one control screen element; displaying at least one information screen element; displaying a full-motion video image of a human avatar interacting with displayed control screen and information screen elements; as the displayed human avatar interacts with a displayed control screen element, playing audio synchronized with the displayed human avatar to explain to the user how to select the displayed control screen element in order to display a different screen; as the displayed human avatar interacts with a displayed information screen element, playing audio synchronized with the displayed human avatar to explain to the user how to select the displayed information screen element in order to retrieve the information represented by the displayed information screen element; when an indication is received of the user selecting the displayed control screen element with an input device, displaying a different screen in response to the selecting; and when an indication is received of the user selecting the displayed information screen element with the input device, retrieving the information represented by the displayed information screen element.
113. The method of claim 112 wherein the computer program is executed on a computer system including a touch-sensitive display, wherein the screen elements are displayed on the display, and wherein a displayed screen element is selected by touching where the control screen element is displayed.
114. The method of claim 112 wherein the displayed human avatar interacts with displayed screen elements in such a manner that the current screen resembles a television program.
115. The method of claim 114 wherein the interacting by the displayed human avatar with the displayed screen elements includes visibly indicating the displayed screen elements.
116. The method of claim 114 wherein the interacting by the displayed human avatar with the displayed screen element includes touching an edge of the displayed control screen element.
117. The method of claim 114 wherein the interacting by the displayed human avatar with the displayed screen element includes carrying the displayed screen element.
118. The method of claim 114 wherein the interacting by the displayed human avatar with the displayed screen element includes demonstrating how to select the displayed screen element.
119. The method of claim 112 wherein in response to the interacting by the displayed human avatar with a displayed screen element, the displayed screen element changes.
120. The method of claim 112 wherein in response to selection of a displayed screen element, the computer program displays a second screen on the display, the second screen distinct from the current screen.
121. The method of claim 120 wherein an audio representation only of the human avatar is available on the second screen.
122. The method of claim 112 including: displaying at least one object information screen element for the computer program, each object information screen element selectable by the user to display information about a corresponding object; displaying the human avatar interacting with an object corresponding to a displayed object information screen element; and as the displayed human avatar interacts with the displayed object, audibly explaining to the user how to use the displayed object.
123. The method of claim 112 wherein the guide system is integrated with operation of the computer program such that a separate help facility is not needed.
124. The method of claim 112 wherein the displayed avatar is customized based on current conditions.
125. The method of claim 124 wherein the customizing involves selecting a visual appearance for the avatar.
126. The method of claim 124 wherein the customizing involves selecting an audio element for the avatar, the audio element used as the played audio.
127. The method of claim 124 wherein the customizing is based on the user.
128. The method of claim 127 wherein the customizing based on the user is from information supplied by the user.
129. The method of claim 127 wherein the customizing based on the user is in response to information sensed about the user.
130. A computer system for executing a computer program with a guide system, the guide system to assist a user in using the computer program, comprising: a display for displaying at least one screen element for the computer program, each displayed screen element selectable by the user to affect execution of the computer program, and for displaying an avatar interacting with a displayed screen element; an input device for receiving selections by the user of displayed screen elements; and an audio mechanism for, as the displayed avatar interacts with the displayed screen element, audibly explaining to the user how to select the displayed screen element to affect execution of the computer program.
131. The computer system of claim 130 wherein the display is touch-sensitive and the input device is the touch-sensitive display, and wherein a displayed screen element is selected by touching the display where the screen element is displayed.
132. The computer system of claim 130 wherein a variety of types of screen elements are displayed, and wherein the displayed avatar interacts with a displayed screen element in such a manner that the displayed avatar interacting with the displayed screen element resembles a television program.
133. The computer system of claim 132 wherein the interacting by the displayed avatar with the displayed screen element includes visibly indicating the displayed screen element.
134. The computer system of claim 132 wherein the interacting by the displayed human avatar with the displayed screen element includes carrying the displayed screen element.
135. The computer system of claim 132 wherein the interacting by the displayed human avatar with the displayed screen element includes demonstrating how to select the displayed screen element.
136. The computer system of claim 130 wherein in response to the interacting by the displayed human avatar with the displayed screen element, the displayed screen element flares.
137. The computer system of claim 130 wherein the avatar is human and wherein the displaying of the human avatar includes: selecting a pre-recorded video clip; loading the selected video clip when the human avatar is first to be displayed; and after the selected video clip is loaded, playing the video clip to display the human avatar.
138. The computer system of claim 130 wherein the displayed avatar is customized based on current conditions.
139. The computer system of claim 138 wherein the customizing is based on a geographic location of the computer system.
140. The computer system of claim 130 including a scanner for receiving information indicating a product.
141. The computer system of claim 130 wherein when user interaction is not received within a pre-selected period of time, the computer program displays a wait screen designed to attract users.
142. A computer system for executing an interactive computer program with a guide system, the guide system to assist a user in using the computer program, comprising: means for displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; means for displaying a video image of a human avatar interacting with a displayed screen element; and means for, as the displayed human avatar interacts with the displayed screen element, audibly explaining to the user how to select the displayed screen element to affect execution of the computer program.
143. The computer system of claim 142 wherein the means for displaying is a touch-sensitive display, wherein the displayed screen elements are displayed on the display, and wherein a displayed screen element is selected by touching where the screen element is displayed.
144. The computer system of claim 142 wherein the displaying of the video image of the human avatar includes: selecting a pre-recorded video clip; loading the selected video clip; and playing the video clip to display the human avatar.
145. The computer system of claim 142 wherein the displayed human avatar is customized based on current conditions.
146. The computer system of claim 142 including a scanner for receiving information indicating a product.
147. A computer-readable medium containing instructions for controlling a computer system to assist a user in using the computer program, by: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a video image of an avatar interacting with a displayed screen element; and as the displayed avatar interacts with the displayed screen element, explaining to the user how to select the displayed screen element to affect execution of the computer program.
148. The computer-readable medium of claim 147 wherein the computer system includes a touch-sensitive display, wherein the displayed screen elements are displayed on the display, and wherein a displayed screen element is selected by touching on the display where the screen element is displayed.
149. The computer-readable medium of claim 147 wherein a variety of types of screen elements are displayed, and wherein the displayed avatar is integrated with the displayed screen elements in such a manner as to resemble a television program.
150. The computer-readable medium of claim 147 wherein in response to the interacting by the displayed avatar with the displayed screen element, the displayed screen element flares.
151. The computer-readable medium of claim 147 wherein the displayed avatar is customized based on current conditions.
152. The computer-readable medium of claim 147 wherein the computer system includes a scanner, wherein the avatar explains operation of the scanner, and wherein the computer system is further controlled to perform the steps of: receiving information from the scanner indicating a product; and in response, displaying information about the product.
153. A computer-implemented method for assisting a user in using a computer program, the method comprising: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a full-motion video image of an avatar interacting with a displayed screen element; and as the displayed avatar interacts with the displayed screen element, explaining to the user information about the displayed screen element.
154. The method of claim 153 wherein the video image comprises multiple individual frames to be displayed in rapid succession, and wherein the full-motion displaying includes at least 30 of the frames each second.
155. The method of claim 153 wherein the computer program displays a current screen, customized based on the user.
156. The method of claim 153 wherein the interacting by the displayed avatar includes a gesture to the displayed screen element.
157. A computer-implemented method for assisting a user in using a computer program, the method comprising: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a video image of an avatar interacting with a displayed screen element, the avatar not an animation and the displayed screen element not part of the video image; and as the displayed avatar interacts with the displayed screen element, explaining to the user information about the displayed screen element.
158. The method of claim 157 wherein the explained information indicates how to select the displayed screen element to affect execution of the computer program.
159. The method of claim 157 wherein the computer program displays a current screen customized based on the user.
160. The method of claim 157 wherein the interacting by the displayed avatar includes visibly indicating the displayed screen element.
161. A computer-implemented method for assisting a user in using a computer program, the method comprising: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a video image of an avatar interacting with a displayed screen element, the video image displayed using a codec; and as the displayed avatar interacts with the displayed screen element, explaining to the user information about the displayed screen element.
162. The method of claim 161 wherein the explained information indicates how to select the displayed screen element to affect the computer program.
163. The method of claim 161 wherein the computer program display is customized based on the user.
164. The method of claim 161 wherein the interacting by the displayed avatar includes visibly indicating the displayed screen element.
165. A computer-implemented method for assisting a user in using a computer program, the method comprising: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a video image of a human avatar interacting with a displayed screen element; and as the displayed human avatar interacts with the displayed screen element, explaining to the user information about the displayed screen element.
166. The method of claim 165 wherein the human avatar is displayed using a pre-recorded video clip of a human.
167. The method of claim 165 wherein the explained information indicates how to select the displayed screen element to affect execution of the computer program.
168. The method of claim 165 wherein the computer program displays a current screen customized based on the user.
169. The method of claim 165 wherein the interacting by the displayed avatar includes gesturing to the displayed screen element.
170. A computer-implemented method for assisting a user in using a computer program, the method comprising: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a video image of an anthropomorphic avatar interacting with a displayed screen element; and as the displayed avatar interacts with the displayed screen element, explaining to the user information about the displayed screen element.
171. The method of claim 170 wherein the avatar is displayed using a pre-generated video clip, the video clip consisting of multiple frames to be displayed in rapid succession.
172. The method of claim 170 wherein the explained information indicates how to select the displayed screen element to affect execution of the computer program.
173. The method of claim 170 wherein the screen elements displayed are customized based on the user.
174. The method of claim 170 wherein the interacting by the displayed avatar includes a gesture to the displayed screen element.
175. A computer-implemented method for assisting a user in using a computer program, the method comprising: displaying at least one screen element for the computer program, each screen element selectable by the user to affect execution of the computer program; displaying a video image of an avatar personifying a human, the displayed avatar interacting with a displayed screen element; and as the displayed avatar interacts with the displayed screen element, explaining to the user information about the displayed screen element.
176. The method of claim 175 wherein the explained information indicates how to select the displayed screen element to display a distinct screen.
177. The method of claim 175 wherein the computer program displays a current screen, and wherein the current screen is customized based on the user.
178. The method of claim 175 wherein the interacting by the displayed avatar includes visibly indicating the displayed screen element.
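The claims above recite behavior, not an implementation. As a purely illustrative sketch (all names, file-naming conventions, and the 60-second threshold are hypothetical, not taken from the specification), the user-based clip customization of claims 29 and 32–46 and the idle-timeout wait screen of claims 53, 93, and 111 might be modeled as:

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    # Hypothetical customization inputs (cf. claims 33-46).
    language: str = "en"     # selects the audio element's language (claim 45)
    gender: str = "neutral"  # selects the avatar's visual appearance (claim 41)


def select_avatar_clip(profile: UserProfile) -> str:
    """Select a pre-recorded MPEG clip customized to the user.

    Claims 29-30 recite selecting, loading, and then playing an MPEG file;
    only the selection step is modeled here, as a filename lookup.
    """
    return f"avatar_{profile.gender}_{profile.language}.mpg"


def should_show_wait_screen(idle_seconds: float,
                            timeout_seconds: float = 60.0) -> bool:
    """Decide whether to display the attract/wait screen.

    Claims 53, 93, and 111: when no user interaction is received within a
    specified period, a distinct wait screen is displayed to attract users.
    The claims leave the period unspecified; 60 seconds is an assumption.
    """
    return idle_seconds >= timeout_seconds
```

Under this sketch, `select_avatar_clip(UserProfile(language="fr", gender="female"))` yields `avatar_female_fr.mpg`, and any idle time at or past the threshold triggers the attract screen.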
PCT/US1999/016808 1998-07-21 1999-07-21 Method and system for providing an avatar interactive computer guide system WO2000005639A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU57705/99A AU5770599A (en) 1998-07-21 1999-07-21 Method and system for providing an avatar interactive computer guide system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12068798A 1998-07-21 1998-07-21
US09/120,687 1998-07-21

Publications (1)

Publication Number Publication Date
WO2000005639A2 true WO2000005639A2 (en) 2000-02-03

Family

ID=22391930

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/016808 WO2000005639A2 (en) 1998-07-21 1999-07-21 Method and system for providing an avatar interactive computer guide system

Country Status (2)

Country Link
AU (1) AU5770599A (en)
WO (1) WO2000005639A2 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1160664A3 (en) * 2000-05-17 2003-09-17 Sharp Kabushiki Kaisha Agent display apparatus displaying personified agent for selectively executing process
EP1160664A2 (en) * 2000-05-17 2001-12-05 Sharp Kabushiki Kaisha Agent display apparatus displaying personified agent for selectively executing process
KR20010109943A (en) * 2000-06-05 2001-12-12 원석규 Information Providing Method in Medical Information System
KR20020004578A (en) * 2000-07-06 2002-01-16 황인호 A service providing system and the providing method using virtual helper
KR20020022885A (en) * 2000-09-21 2002-03-28 이성웅 System and method for producing virtual character of mascot and method for computer service using virtual character
KR20020028085A (en) * 2000-10-06 2002-04-16 최인수 System for operating a research based on the internet surroundings and method for operating the research using the same
KR20020030224A (en) * 2000-10-16 2002-04-24 이세진 Method for providing information using a web guide
KR20020035706A (en) * 2000-11-07 2002-05-15 허용도 Chatting service synchronized with URL
KR20030066841A (en) * 2002-02-05 2003-08-14 보람연구소(주) Avatar agent system
KR20040045633A (en) * 2002-11-25 2004-06-02 주식회사 투윈테크 System and method for rearing cyber character capable of controlling application programs
US8136038B2 (en) * 2005-03-04 2012-03-13 Nokia Corporation Offering menu items to a user
US9549059B2 (en) 2005-03-04 2017-01-17 Nokia Technologies Oy Offering menu items to a user
US8176421B2 (en) 2008-09-26 2012-05-08 International Business Machines Corporation Virtual universe supervisory presence
EP2610724A1 (en) * 2011-12-27 2013-07-03 Tata Consultancy Services Limited A system and method for online user assistance
US9911351B2 (en) 2014-02-27 2018-03-06 Microsoft Technology Licensing, Llc Tracking objects during processes
US11923058B2 (en) 2018-04-10 2024-03-05 Electronic Caregiver, Inc. Mobile system for the assessment of consumer medication compliance and provision of mobile caregiving
US11488724B2 (en) 2018-06-18 2022-11-01 Electronic Caregiver, Inc. Systems and methods for a virtual, intelligent and customizable personal medical assistant
US11791050B2 (en) 2019-02-05 2023-10-17 Electronic Caregiver, Inc. 3D environment risks identification utilizing reinforced learning

Also Published As

Publication number Publication date
AU5770599A (en) 2000-02-14

Similar Documents

Publication Publication Date Title
US7657457B2 (en) Graphical user interface for product ordering in retail system
US20180095734A1 (en) System and method for creating a universally compatible application development system
Hartmann et al. Towards a theory of user judgment of aesthetics and user interface quality
US8606645B1 (en) Method, medium, and system for an augmented reality retail application
US8589253B2 (en) Software system for decentralizing eCommerce with single page buy
US5745710A (en) Graphical user interface for selection of audiovisual programming
WO2000005639A2 (en) Method and system for providing an avatar interactive computer guide system
US9141192B2 (en) Interactive digital catalogs for touch-screen devices
US20070276721A1 (en) Computer implemented shopping system
US20030067494A1 (en) Authoring system for computer-based
US20050165663A1 (en) Multimedia terminal for product ordering
CN112770187B (en) Shop data processing method and device
US20090012846A1 (en) Computerized book reviewing system
WO1995015533A1 (en) Computer system for allowing a consumer to purchase packaged goods at home
CN115151938A (en) System for identifying products within audiovisual content
WO2006126205A2 (en) Systems and uses and methods for graphic display
WO2001069364A2 (en) Natural user interface for virtual reality shopping systems
US20140222627A1 (en) 3d virtual store
Friedewald et al. Science and technology roadmapping: Ambient intelligence in everyday life (AmI@Life)
US10360946B1 (en) Augmenting content with interactive elements
Koontz et al. Mixed reality merchandising: bricks, clicks–and mix
Barbara et al. Extended store: How digitalization effects the retail space design
CN111768269A (en) Panoramic image interaction method and device and storage medium
Maik et al. Evaluating Forms of User Interaction with a Virtual Exhibition of Household Appliances
Spence et al. Interaction

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
WA Withdrawal of international application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642