WO2001088662A2 - Answering natural language queries - Google Patents

Answering natural language queries

Info

Publication number
WO2001088662A2
Authority
WO
WIPO (PCT)
Prior art keywords
natural language
user
query
information
question
Prior art date
Application number
PCT/US2001/015711
Other languages
French (fr)
Other versions
WO2001088662A3 (en)
Inventor
Gary Mekikian
Deniz Yuret
Original Assignee
Answerfriend.Com
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/572,770 (US6957213B1)
Application filed by Answerfriend.Com
Priority to AU2001261631A1
Publication of WO2001088662A2
Publication of WO2001088662A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/3331 - Query processing
    • G06F16/334 - Query execution
    • G06F16/3344 - Query execution using natural language analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/205 - Parsing
    • G06F40/211 - Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/279 - Recognition of textual entities
    • G06F40/289 - Phrasal analysis, e.g. finite state techniques or chunking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/40 - Processing or translation of natural language
    • G06F40/55 - Rule-based translation
    • G06F40/56 - Natural language generation

Definitions

  • This invention relates to answering natural language queries.
  • Such a query may be a question phrased in English, for example, and the response may be sentences of text that belong to a body of free-text sources and are responsive to the question.
  • an index that is created in advance.
  • an index could include the words “Georgia” and “capital” and associated pointers to sentences that include those words.
  • the index can be used to find responsive sentences.
  • implicit references also known as anaphora in linguistic literature
  • one or more segments are identified as relevant to the query based at least in part on the implicit references.
  • implicit references improves the quality of the responses to the query.
  • a characteristic of natural language text is the use of words (references) that refer to other words or to concepts that appear in or are implied by other parts of the text (antecedents). For example, in the sentence “He is best known for his theory of relativity,” the word “he” (the reference) may refer to the name “Albert Einstein” (the antecedent) that appears in another sentence: “Albert Einstein was one of the greatest scientists of all time.” Two broad categorizations of references may be useful. One broad categorization is based on the positions of the antecedent and the reference. The other broad categorization is based on the type of reference. The first categorization is based on three distinct contexts in which the reference may be used in a question answering setting.
  • References of the kind that are based on position may occur in at least three different contexts in a question answering setting:
  • In sentence S2, the word “he” refers to “Albert Einstein” in sentence S1.
  • S4 China is a huge country in eastern Asia.
  • S5 It produces more cotton, rice, and wheat than any other country.
  • Q4 What is the scientific classification of rice?
  • the second categorization is based on the type of phrase used for the reference and includes the following five groups (examples included):
  • Pronoun: China is a big country. It is in Asia. Definite Noun Phrase: China is in Asia. This country produces rice.
  • Implementations of the invention take advantage of references to identify sentences in free-text sources that may answer natural language questions.
  • One goal of some implementations of the invention is to shorten the processing delay in receiving an answer after a question is posed at run time.
  • shifting processing steps from run time to a preliminary indexing phase can reduce the delay.
  • One way to shift processing to the indexing phase relates to the need to match synonyms that appear in a question and in a sentence. For example, the words “produces” and “raise” in the following question and sentence must be matched at run time:
  • Another opportunity for shifting processing to the indexing phase relates to the fact that there tend to be many more specializations of a concept than generalizations of a concept. For example, there are more than 250 countries (including China) that represent specializations of the concept "country” but relatively few generalizations for the concept "China”. So, in the following example, overall processing time is saved by generating and storing the generalizations of "China", the concept that appears in the sentence, during the indexing phase, rather than generating the larger number of specializations of "countries”, the concept that appears in the question:
  • the invention features receiving segments of text (e.g., sentences), each segment having elements. Implicit references are inferred from the elements of the segments. A query is received, and, in response to the query, one or more segments are identified as relevant to the query (e.g., by scoring) based at least in part on the implicit references. Implementations of the invention may include one or more of the following features.
  • the implicit references may be inferred prior to the time when the query is received and may be stored as entries in a searchable index, each entry including a pointer to one of the segments from which the reference was inferred.
  • One or more of the identified segments may be selected for presentation to a user.
  • the implicit references may be generalizations of the elements contained in the segments.
  • the references may be name variations that refer to elements, or indirect references to elements, or definite noun phrase references to elements, or pronouns, or null references.
  • the antecedents of the indirect references may be found in titles or headings.
  • the antecedent can be a concept recognized by a pattern of characters (e.g., a date) and it can be referred to by a generalization (e.g., "when" or "at that time”).
  • the scoring may be based on a matching of elements in a question with elements in an index file that contains information about the inferred implicit references.
  • the selection of segments to be displayed may be based on scoring. As few as one segment from a given source need be displayed.
  • the step of responding to the query may include identifying implicit references between the query and a previous query.
  • the features of the invention include receiving a question in the form of natural language speech from a source, automatically recognizing the speech, feeding the recognized speech to a natural language query engine operating on information accessible through a web site to generate a text answer to the question, synthesizing a spoken response to the question based on the answer, and playing the spoken response back to the source of the question.
  • Implementations of the invention may include one or more of the following features.
  • Commands may also be received in the form of natural language speech from a source, the commands may be determined using natural language processing, and the speech may be acted upon by controlling navigation in the web site.
  • the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and, in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
  • the invention features speaking a natural language question to a web site and receiving a natural language spoken answer to the question back from the website.
  • features of the invention include receiving a natural language question from a user, deriving information about the user from the question, selecting promotional information based on the information about the user, generating an answer to the question using a natural language query engine, and returning the answer to the user together with the promotional information.
  • Implementations of the invention may include one or more of the following features.
  • the information about the user may include preferences suggested by the question.
  • the promotional information may include advertising. Advertising tags may be generated for use in selecting the promotional information.
  • the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
  • the invention features receiving page information contained in a web page that is being viewed by a user, deriving user information about the user from the page information using a natural language query engine, selecting promotional information based on the user information, and displaying the promotional information to the user while the user is viewing the web page.
  • the invention features receiving a natural language question from a user, deriving information about the user from the question, selecting available information that is related to the question, generating an answer to the question using a natural language query engine, and returning the answer to the user together with the available information.
  • Implementations of the invention may include one or more of the following features.
  • the information about the user may include preferences suggested by the question.
  • the information related to the question may include articles. Advertising tags may be generated for use in selecting the information.
  • the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
  • the invention features entering a natural language question on a wireless personal electronic device, generating a natural language answer to the question using a natural language query engine, and presenting the natural language answer to a user.
  • Implementations of the invention may include one or more of the following features.
  • the question may be entered through a keyboard.
  • the answer may be presented through an interface of the device.
  • the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
  • the invention features presenting to a user a web page that comprises a shopping cart, displaying on the shopping cart an identification of an item for purchase, providing a mechanism that enables the user, without leaving the shopping cart, to ask a natural language question, and providing an answer to the natural language question.
  • Implementations of the invention may include one or more of the following features.
  • the mechanism may include a dialog box displayed over the shopping cart web page.
  • the dialog box may be displayed in association with the identification of the item for purchase.
  • the answer may include information about the item for purchase.
  • the user may take a step in response to the answer and complete a transaction on the shopping cart web page.
  • the answer may be provided from a natural language query engine.
  • the mechanism may be provided by an agent that watches items being added to the shopping cart.
  • the invention features receiving natural language questions about products, selecting product information using a natural language query engine based on the questions, and serving the product information from a web server to a user.
  • Implementations of the invention may include one or more of the following features.
  • the user may respond to the web server by buying one of the products.
  • the questions may identify desired characteristics of the products.
  • the natural language search is done by entering the search in a field of an email message and sending it to an email address.
  • the invention features a method that includes (a) receiving from a user, over an electronic network, an electronic mail message containing a written natural language query, (b) identifying the written natural language query in the electronic mail message, (c) using a natural language query engine to apply the natural language query to a body of information, to generate information responsive to the query, and (d) taking an action based on the responsive information.
  • Taking an action may include sending an electronic mail message containing the responsive information to the user over the publicly accessible electronic network, or filling an order for a product or service.
  • the query may include a question to be answered and the responsive information may include an answer to the question.
  • the query may include a request for an action or service and taking an action may include providing the action or service in response to the request.
  • the body of information may include textual content or commercial information.
  • the natural language query may be identified based on an indicator arranged by the user.
  • the indicator may include a position of the query within the electronic mail message, e.g., within a subject field of the electronic mail message.
  • the electronic mail message may be directed to an address that is prearranged to automatically receive and respond to the natural language query.
  • the invention features apparatus that includes (a) an electronic mail message server connected to receive electronic mail messages containing natural language queries from an electronic network and to send electronic mail messages containing responses to the natural language queries to the electronic network, (b) software adapted to identify written natural language queries in electronic mail messages received at the server and to provide information responsive to the natural language queries as electronic mail messages to the server for delivery, and (c) a natural language query engine connected to receive the natural language queries from the electronic mail message server and to apply them to a body of information to obtain the responsive information.
  • the invention features (a) automatically stripping natural language queries from electronic mail messages, (b) automatically applying the queries to a natural language search engine to generate responsive information, and (c) automatically taking action based on the responsive information.
  • free-text sources are prepared for use in answering questions by first applying a preprocessing routine 30, shown in figure 1.
  • the text is parsed (32) to identify sentence boundaries.
  • sentence boundaries are identified using patterns that are manually created, although other approaches could be used.
  • patterns are described that identify potential end-of-sentence markers (period, question mark, exclamation point, paragraph break, title break, sometimes quotes, etc.).
  • certain alternative uses are eliminated. In the case of a period, for example, the eliminated alternatives include periods that appear at the end of abbreviations and in acronyms and floating-point numbers.
  • Each sentence is marked (34) with a single new line in one implementation, or using markup tags in another implementation.
  • a unique sentence number is assigned (36) to each sentence.
  • the numbers are unique within a single index file. Therefore all sentences (whether or not from different documents) that go into a single index get unique numbers.
  • part of the unique numbers e.g., the first six digits
  • another part the last four digits
  • Titles and other headings are identified (38) in a manner that depends on the text format. Some formats (like HTML) use markup elements that identify the titles. Plain text sources require pattern-based analysis. Titles also are marked (40) to identify some possible indirect references. An example would be the sentence “The economy is booming.” found in an article entitled “China”. Notice that unlike in the case of the sentence “This country produces rice”, none of the words in the sentence “The economy is booming” directly refers to China. However, from the title one can infer that the subject is the Chinese economy. One way to index the title information is with respect to every sentence in its scope. Another more complicated way to use the title information is to build and make use of a knowledge base of part-whole, group-member relationships.
  • Such relationships would include, for example, the fact that a typical country has a population, an economy, a president, and an army, etc. Then, when any of these words (e.g., economy and president) are used by itself in a sentence, the indirect reference to the country can be identified.
  • the output of the pre-processing is a pre-processed text file 42.
  • the pre-processed text file has text of one sentence on each line preceded by a sentence number and a tab character and followed by the text of the applicable titles.
  • a special markup language may use specific tags to mark sentences, paragraphs, sections, documents and titles in the text.
  • all the text sources that go into a single application can be converted into one large pre-processed text file before being passed to the indexer.
  • Another implementation could use separate pre-processed files for each article and let the indexer read the information from multiple files.
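The pre-processing phase described in the items above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the abbreviation list, the acronym heuristic, and the single-title scope per document are assumptions, and only the line format (sentence number, tab, sentence text, titles) follows the description above.

```python
import re

# Illustrative end-of-sentence exceptions; the patent only says such patterns are manually created.
ABBREVIATIONS = {"Mr.", "Mrs.", "Dr.", "etc.", "e.g.", "i.e."}
ACRONYM = re.compile(r"([A-Z]\.)+")          # e.g. "U.S.A."

def split_sentences(text):
    """Split text at potential end-of-sentence markers (32), skipping markers that
    end abbreviations or acronyms rather than sentences."""
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if (token.endswith((".", "?", "!"))
                and token not in ABBREVIATIONS
                and not ACRONYM.fullmatch(token)):
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

def preprocess(documents, out_path):
    """Write the pre-processed text file (42): one sentence per line, preceded by a
    unique sentence number (36) and a tab, and followed by the title in scope (38, 40)."""
    sentence_id = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for title, body in documents:            # documents: iterable of (title, text) pairs
            for sentence in split_sentences(body):
                sentence_id += 1
                out.write(f"{sentence_id}\t{sentence}\t{title}\n")

preprocess([("China", "China is a huge country in eastern Asia. The economy is booming.")],
           "preprocessed.txt")
```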
  • the indexing phase 50 begins.
  • the purpose of the indexing phase is to use the pre- processed text file 42 to build an index file (table) 70 that lists foreseen ways in which a question may refer to an element of a sentence.
  • a single index file is built for all sources in the system.
  • element of a sentence, we mean a concept referred to in the sentence.
  • the concept may be referred to using an ordinary word (walk, cake), a name (Bill Clinton), a multi-word phrase (stand up, put on), a pronoun (he referring to Bill Clinton), a definite noun phrase (the country referring to China), an indirect reference (the economy, indirectly referring to China), or a null reference (there is no word referring to the concept but the concept is still referenced).
  • the answer to the question "When did Germans invade Poland?" would be 1939 even though there is no word in the second sentence directly or indirectly referring to this time phrase. Time phrases and place phrases often affect more than a single sentence, therefore creating null-references.
  • Each entry in the index file 70 includes a pointer to the sentence to which the questions may refer based on that entry.
  • the index file relates the elements found in a sentence to a unique identifier for that sentence.
  • the index file can be thought of as a two-column table in which one column contains sentence ID numbers and the other column contains the words, concepts, referents, generalizations, and synonyms (collectively referred to as the elements of the sentence).
  • the following three components are created for the index: the string buffer, the sentence id buffer, and the hash table.
  • the string buffer contains the null terminated strings of each element found in the source text.
  • the strings are placed in the buffer consecutively in no particular order.
  • the sentence id buffer contains sentence ID arrays for each element.
  • the array for a particular element can be identified by giving the start position in the buffer and length of the array.
  • the arrays are placed in the buffer consecutively in no particular order.
  • the hash table is a standard hash table that contains key- value pairs and that enables a fast search of a given key.
  • the key of each entry is a pointer to the string buffer.
  • the value of each entry consists of a pointer to the sentence ID buffer and an array length.
  • This structure enables finding the sentences that contain a particular element as follows: First, the element is searched in the hash table by comparing it with certain keys in the hash table. For each comparison, the string in the string buffer that the key points to is retrieved and compared to the element. When a match is found, the corresponding sentence ID buffer pointer and array length is read. Finally, the specified array is located in the sentence ID buffer.
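A compact sketch of that three-part structure appears below. It mirrors the string buffer, sentence-ID buffer, and hash table described above, but a Python dictionary stands in for the pointer-keyed hash table and the exact field layout is illustrative.

```python
from array import array

class SentenceIndex:
    """Sketch of the index layout: a string buffer of null-terminated element strings,
    a sentence-ID buffer holding consecutive ID arrays, and a hash table whose value
    gives the start position and length of an element's ID array."""

    def __init__(self):
        self.string_buffer = bytearray()   # null-terminated element strings, consecutive
        self.sid_buffer = array("I")       # sentence-ID arrays, stored consecutively
        self.hash_table = {}               # element -> (string offset, sid start, length)

    def add_element(self, element, sentence_ids):
        str_offset = len(self.string_buffer)
        self.string_buffer += element.encode("utf-8") + b"\x00"
        sid_start = len(self.sid_buffer)
        self.sid_buffer.extend(sentence_ids)
        self.hash_table[element] = (str_offset, sid_start, len(sentence_ids))

    def lookup(self, element):
        """Return the sentence IDs for an element, or an empty list if it is absent."""
        entry = self.hash_table.get(element)
        if entry is None:
            return []
        str_offset, sid_start, length = entry
        # Verify the key really points at the element's string in the string buffer.
        end = self.string_buffer.index(b"\x00", str_offset)
        assert self.string_buffer[str_offset:end].decode("utf-8") == element
        return list(self.sid_buffer[sid_start:sid_start + length])

idx = SentenceIndex()
idx.add_element("china", [4, 5])
idx.add_element("country", [4, 5, 17])
print(idx.lookup("country"))               # -> [4, 5, 17]
```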
  • each sentence in the preprocessed text file 42 is read and passed to several modules. Each module reads the words of a sentence and, based on them, recognizes certain types of constructions and references that represent foreseen ways in which a question may refer to an element of the sentence. When a module identifies one of those ways, it writes an entry into the index file 70 together with the unique identifying number of the sentence from which it was generated.
  • There are eight indexing modules, called: words, title, word-isa, ako, patterns, names, name-isa, and references.
  • the words module identifies (50) each word in the current sentence and adds it to the index file.
  • the words module also derives the stem of each word, using a table of English word and word stem pairs, such as flowers->flower and went->go.
  • the words module adds the stem to the index file for use, for example, in matching morphological variants of words that may appear in a question.
  • the words in each heading in the set of headings that apply to a sentence are added (52) to the index file with pointers to the sentence.
  • only one heading (the document title) is used for every sentence in a document.
  • the pre-processed text file contains tags for titles of various levels (document, chapter, section, subsection, for example) and sectioning tags that identify the scope of each title. Using these tags, the indexer is able to determine, for each sentence, the document, the chapter, and the section that it is in. The indexer combines all titles that apply and indexes them with the sentence. Title indexing may not be appropriate for every source. For example, encyclopedia sources have well defined titles that are usually appropriate and helpful whereas newspapers have partial sentences for titles, which are usually not appropriate for the above method.
  • the word-isa module generates (54) the generalizations (mentioned earlier) for words that appear in the sentence and for words that appear in headings. For example, if the word "red” appears in a sentence, the generalization word “color” is placed in the index file so that a question that asks "what color” will be matched to the sentence that includes "red". For this purpose, a database table with the same name (word-isa) and containing two columns is used. The first column contains words and the second column contains possible generalizations. For example, "red- >color" would be one of the entries in that table.
  • the ako module identifies generalizations (56) of generalizations already generated. For example, if the ako module encounters the generalization "color” that had been generated at step 54, the ako module adds the further generalization "attribute" to the index file.
  • the patterns module reviews (58) the text for special patterns of dates and numbers and adds the generalizations to the index file. For example, if the date January 23rd, 1998, appears in the text, the patterns module would add the generalizations “date”, “time”, and “when” so that when a question asks “when did this event happen?” it matches the date. Another example that appears frequently in an encyclopedia is the lifespan information in biographies. The first sentence of a typical biography starts “John Doe (1932-1987) ...”. A pattern that recognizes the life-span structure allows matching of questions of the type “When was John Doe born?”
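The generalization steps (54, 56, 58) can be sketched together as shown below. The word-isa and ako entries and the date and lifespan patterns are illustrative stand-ins for the tables and patterns the text describes, not the system's actual data.

```python
import re

# Illustrative entries; the patent builds these tables from lexical databases,
# data mining, and manual clean-up.
WORD_ISA = {"red": ["color"], "rice": ["crop", "food"]}
AKO = {"color": ["attribute"], "crop": ["plant"]}

DATE_PATTERN = re.compile(
    r"\b(January|February|March|April|May|June|July|August|September|October|November|December)"
    r"\s+\d{1,2}(st|nd|rd|th)?,\s+\d{4}\b")
LIFESPAN_PATTERN = re.compile(r"\(\d{4}-\d{4}\)")      # e.g. "John Doe (1932-1987)"

def generalization_entries(sentence_words, sentence_text):
    """Index entries added by the word-isa, ako, and patterns modules (steps 54-58)."""
    entries = set()
    for word in sentence_words:
        for gen in WORD_ISA.get(word, []):             # word-isa: red -> color
            entries.add(gen)
            for gen2 in AKO.get(gen, []):              # ako: color -> attribute
                entries.add(gen2)
    if DATE_PATTERN.search(sentence_text):             # patterns: dates answer "when" questions
        entries.update({"date", "time", "when"})
    if LIFESPAN_PATTERN.search(sentence_text):
        entries.update({"born", "died", "when"})
    return entries

print(generalization_entries(["the", "red", "flag"],
                             "The red flag was raised on January 23rd, 1998."))
# e.g. {'color', 'attribute', 'date', 'time', 'when'} (set order varies)
```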
  • the names module identifies proper names (60) in the text and generates and indexes the names accordingly.
  • the names module uses two methods to identify names in a sentence.
  • the first method uses a list of precompiled names and name variations to match those in the sentence. For example "United States” and its variations "U.S.A.” and "United States of America" would be in the name list and each would be recognized as a name when seen in the sentence.
  • the second method uses patterns that identify names and name types. Proper names are marked with capitalization and can be isolated easily. (There are some difficulties associated with sentence beginnings and small function words like “of” that are not capitalized in the middle of a name.)
  • the names-isa module generates generalizations (62) for proper names and adds them to the index file. For example, if the name "Clinton” is found in the text, the word “President” could be added to the index file. Other examples are "China -> country” and “Albert Einstein -> physicist”.
  • the name generalization makes use of a knowledge-based and a pattern-based method as well. If a name is found in the database, generalizations of the name are located in the name-isa table. This is a table just like the word-isa table that lists one or more generalizations for a given name.
  • the references module identifies (64) implicit references in the form of pronouns, definite noun phrases and name variants.
  • the module could also handle indirect references and null references. (Handling indirect references would require a “has-a” table similar to the “is-a” table discussed below.)
  • the "has-a” table would represent relationships of the kind: "A country has an economy, a president, an army, etc.”
  • Antecedents of references are determined using a short-term buffer 80.
  • the antecedents are added to the index file, and the short term buffer 80 is updated with the potential references for the new names in the sentence, in the following way:
  • the short-term buffer contains a set of pairs of the type “he -> Bill Clinton”, “country -> China”, i.e., a potential reference pointing to a potential antecedent.
  • the sentence is scanned for potential reference words or phrases. For each one discovered, the set of the potential antecedents is added to the index file. After each sentence is processed, the short-term buffer is cleared and updated with new potential references.
  • the new potential antecedents are the names and other concepts used in the current sentence (either explicitly mentioned or implicitly referred to).
  • the new potential references are all generalizations, name variants and pronouns compatible with these antecedents.
  • the short-term buffer 80 has two fields. One field contains antecedent words, the other contains potential references associated with each of the antecedents. As each element of a sentence is encountered, potential references are stored in the short-term buffer (e.g., when "China” is encountered in a sentence, the potential references “country”, “nation”, and “it” are added to the potential references field). When a referring word or phrase such as a pronoun or a definite noun phrase (e.g., "the country”) is encountered in a later portion of the text, the word is looked up in the short-term buffer to identify the possible antecedents.
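A minimal sketch of the short-term buffer mechanism (steps 64 and 80) follows. The table of potential references per antecedent is illustrative; in the described system it would be derived from the name-isa and word-isa tables plus compatible pronouns.

```python
# Illustrative potential references per antecedent; the system would derive these from
# the name-isa and word-isa tables plus compatible pronouns.
POTENTIAL_REFERENCES = {
    "China": ["country", "nation", "it"],
    "Albert Einstein": ["physicist", "scientist", "he"],
}

def index_references(sentences):
    """For each sentence, resolve referring words against the short-term buffer (80),
    emit (reference, antecedent, sentence_id) entries for the index, then clear the
    buffer and refill it from the antecedents of the current sentence (step 64)."""
    short_term_buffer = {}            # potential reference -> antecedent, e.g. "it" -> "China"
    entries = []
    for sentence_id, (text, antecedents) in enumerate(sentences, start=1):
        for word in (w.strip(".,?!").lower() for w in text.split()):
            if word in short_term_buffer:
                entries.append((word, short_term_buffer[word], sentence_id))
        short_term_buffer.clear()     # cleared after each sentence
        for antecedent in antecedents:
            for ref in POTENTIAL_REFERENCES.get(antecedent, []):
                short_term_buffer[ref] = antecedent
    return entries

sentences = [
    ("China is a huge country in eastern Asia.", ["China"]),
    ("It produces more cotton, rice, and wheat than any other country.", ["cotton", "rice", "wheat"]),
]
print(index_references(sentences))    # [('it', 'China', 2), ('country', 'China', 2)]
```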
  • the modules that are active during the indexing phase use the following lexical databases to perform their functions.
  • a skip-word database 82 lists function words such as prepositions, conjunctions, and auxiliary words that are not to be added to the index file.
  • the skip-word database is used in step 50 of figure 2.
  • a stem database 84 also used in step 50, contains a list of the stems of most English words.
  • the word stems can be found in sources such as the CELEX lexical database available from the Linguistic Data Consortium of the University of Pennsylvania. Other sources for this material include on-line dictionaries. Alternatively, one could use a rules-based approach by analyzing a word and stripping its suffixes.
  • a word-isa database 86 used in step 54, contains generalizations of single words that can potentially match question words.
  • the word-isa table is generated using three approaches: 1. Consulting online lexical ("word-related") databases like wordnet or thesauri like Roget's. 2. Writing data-mining programs that process large corpora (text sources) or the actual source to be indexed as a way to discover such relations. 3. Manually editing and cleaning up the results of 1 and 2.
  • a source like an encyclopedia typically includes an article classification and a title index which contain useful information related to the generation of the isa and ako tables.
  • An ako database 88 contains lists of generalizations for single words and is used in step 56.
  • the ako database is generated in a manner similar to the generation of the word isa table.
  • a name-isa database 90 contains generalizations for recognized proper names like countries, companies, and famous people and is used in step 62.
  • the name isa database is generated in a manner similar to the generation of the word isa table.
  • the pattern-based rules mentioned before (which assign person/place/organization type general classes to names) can be used to expedite the process.
  • scores are generated (92) for each unique sentence element contained in the index file.
  • the score is inversely proportional to the number of times the sentence element appears in the index file.
  • the score also reflects the part of speech and the confidence in reference resolution.
  • the score is stored in a score file 94.
  • the score file contains a set of pairs of the type, for example, "walk -> 7.86", “Clinton -> 15.76".
  • the numbers are computed based on the frequency of the given term, e.g., as -log_2 (frequency).
  • the frequency is either computed based on the index file by counting the number of occurrences of each term in the index file or based on a large reference corpus (such as the Cob corpus frequencies from CELEX). The latter is particularly useful when the data to be indexed is small and its frequencies are not statistically significant.
  • the score file may then be manually modified to assign higher values to domain-specific terms or lower values to optional modifiers.
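A sketch of the score computation (92) under the -log_2(frequency) formula mentioned above; the example entries are illustrative and the corpus-based frequency variant is omitted.

```python
import math
from collections import Counter

def build_score_file(index_entries):
    """Compute a score for each unique element as -log2(frequency), where frequency is
    the element's share of all index entries; rarer elements therefore score higher."""
    counts = Counter(element for element, _sentence_id in index_entries)
    total = sum(counts.values())
    return {element: -math.log2(count / total) for element, count in counts.items()}

entries = [("walk", 1), ("walk", 7), ("clinton", 7), ("the", 1), ("the", 2), ("the", 7)]
scores = build_score_file(entries)
print(scores["clinton"] > scores["walk"] > scores["the"])   # True: rarer terms score higher
```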
  • the index file is in the form of a set of pairs of the type "walk -> 132459", "Clinton -> 345512" etc.
  • the numbers are unique sentence ID numbers.
  • Sentence He was the one of the brothers of the chic Peter.
  • the run time process (100) receives questions posed by a user and uses the index file and the score file to identify sentences that may answer the questions.
  • the run time process has two main parts. One part is the analysis of the questions 101 to produce a question file 104. The second part is the matching of information 103 in the question file with information in the index file to identify sentences that are likely to provide answers to the questions.
  • each word in a question is processed using modules similar to those used in the indexing phase.
  • a stems module 102 uses the skip-word database 82 to pass over certain words and uses the stem database 84 to determine stems of each word and records them in the question file 104.
  • a q-ref module identifies (106) potential references between the current question and antecedent elements of other questions. The identification is done in a manner similar to step 64 in figure 2, using a short-term buffer 105. The antecedents are recorded in the question file 104. No generalizations, synonym generation, etc. are performed at run time. It is important that such steps not be performed at run time to avoid double matching.
  • the matching part of the run time process searches in the index file for each element in the question file 108. If an element in the question file is found in the index file, an answer score for the sentences associated with that element is updated by adding the score 108 associated with that element in the score file 94.
  • the sentences are sorted 112 according to their respective total scores.
  • a decision 114 is made about which sentences to display as the answer to the question.
  • One approach is to display sentences that are at the top of the scoring. By comparing the sentences having the highest scores with the maximum possible sentence score, a determination can be made about the quality of the answer represented by each of those sentences.
  • a typical noun in English is worth about 10 to 15 points.
  • a sentence that has a score within 10 points of the maximum possible score would represent a high quality answer. If the answer quality of the highest scoring sentence is high, that sentence could be displayed alone. If several of the top-scoring sentences have close scores, they can all be displayed.
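The matching and answer-selection steps can be sketched as below. The index is shown as a plain element-to-sentence-ID mapping and the 10-point margin follows the rule of thumb above; both are simplifications of the structures described earlier.

```python
def answer_question(question_elements, index, score_file, margin=10.0):
    """Match question-file elements against the index (element -> sentence IDs),
    accumulate per-sentence scores, and keep the sentences that come within `margin`
    points of the maximum possible score (the sum of the question elements' scores)."""
    sentence_scores = {}
    max_possible = 0.0
    for element in question_elements:
        score = score_file.get(element, 0.0)
        max_possible += score
        for sentence_id in index.get(element, []):
            sentence_scores[sentence_id] = sentence_scores.get(sentence_id, 0.0) + score
    ranked = sorted(sentence_scores.items(), key=lambda item: item[1], reverse=True)
    return [sid for sid, total in ranked if total >= max_possible - margin]

# Illustrative index and score file for sentences 4 and 5 from the China example.
index = {"china": [4, 5], "country": [4, 5], "produce": [5], "rice": [5], "crop": [5]}
scores = {"country": 8.0, "produce": 11.0, "crop": 12.5}
print(answer_question(["country", "produce", "crop"], index, scores))   # -> [5]
```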
  • a bias can be applied to cause the display of high-scoring sentences from different free-text sources in lieu of multiple sentences from a single source.
  • the display algorithm can be configured to display one or two neighbor sentences around the sentence or the whole paragraph around the sentence.
  • the user could be told that no good answer was found and a few pointers to relevant documents could be displayed.
  • the answer system is useful in a wide variety of contexts, including the Internet, local networks, or a single workstation.
  • the indexing can be done at a central location and the run time process can handle questions received from browsers at a central server.
  • the invention offers a number of advantages.
  • the quality of the answers is high because the indexing of implicit references significantly improves the chances that useful responsive sentences will be found.
  • the invention is useful in a wide variety of contexts, among them on-line searching using the World Wide Web.
  • portions of text other than sentences can form the basis of the indexing and scoring.
  • other kinds of references and generalizations could be used as the basis for the indexing phase.
  • Indexing need not be captured in a single central index file and score file but can be distributed among multiple index files and score files. At run time, questions may be answered by a scoring system that operates on all of the files.
  • references can easily be integrated into the existing framework once the necessary knowledge is built. Also, once grammatical relations are determined with satisfactory accuracy, they can be incorporated into the existing indexing-retrieval framework without major changes to the architecture.
  • a person could use any voice-based communication device, such as a wireless or wired phone, to connect (200) to a web site, and using voice, navigate the web site and obtain information by issuing voice commands and questions.
  • the user could utter a natural language query (202).
  • the website would include speech recognition software that would permit voice-to-text transcription of the query (204).
  • the text would then be passed to the query response engine described earlier (206).
  • the query response engine generates one or more responses (208) in text form and passes them to a speech synthesizer (210).
  • the speech synthesizer converts the text to speech (212) that is played back over the phone to the user (214).
  • a person could get answers to questions from a wireless communication device.
  • After the device is connected to a web site (220), the user types a query on the device, either using a keyboard or a stylus on a touch-sensitive screen.
  • the query is passed to the query response engine described earlier (224).
  • the query response engine generates responses (226) that are in the form of answers to the query rather than in the form of links to places where the answer may be available.
  • the answers are then returned to the wireless device (228). For example, the question entered by the user might be "What was one of Einstein's achievements?" One response might be the answer "Einstein developed the theory of relativity.”
  • advertising delivered to a web user can be personalized based on questions that the user asks.
  • the user enters a query (230).
  • the text of the query is passed to the engine (232) and a response is generated (234).
  • the engine also uses the response to generate ad TAGS (238). For example, if the question is "what are the ski conditions like in Aspen?" the engine will generate TAGS that relate to commerce for Aspen, such as "Ski Rental, Cabin Rental, Dining in Aspen, Flying to Aspen".
  • TAGS are then used to extract appropriate ads from ad inventory. The ads are presented to the user along with the answer to the question asked.
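The TAG-based ad selection can be sketched as follows. The TAG rules and the ad inventory are invented placeholders, since the text does not specify how TAGS are derived from the question or matched to inventory.

```python
# Illustrative TAG rules and inventory; a simple keyword table stands in for the
# engine's analysis of the question and answer.
TAG_RULES = {
    "aspen": ["Ski Rental", "Cabin Rental", "Dining in Aspen", "Flying to Aspen"],
    "ski": ["Ski Rental"],
}
AD_INVENTORY = {
    "Ski Rental": "20% off weekend ski rentals",
    "Dining in Aspen": "Table reservations in downtown Aspen",
}

def ads_for_question(question):
    """Generate ad TAGS from the question text and pull matching ads from inventory."""
    words = {w.strip("?,.!").lower() for w in question.split()}
    tags = []
    for keyword, keyword_tags in TAG_RULES.items():
        if keyword in words:
            tags.extend(keyword_tags)
    tags = list(dict.fromkeys(tags))          # drop duplicate TAGS, keep order
    return [(tag, AD_INVENTORY[tag]) for tag in tags if tag in AD_INVENTORY]

print(ads_for_question("What are the ski conditions like in Aspen?"))
# [('Ski Rental', '20% off weekend ski rentals'),
#  ('Dining in Aspen', 'Table reservations in downtown Aspen')]
```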
  • a user browses the web (250). Based on a web page being displayed to the user in the course of the browsing, a set of information, for example, words that appear on the web page, is derived for use with the query response engine (252). The information is applied to the query response engine as if it were a query (254). The results of the query are used to generate ad TAGS (256) and the TAGS are used to extract appropriate ads from ad inventory (258) as before. The ads are presented to the user as part of the page being read, or a later page (260).
  • TAGS are chosen 270 to relate to articles or information, for example, about Aspen, such as "latest Aspen news, Traveling in Aspen, Events in Aspen", etc. These TAGS are then used to extract appropriate information from information sources (280) and construct the next page that is shown to the user (282). The resulting personalized page is then presented to the user along with the answer to the question asked (284).
  • Another application develops user profile and preference information based on questions asked. A user types (or asks) questions (290).
  • the query response engine processes the questions (292) and generates a log (294) that includes the following information, for example: identity of the user (name, IP address, etc.); the questions asked and answers to the questions; any un-answered questions; and the click stream reflecting what the user did after the answers were delivered to him.
  • the log is analyzed (296) to generate profile TAGs.
  • the profile TAGS are used to update a user profile (298). The next time the user logs in, or enters another query, the updated profile is used to personalize web pages and advertising for the user (300).
  • another application facilitates online shopping by answering questions about products in the shopping cart.
  • the user adds items to a shopping cart on a commercial web site (310).
  • the items are used as the basis for generating question dialog boxes for each of the items (312).
  • Each dialog box hovers above the shopping cart.
  • the user may then ask a question about an item (314).
  • the query response engine answers the question without forcing the user to leave the shopping cart (316).
  • the answer is shown in the hovering dialog box.
  • the user completes the transaction based on the answer (318).
  • the user can navigate, e.g., a product catalogue by asking questions.
  • the query response engine processes the request (332) and generates a list or an item that meets the criteria (334). The user then clicks on the items to buy (336).
  • a natural language query 10 has been typed into a subject field 12 of an electronic mail (email) message 14.
  • the message also includes message field 16 (which is shown empty but could contain other information), a "to" address 17, for example, an Internet address of an email message server, and a "from" address 18, which typically identifies the source of the message.
  • By a natural language query we mean any arbitrary clause or sentence that is expressed in a human language, such as English, in a manner that is natural to native users of the language.
  • the query need not comply with any special syntax or vocabulary to accommodate to the needs of a computer program.
  • the query need not be expressed as a complete sentence or as a question. It could be expressed, for example, as a command, or an order, or any kind of request for action or service.
  • the email message containing the query is sent through the Internet 20 to the "to" address which is the location of an email message server 22.
  • the email message server automatically receives the messages, automatically strips out the subject information, in this case the natural language query, and the "from" address, and automatically passes the query to a natural language query engine 24.
  • the natural language query engine 24 applies the query to a body of information 25 that may contain a response or responses to the query.
  • the resulting response or responses 28 are passed back through software 26 which forms a new email message 40 (such as the one shown in figure 14) using the response 28 in the message field 42 and the received "from" address as the "to" address 44 of the new message.
  • the new message is forwarded to the email message server, which sends it through the Internet back to the source of the query to which it responds.
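The email question-answering loop just described can be sketched with Python's standard email module. The query_engine argument stands in for the natural language query engine, and the addresses and example question are placeholders.

```python
from email import message_from_string
from email.message import EmailMessage

def answer_email(raw_message, query_engine):
    """Strip the natural language query from the subject field of an incoming message,
    apply it to a query engine, and build the reply addressed to the original sender."""
    incoming = message_from_string(raw_message)
    query = incoming["Subject"]               # the query sits in the subject field
    sender = incoming["From"]
    response = query_engine(query)

    reply = EmailMessage()
    reply["To"] = sender                      # the "from" address becomes the "to" address
    reply["From"] = incoming["To"]
    reply["Subject"] = "Re: " + query
    reply.set_content(response)               # the responsive information goes in the body
    return reply

raw = "From: user@example.com\nTo: ask@example.net\nSubject: What is the capital of Georgia?\n\n"
print(answer_email(raw, lambda q: "Atlanta is the capital of Georgia.").as_string())
```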
  • queries could be expressed as commands, orders, or any requests for action or service.
  • a message sent to an email message server 48 of an on-line grocery site 50 could contain an order: "For Customer #123, please deliver 1 pound of cherries by 5PM tonight".
  • the online site can receive this message, use a natural language query engine 52 to apply it to stored commercial information 54, for example, a database of address and credit card information associated with the user, and then apply the information in the database to software 56 that builds an order, charges the customer, arranges for the delivery of the goods, and e-mails back a confirmation of the delivery.
  • Another example is "please ship B2305 printer to customer 123 via overnight delivery, and charge it to account 123".
  • the natural language query engine will interpret the instruction, go into a product database and find B2305, place the order in the name of customer 123 for overnight delivery, and charge account 123.
  • Another example is, "sell 100 shares of GE in account 123".
  • the natural language engine will interpret the order, translate it into transactions, and e-mail back a confirmation.
  • Any natural language query engine could be used to respond to the queries.
  • One suitable engine is described above.
  • the query could be identified by other means than positioning it in the subject field.
  • the query could be written in the message.
  • the query could be distinguished from other text by predefined markers. For example the query could be preceded and followed by the string ++**. Or the query could be placed on the first line of the message.
  • the response to the query could be returned other than in an email message, for example, by FAX or by posting on a website.
  • the email message could contain credit card or other charge information and the user could be charged for the response service.
  • the message field could contain instructions about how to return the response, for example, including a FAX phone number.
  • the natural language messages can be sent and received over a wired or wireless network or a point-to-point connection.
  • a user could speak the natural language message into a cellular or mobile phone.
  • the message can be recognized and converted into text to be applied to the natural language query engine.

Abstract

Natural language query systems and applications are described. A pre-processing routine (30) is applied and comprises the parsing of text to identify sentence boundaries (32); the marking of sentences (34); the assignment of a new sentence number (36); the identification of titles and headings (38); and the marking of titles and headings (40), to produce a pre-processed text file (42).

Description

ANSWERING NATURAL LANGUAGE QUERIES
This invention relates to answering natural language queries.
Such a query may be a question phrased in English, for example, and the response may be sentences of text that belong to a body of free-text sources and are responsive to the question.
One way to find the relevant sentences of text uses an index that is created in advance. In a simple example, an index could include the words "Georgia" and "capital" and associated pointers to sentences that include those words. At run time, if a question asks about the capital of Georgia, the index can be used to find responsive sentences.
In the invention, implicit references (also known as anaphora in linguistic literature) are inferred from the words of segments of text. In response to a query, one or more segments are identified as relevant to the query based at least in part on the implicit references. Using implicit references improves the quality of the responses to the query.
A characteristic of natural language text is the use of words (references) that refer to other words or to concepts that appear in or are implied by other parts of the text (antecedents). For example, in the sentence "He is best known for his theory of relativity," the word "he" (the reference) may refer to the name "Albert Einstein" (the antecedent) that appears in another sentence: "Albert Einstein was one of the greatest scientists of all time." Two broad categorizations of references may be useful. One broad categorization is based on the positions of the antecedent and the reference. The other broad categorization is based on the type of reference. The first categorization is based on three distinct contexts in which the reference may be used in a question answering setting.
References of the kind that are based on position may occur in at least three different contexts in a question answering setting:
1. Between two sentences
S1: Albert Einstein was one of the greatest scientists of all time.
S2: He is best known for his theory of relativity.
In sentence S2, the word "he" refers to "Albert Einstein" in sentence S1.
2. Between two questions
Q1: When was Einstein born? Q2: Did he invent relativity?
In the question Q2, the word "he" refers to "Einstein" as used in question Q1.
3. Between a question and a sentence
S3: Einstein is best known for his theory of relativity. Q3: Who invented relativity?
The word "who" in question Q3 refers to "Einstein" as used in sentence S3.
All three types of references may have to be resolved to match a question with responsive sentences in the free-text sources. Consider the following example:
S4: China is a huge country in eastern Asia. S5: It produces more cotton, rice, and wheat than any other country. Q4: What is the scientific classification of rice?
Q5: Which countries produce this crop?
The phrase "this crop" in Q5 refers to "rice" in Q4. The word "it" in S5 refers to "China" in S4. The phrase "which countries" in Q5 refers to "it" in S5 and in turn to "China" in S4. A resolution of the three types of references would show that S5 is a potential answer to Q5.
The second categorization is based on the type of phrase used for the reference and includes the following five groups (examples included):
Pronoun: China is a big country. It is in Asia.
Definite Noun Phrase: China is in Asia. This country produces rice.
Name variant: International Business Machines versus IBM, Great Britain versus Britain versus England.
Indirect references (in an article about China): The climate is usually mild. (Here "the climate" does not refer to China directly, but it is known that it is the Chinese climate that is under discussion. Indirect references rely on "has-a" relationships.)
Null references: "Cisco acquired Cerent Corp. for 7.5 billion dollars. The negotiations lasted 3.5 months." The second sentence is responsive to the question "How long did Cisco negotiate with Cerent?" even though it does not contain any words that refer to Cisco or Cerent.
Implementations of the invention take advantage of references to identify sentences in free-text sources that may answer natural language questions.
One goal of some implementations of the invention is to shorten the processing delay in receiving an answer after a question is posed at run time. In general, shifting processing steps from run time to a preliminary indexing phase can reduce the delay.
One way to shift processing to the indexing phase relates to the need to match synonyms that appear in a question and in a sentence. For example, the words "produces" and "raise" in the following question and sentence must be matched at run time:
S6: China produces more corn than any other country. Q6: In which countries do people raise corn?
By generating and storing synonyms for the word "produces" during the indexing phase, rather than generating synonyms for "raise" at run time, the processing delay in responding to questions can be reduced, an advantage which justifies the additional storage space required for the larger index.
Another opportunity for shifting processing to the indexing phase relates to the fact that there tend to be many more specializations of a concept than generalizations of a concept. For example, there are more than 250 countries (including China) that represent specializations of the concept "country" but relatively few generalizations for the concept "China". So, in the following example, overall processing time is saved by generating and storing the generalizations of "China", the concept that appears in the sentence, during the indexing phase, rather than generating the larger number of specializations of "countries", the concept that appears in the question:
S7: China produces more corn than any other country. Q7: In which countries do people raise corn?
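A minimal sketch of this index-time expansion follows, assuming small illustrative synonym and generalization tables; a real system would build these tables from lexical databases and corpus analysis rather than hard-coding them.

```python
# Illustrative synonym and generalization tables (assumptions, not the patent's data).
SYNONYMS = {"produces": ["raise", "grow"]}
GENERALIZATIONS = {"china": ["country", "nation"]}

def index_entries_for_sentence(sentence_id, words):
    """Expand each word with its synonyms and generalizations at indexing time, so that
    at run time the question words only need to be looked up, never expanded."""
    entries = set()
    for word in words:
        key = word.lower()
        entries.add((key, sentence_id))
        for synonym in SYNONYMS.get(key, []):
            entries.add((synonym, sentence_id))
        for general in GENERALIZATIONS.get(key, []):
            entries.add((general, sentence_id))
    return entries

# S7: "China produces more corn than any other country."
print(index_entries_for_sentence(7, ["China", "produces", "corn", "country"]))
# Includes ('raise', 7) and ('country', 7), so the question matches without run-time expansion.
```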
Thus, in general, in one aspect, the invention features receiving segments of text (e.g., sentences), each segment having elements. Implicit references are inferred from the elements of the segments. A query is received, and, in response to the query, one or more segments are identified as relevant to the query (e.g., by scoring) based at least in part on the implicit references. Implementations of the invention may include one or more of the following features. The implicit references may be inferred prior to the time when the query is received and may be stored as entries in a searchable index, each entry including a pointer to one of the segments from which the reference was inferred. One or more of the identified segments may be selected for presentation to a user.
The implicit references may be generalizations of the elements contained in the segments. The references may be name variations that refer to elements, or indirect references to elements, or definite noun phrase references to elements, or pronouns, or null references. The antecedents of the indirect references may be found in titles or headings. The antecedent can be a concept recognized by a pattern of characters (e.g., a date) and it can be referred to by a generalization (e.g., "when" or "at that time").
The scoring may be based on a matching of elements in a question with elements in an index file that contains information about the inferred implicit references. The selection of segments to be displayed may be based on scoring. As few as one segment from a given source need be displayed. The step of responding to the query may include identifying implicit references between the query and a previous query.
In general, in another aspect, the features of the invention include receiving a question in the form of natural language speech from a source, automatically recognizing the speech, feeding the recognized speech to a natural language query engine operating on information accessible through a web site to generate a text answer to the question, synthesizing a spoken response to the question based on the answer, and playing the spoken response back to the source of the question.
Implementations of the invention may include one or more of the following features. Commands, also, may be received in the form of natural language speech from a source, the commands may be determined using natural language processing, and the speech may be acted upon by controlling navigation in the web site. The natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and, in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
In general, in another aspect, the invention features speaking a natural language question to a web site and receiving a natural language spoken answer to the question back from the website.
In general, in another aspect, features of the invention include receiving a natural language question from a user, deriving information about the user from the question, selecting promotional information based on the information about the user, generating an answer to the question using a natural language query engine, and returning the answer to the user together with the promotional information. Implementations of the invention may include one or more of the following features. The information about the user may include preferences suggested by the question. The promotional information may include advertising. Advertising tags may be generated for use in selecting the promotional information. The natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
In general, in another aspect, the invention features receiving page information contained in a web page that is being viewed by a user, deriving user information about the user from the page information using a natural language query engine, selecting promotional information based on the user information, and displaying the promotional information to the user while the user is viewing the web page.
In general, in another aspect, the invention features receiving a natural language question from a user, deriving information about the user from the question, selecting available information that is related to the question, generating an answer to the question using a natural language query engine, and returning the answer to the user together with the available information.
Implementations of the invention may include one or more of the following features. The information about the user may include preferences suggested by the question. The information related to the question may include articles. Advertising tags may be generated for use in selecting the information. The natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
In general, in one aspect, the invention features entering a natural language question on a wireless personal electronic device, generating a natural language answer to the question using a natural language query engine, and presenting the natural language answer to a user.
Implementations of the invention may include one or more of the following features. The question may be entered through a keyboard. The answer may be presented through an interface of the device. The natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
In general, in one aspect, the invention features presenting to a user a web page that comprises a shopping cart, displaying on the shopping cart an identification of an item for purchase, providing a mechanism that enables the user, without leaving the shopping cart, to ask a natural language question, and providing an answer to the natural language question.
Implementations of the invention may include one or more of the following features. The mechanism may include a dialog box displayed over the shopping cart web page. The dialog box may be displayed in association with the identification of the item for purchase. The answer may include information about the item for purchase. The user may take a step in response to the answer and complete a transaction on the shopping cart web page. The answer may be provided from a natural language query engine. The mechanism may be provided by an agent that watches items being added to the shopping cart.
In general, in another aspect, the invention features receiving natural language questions about products, selecting product information using a natural language query engine based on the questions, and serving the product information from a web server to a user.
Implementations of the invention may include one or more of the following features. The user may respond to the web server by buying one of the products. The questions may identify desired characteristics of the products.
In the invention, the natural language search is done by entering the search in a field of an email message and sending it to an email address. In general, in another aspect, the invention features a method that includes (a) receiving from a user, over an electronic network, an electronic mail message containing a written natural language query, (b) identifying the written natural language query in the electronic mail message, (c) using a natural language query engine to apply the natural language query to a body of information, to generate information responsive to the query, and (d) taking an action based on the responsive information.
Implementations of the invention may include one or more of the following features: Taking an action may include sending an electronic mail message containing the responsive information to the user over the publicly accessible electronic network, or filling an order for a product or service. The query may include a question to be answered and the responsive information may include an answer to the question. The query may include a request for an action or service and taking an action may include providing the action or service in response to the request. The body of information may include textual content or commercial information. The natural language query may be identified based on an indicator arranged by the user. The indicator may include a position of the query within the electronic mail message, e.g., within a subject field of the electronic mail message. The electronic mail message may be directed to an address that is prearranged to automatically receive and respond to the natural language query.
In general, in another aspect, the invention features apparatus that includes (a) an electronic mail message server connected to receive electronic mail messages containing natural language queries from an electronic network and to send electronic mail messages containing responses to the natural language queries to the electronic network, (b) software adapted to identify written natural language queries in electronic mail messages received at the server and to provide information responsive to the natural language queries as electronic mail messages to the server for delivery, and (c) a natural language query engine connected to receive the natural language queries from the electronic mail message server and to apply them to a body of information to obtain the responsive information.
In general, in another aspect, the invention features (a) automatically stripping natural language queries from electronic mail messages, (b) automatically applying the queries to a natural language search engine to generate responsive information, and (c) automatically taking action based on the responsive information. Other advantages and features will become apparent from the following description and from the claims.
Some implementations of the invention are illustrated in the block diagrams of figures 1 through 15 and described below.
In some implementations of the invention, free-text sources are prepared for use in answering questions by first applying a preprocessing routine 30, shown in figure 1. First, the text is parsed (32) to identify sentence boundaries. For purposes of parsing, the sentence boundaries are identified using patterns that are manually created, although other approaches could be used. In the manual approach, patterns are described that identify potential end-of-sentence markers (period, question mark, exclamation point, paragraph break, title break, sometimes quotes, etc.). Then certain alternative uses are eliminated. In the case of a period, for example, the eliminated alternatives include periods that appear at the end of abbreviations and in acronyms and floating-point numbers.
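For illustration only, a minimal sketch of such pattern-based sentence splitting might look like the following Python fragment; the abbreviation list and the regular expression are simplified assumptions, not the patterns actually used.

import re

# Hypothetical, abbreviated list of tokens whose trailing period does not end a sentence.
ABBREVIATIONS = {"Mr.", "Mrs.", "Dr.", "U.S.", "etc.", "e.g.", "i.e."}

def split_sentences(text):
    # Candidate boundaries: '.', '?' or '!' followed by whitespace and a capital letter.
    boundaries = [m.end() for m in re.finditer(r"[.?!](?=\s+[A-Z])", text)]
    sentences, start = [], 0
    for pos in boundaries:
        last_token = text[start:pos].split()[-1]
        # Eliminate periods that end abbreviations or sit inside floating-point numbers.
        if last_token in ABBREVIATIONS or re.search(r"\d\.\d", last_token):
            continue
        sentences.append(text[start:pos].strip())
        start = pos
    if text[start:].strip():
        sentences.append(text[start:].strip())
    return sentences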
Each sentence is marked (34) with a single new line in one implementation, or using markup tags in another implementation. A unique sentence number is assigned (36) to each sentence. The numbers are unique within a single index file. Therefore all sentences (whether or not from different documents) that go into a single index get unique numbers. In another implementation, part of the unique numbers (e.g., the first six digits) are used to encode the article the sentence is coming from and another part (the last four digits) is used to identify the sentence number within the article.
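Assuming the ten-digit scheme just described (six digits for the article, four for the sentence within the article), the composite identifier could be computed as follows; the function names are illustrative only.

def make_sentence_id(article_no, sentence_no):
    # First six digits encode the article, last four the sentence within the article.
    assert article_no < 10**6 and sentence_no < 10**4
    return article_no * 10**4 + sentence_no

def split_sentence_id(sentence_id):
    return divmod(sentence_id, 10**4)   # (article_no, sentence_no)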
Titles and other headings are identified (38) in a manner that depends on the text format. Some formats (like HTML) use markup elements that identify the titles. Plain text sources require pattern-based analysis. Titles also are marked (40) to identify some possible indirect references. An example would be the sentence "The economy is booming." found in an article entitled "China". Notice that unlike in the case of the sentence "This country produces rice", none of the words in the sentence "The economy is booming" directly refers to China. However, from the title one can infer that the subject is the Chinese economy. One way to index the title information is with respect to every sentence in its scope. Another, more complicated way to use the title information is to build and make use of a knowledge base of part-whole and group-member relationships. Such relationships would include, for example, the fact that a typical country has a population, an economy, a president, an army, etc. Then, when any of these words (e.g., economy and president) is used by itself in a sentence, the indirect reference to the country can be identified. The output of the pre-processing is a pre-processed text file 42. In one implementation, the pre-processed text file has text of one sentence on each line preceded by a sentence number and a tab character and followed by the text of the applicable titles. In another implementation, a special markup language (similar to HTML or XML) may use specific tags to mark sentences, paragraphs, sections, documents and titles in the text. The sentence tags contain id numbers as part of the tag, such as: <s id=124345>. This format is more flexible and may easily be extended to include other tags. A user may be permitted to specify references not identified by the indexer by explicitly inserting them into the pre-processed text file using specific tags.
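Purely as an illustration of the two output formats just described (the identifiers and tag attributes are made up), a fragment of the pre-processed text file might look like this:

0001230005 [tab] The economy is booming. [tab] China          (tab-separated variant)

<title>China</title>
<s id=1230005>The economy is booming.</s>                     (markup variant)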
In one implementation, all the text sources that go into a single application (e.g., a whole encyclopedia) can be converted into one large pre-processed text file before being passed to the indexer. Another implementation could use separate pre-processed files for each article and let the indexer read the information from multiple files. As shown in figure 2, after pre-processing, the indexing phase 50 begins. The purpose of the indexing phase is to use the pre-processed text file 42 to build an index file (table) 70 that lists foreseen ways in which a question may refer to an element of a sentence. A single index file is built for all sources in the system.
By "element" of a sentence, we mean a concept referred to in the sentence. The concept may be referred to using an ordinary word (walk, cake), a name (Bill Clinton), a multi-word phrase (stand up, put on), a pronoun (he referring to Bill Clinton), a definite noun phrase (the country referring to China), an indirect reference (the economy, indirectly referring to China), or a null reference (there is no word referring to the concept but the concept is still referenced). For example, if the text contained the sentences: "The war started in 1939. Germans invaded Poland.", the answer to the question "When did Germans invade Poland?" would be 1939 even though there is no word in the second sentence directly or indirectly referring to this time phrase. Time phrases and place phrases often affect more than a single sentence, therefore creating null-references.)
Each entry in the index file 70 includes a pointer to the sentence to which the questions may refer based on that entry. Conceptually the index file relates the elements found in a sentence to a unique identifier for that sentence. The index file can be thought of as a two-column table in which one column contains sentence ID numbers and the other column contains the words, concepts, referents, generalizations, and synonyms (collectively referred to as the elements of the sentence). For efficient scoring later, the following three components are created for the index: the string buffer, the sentence id buffer, and the hash table.
The string buffer contains the null terminated strings of each element found in the source text. The strings are placed in the buffer consecutively in no particular order.
The sentence id buffer contains sentence ID arrays for each element. The array for a particular element can be identified by giving the start position in the buffer and length of the array. The arrays are placed in the buffer consecutively in no particular order.
The hash table is a standard hash table that contains key-value pairs and that enables a fast search of a given key. The key of each entry is a pointer to the string buffer. The value of each entry consists of a pointer to the sentence ID buffer and an array length.
This structure enables finding the sentences that contain a particular element as follows: First, the element is searched in the hash table by comparing it with certain keys in the hash table. For each comparison, the string in the string buffer that the key points to is retrieved and compared to the element. When a match is found, the corresponding sentence ID buffer pointer and array length are read. Finally, the specified array is located in the sentence ID buffer. In the indexing phase, each sentence in the preprocessed text file 42 is read and passed to several modules. Each module reads the words of a sentence and, based on them, recognizes certain types of constructions and references that represent foreseen ways in which a question may refer to an element of the sentence. When a module identifies one of those ways, it writes an entry into the index file 70 together with the unique identifying number of the sentence from which it was generated.
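A minimal in-memory sketch of this index and the lookup it supports is shown below; the Python dictionary and lists stand in for the hash table, string buffer, and sentence ID buffer, which is a simplification of the buffer-based layout described above.

class Index:
    def __init__(self):
        self.postings = {}   # element string -> list of sentence IDs

    def add(self, element, sentence_id):
        self.postings.setdefault(element, []).append(sentence_id)

    def sentences_for(self, element):
        # All sentences whose index entries contain the element.
        return self.postings.get(element, [])

index = Index()
index.add("economy", 1230005)
index.add("China", 1230005)            # indirect reference inferred from the title
print(index.sentences_for("China"))    # -> [1230005]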
In one implementation, there are eight indexing modules called: words, title, word-isa, ako, patterns, names, name-isa, and references.
As shown in figure 2, the words module identifies (50) each word in the current sentence and adds it to the index file. The words module also derives the stem of each word, using a table of English word and word stem pairs, such as flowers->flower and went->go. The words module adds the stem to the index file for use, for example, in matching morphological variants of words that may appear in a question.
In the title module, the words in each heading in the set of headings that apply to a sentence are added (52) to the index file with pointers to the sentence. In one implementation only one heading (the document title) is used for every sentence in a document. In another implementation, the pre-processed text file contains tags for titles of various levels (document, chapter, section, subsection, for example) and sectioning tags that identify the scope of each title. Using these tags, the indexer is able to determine, for each sentence, the document, the chapter, and the section that it is in. The indexer combines all titles that apply and indexes them with the sentence. Title indexing may not be appropriate for every source. For example, encyclopedia sources have well defined titles that are usually appropriate and helpful whereas newspapers have partial sentences for titles, which are usually not appropriate for the above method.
The word-isa module generates (54) the generalizations (mentioned earlier) for words that appear in the sentence and for words that appear in headings. For example, if the word "red" appears in a sentence, the generalization word "color" is placed in the index file so that a question that asks "what color" will be matched to the sentence that includes "red". For this purpose, a database table with the same name (word-isa) and containing two columns is used. The first column contains words and the second column contains possible generalizations. For example, "red->color" would be one of the entries in that table.
The ako module identifies generalizations (56) of generalizations already generated. For example, if the ako module encounters the generalization "color" that had been generated at step 54, the ako module adds the further generalization "attribute" to the index file.
The patterns module reviews (58) the text for special patterns of dates and numbers and adds the generalizations to the index file. For example, if the date January 23rd, 1998, appears in the text, the patterns module would add the generalizations "date", "time", and "when" so that when a question asks "when did this event happen?" it matches the date. Another example that appears frequently in an encyclopedia is the lifespan information in biographies. The first sentence of a typical biography starts "John Doe (1932-1987) ... ". A pattern that recognizes the lifespan structure allows matching of questions of the type "When was John Doe born?"
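A hypothetical fragment of such a patterns module, covering only one date format and the lifespan pattern, might look like this; the regular expressions and generalization labels are illustrative assumptions.

import re

DATE_RE = re.compile(r"\b(January|February|March|April|May|June|July|August|"
                     r"September|October|November|December)\s+\d{1,2}(?:st|nd|rd|th)?,\s+\d{4}\b")
LIFESPAN_RE = re.compile(r"\(\s*\d{4}\s*-\s*\d{4}\s*\)")   # e.g. "(1932-1987)"

def pattern_generalizations(sentence):
    generalizations = set()
    if DATE_RE.search(sentence):
        generalizations.update({"date", "time", "when"})
    if LIFESPAN_RE.search(sentence):
        generalizations.update({"born", "died", "when", "date"})
    return generalizations

print(pattern_generalizations("John Doe (1932-1987) was a painter."))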
The names module identifies proper names (60) in the text and generates and indexes the names accordingly. For example, the names module uses two methods to identify names in a sentence. The first method uses a list of precompiled names and name variations to match those in the sentence. For example "United States" and its variations "U.S.A." and "United States of America" would be in the name list and each would be recognized as a name when seen in the sentence. The second method uses patterns that identify names and name types. Proper names are marked with capitalization and can be isolated easily. (There are some difficulties associated with sentence beginnings and small function words like "of" that are not capitalized in the middle of a name.)
The names-isa module generates generalizations (62) for proper names and adds them to the index file. For example, if the name "Clinton" is found in the text, the word "President" could be added to the index file. Other examples are "China -> country" and "Albert Einstein -> physicist". The name generalization makes use of a knowledge-based and a pattern-based method as well. If a name is found in the database, generalizations of the name are located in the name-isa table. This is a table just like the word-isa table that lists one or more generalizations for a given name. For names that were not found in the table but that were detected using capitalization, for example, the rough generalization of the name (person, place, organization) can be inferred using internal and external clues. An example of an internal clue would be the appearance of the word "Corp." as part of the name, which would imply that it is a company. Similarly "Mount" or "City" implies a place and "Mr." or "John" implies a person. External clues are words outside the name that provide information. For example, if a name is preceded by "in" one can deduce that it is a place or possibly an organization but not a person.
The references module identifies (64) implicit references in the form of pronouns, definite noun phrases and name variants. The module could also handle indirect references and null references. (Handling indirect references would require a "has-a" table similar to the "is-a" table discussed below. The "has-a" table would represent relationships of the kind: "A country has an economy, a president, an army, etc.") Antecedents of references are determined using a short-term buffer 80. The antecedents are added to the index file, and the short-term buffer 80 is updated with the potential references for the new names in the sentence, in the following way: The short-term buffer contains a set of pairs of the type "he -> Bill Clinton", "country -> China", i.e. a potential reference pointing to a potential antecedent. The sentence is scanned for potential reference words or phrases. For each one discovered, the set of the potential antecedents is added to the index file. After each sentence is processed, the short-term buffer is cleared and updated with new potential references. The new potential antecedents are the names and other concepts used in the current sentence (either explicitly mentioned or implicitly referred to). The new potential references are all generalizations, name variants and pronouns compatible with these antecedents.
The short-term buffer 80 has two fields. One field contains antecedent words, the other contains potential references associated with each of the antecedents. As each element of a sentence is encountered, potential references are stored in the short-term buffer (e.g., when "China" is encountered in a sentence, the potential references "country", "nation", and "it" are added to the potential references field). When a referring word or phrase such as a pronoun or a definite noun phrase (e.g., "the country") is encountered in a later portion of the text, the word is looked up in the short-term buffer to identify the possible antecedents.
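One possible sketch of the short-term buffer at work during indexing is given below; the small table of potential references is invented for the example, whereas the described system would fill it with the generalizations, name variants and pronouns compatible with each antecedent.

# Hypothetical table: antecedent -> words and phrases that may later refer to it.
POTENTIAL_REFS = {
    "China":        ["country", "nation", "it"],
    "Bill Clinton": ["he", "president"],
}

def index_with_references(sentences, postings):
    buffer = {}                                   # potential reference -> set of antecedents
    for sentence_id, elements in sentences:
        for element in elements:
            postings.setdefault(element, []).append(sentence_id)
            # If the element is a known potential reference, index its antecedents too.
            for antecedent in buffer.get(element, ()):
                postings.setdefault(antecedent, []).append(sentence_id)
        # Clear and refill the buffer from the names and concepts of this sentence.
        buffer = {}
        for element in elements:
            for reference in POTENTIAL_REFS.get(element, ()):
                buffer.setdefault(reference, set()).add(element)

postings = {}
index_with_references([(1, ["China", "rice"]), (2, ["country", "produce"])], postings)
print(postings["China"])    # -> [1, 2]: sentence 2 is indexed under "China" via "country"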
The modules that are active during the indexing phase use the following lexical databases to perform their functions.
A skip-word database 82 lists function words such as prepositions, conjunctions, and auxiliary words that are not to be added to the index file. The skip-word database is used in step 50 of figure 2.
A stem database 84, also used in step 50, contains a list of the stems of most English words. The word stems can be found in sources such as the CELEX lexical database available from the Linguistic Data Consortium of the University of Pennsylvania. Other sources for this material include on-line dictionaries. Alternatively, one could use a rules-based approach by analyzing a word and stripping its suffixes.
A word-isa database 86, used in step 54, contains generalizations of single words that can potentially match question words. The word-isa table is generated using three approaches: 1. Consulting online lexical ("word-related") databases like WordNet or thesauri like Roget's. 2. Writing data-mining programs that process large corpora (text sources) or the actual source to be indexed as a way to discover such relations. 3. Manually editing and cleaning up the results of 1 and 2. A source like an encyclopedia typically includes an article classification and a title index which contain useful information related to the generation of the isa and ako tables.
An ako database 88 contains lists of generalizations for single words and is used in step 56. The ako database is generated in a manner similar to the generation of the word-isa table.
A name-isa database 90 contains generalizations for recognized proper names like countries, companies, and famous people and is used in step 62. The name-isa database is generated in a manner similar to the generation of the word-isa table. The pattern-based rules mentioned before (which assign person/place/organization type general classes to names) can be used to expedite the process.
After the indexing phase, scores are generated (92) for each unique sentence element contained in the index file. The score is inversely proportional to the number of times the sentence element appears in the index file.
The score also reflects the part of speech and the confidence in reference resolution. The score is stored in a score file 94.
In one implementation of the scoring algorithm, the score file contains a set of pairs of the type, for example, "walk -> 7.86", "Clinton -> 15.76". The numbers are computed based on the frequency of the given term, e.g., as -log_2 (frequency). The frequency is either computed based on the index file by counting the number of occurrences of each term in the index file or based on a large reference corpus (such as the Cob corpus frequencies from CELEX). The latter is particularly useful when the data to be indexed is small and its frequencies are not statistically significant. The score file may then be manually modified to assign higher values to domain-specific terms or lower values to optional modifiers.
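A minimal sketch of this frequency-based scoring, using the index-file counts and the base-2 logarithm mentioned above, might look like the following; the normalization by the total number of entries is an assumption.

import math

def build_score_file(postings):
    # postings: element -> list of sentence IDs taken from the index file.
    total = sum(len(ids) for ids in postings.values())
    return {element: -math.log2(len(ids) / total) for element, ids in postings.items()}

scores = build_score_file({"walk": [1, 2, 3], "Clinton": [7]})
print(scores["Clinton"] > scores["walk"])   # rarer terms receive higher scores -> True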
The index file is in the form of a set of pairs of the type "walk -> 132459", "Clinton -> 345512", etc. The numbers are unique sentence ID numbers. Here is an example sentence and some sample terms that are inserted into the index file for this sentence:
Sentence: He was one of the brothers of the apostle Peter.
Plain word: apostle
Stem: brother (from brothers)
Generalization: person (from apostle via word-isa file)
Indirect reference: Andrew ("he" refers to Saint Andrew in the previous sentence).
Once the indexing phase is completed, the index file and score file can be used as the basis for answering questions.
As shown in figure 3, the run time process (100) receives questions posed by a user and uses the index file and the score file to identify sentences that may answer the questions. The run time process has two main parts. One part is the analysis of the questions 101 to produce a question file 104. The second part is the matching of information 103 in the question file with information in the index file to identify sentences that are likely to provide answers to the questions.
In the first part of the run time process, each word in a question is processed using modules similar to those used in the indexing phase.
A stems module 102 uses the skip-word database 82 to pass over certain words and uses the stem database 84 to determine stems of each word and records them in the question file 104.
A q-ref module identifies (106) potential references between the current question and antecedent elements of other questions. The identification is done in a manner similar to step 64 in figure 2, using a short-term buffer 105. The antecedents are recorded in the question file 104. No generalizations, synonym generation, etc. are performed at run time. It is important that such steps not be performed at run time to avoid double matching.
The matching part of the run time process searches in the index file for each element in the question file 108. If an element in the question file is found in the index file, an answer score for the sentences associated with that element is updated by adding the score 108 associated with that element in the score file 94.
After all elements in the question have been matched, the sentences are sorted 112 according to their respective total scores.
Using the sorted sentence list, a decision 114 is made about which sentences to display as the answer to the question.
One approach is to display sentences that are at the top of the scoring. By comparing the sentences having the highest scores with the maximum possible sentence score, a determination can be made about the quality of the answer represented by each of those sentences. A typical noun in English is worth about 10 to 15 points. A sentence that has a score within 10 points of the maximum possible score would represent a high quality answer. If the answer quality of the highest scoring sentence is high, that sentence could be displayed alone. If several of the top-scoring sentences have close scores, they can all be displayed. A bias can be applied to cause the display of high-scoring sentences from different free-text sources in lieu of multiple sentences from a single source. If the highest scoring sentence is not a high quality answer, or if the question is a "how" or "why" question, additional context around the sentence can be displayed to aid the user's interpretation. For this purpose, the display algorithm can be configured to display one or two neighbor sentences around the sentence or the whole paragraph around the sentence.
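Putting the pieces together under the assumptions of the earlier sketches (a postings dictionary and a score file keyed by element), the matching, sorting, and display decision might be sketched as follows; the 10-point margin mirrors the heuristic described above and is not a fixed requirement.

def answer(question_elements, postings, scores, margin=10.0):
    totals, max_possible = {}, 0.0
    for element in question_elements:
        weight = scores.get(element, 0.0)
        max_possible += weight
        for sentence_id in postings.get(element, ()):
            totals[sentence_id] = totals.get(sentence_id, 0.0) + weight
    ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
    # Display only sentences whose score is within `margin` points of the
    # maximum possible score; otherwise report that no good answer was found.
    good = [sid for sid, score in ranked if max_possible - score <= margin]
    return good or None

print(answer(["Einstein", "relativity"],
             {"Einstein": [10, 11], "relativity": [11]},
             {"Einstein": 14.0, "relativity": 12.0}))   # -> [11]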
If the highest scoring sentence is a low quality answer, the user could be told that no good answer was found and a few pointers to relevant documents could be displayed.
The answer system is useful in a wide variety of contexts, including the Internet, local networks, or a single workstation. In the case of the Internet, the indexing can be done at a central location and the run time process can handle questions received from browsers at a central server.
The invention offers a number of advantages. In particular, the quality of the answers is high because the indexing of implicit references significantly improves the chances that useful responsive sentences will be found. The invention is useful in a wide variety of contexts, among them on-line searching using the World Wide Web.
Other implementations are within the scope of the claims.
For example, portions of text other than sentences, such as paragraphs or sections or chapters can form the basis of the indexing and scoring. Also, other kinds of references and generalizations could be used as the basis for the indexing phase.
Questions need not be phrased as complete English sentences.
Languages other than English can be used.
Indexing need not be captured in a single central index file and score file but can be distributed among multiple index files and o score files. At run time, questions may be answered by a scoring system that operates on all of the files.
Other types of references (null, indirect) can easily be integrated into the existing framework once the necessary knowledge is built. Also, once grammatical relations are determined with satisfactory accuracy, they can be incorporated into the existing indexing-retrieval framework without major changes to the architecture.
A variety of other applications may make use of the query response techniques discussed above. Among the applications are the following:
1. As shown in figure 4, a person could use any voice-based communication device, such as a wireless or wired phone, to connect (200) to a web site, and using voice, navigate the web site and obtain information by issuing voice commands and questions. The user could utter a natural language query (202). The website would include speech recognition software that would permit voice-to-text transcription of the query (204). The text would then be passed to the query response engine described earlier (206). The query response engine generates one or more responses (208) in text form and passes them to a speech synthesizer (210). The speech synthesizer converts the text to speech (212) that is played back over the phone to the user (214).
2. As shown in figure 5, a person could get answers to questions from a wireless communication device. After the device is connected to a web site (220), the user types a query on the device, either using a keyboard or a stylus on a touch-sensitive screen. At the website, the query is passed to the query response engine described earlier (224). The query response engine generates responses (226) that are in the form of answers to the query rather than in the form of links to places where the answer may be available. The answers are then returned to the wireless device (228). For example, the question entered by the user might be "What was one of Einstein's achievements?" One response might be the answer "Einstein developed the theory of relativity."
3. As shown in figure 6, advertising delivered to a web user can be personalized based on questions that the user asks. The user enters a query (230). As before, the text of the query is passed to the engine (232) and a response is generated (234). The engine also uses the response to generate ad TAGS (238). For example, if the question is "what are the ski conditions like in Aspen?" the engine will generate TAGS that relate to commerce for Aspen, such as "Ski Rental, Cabin Rental, Dining in Aspen, Flying to Aspen". These TAGS are then used to extract appropriate ads from ad inventory. The ads are presented to the user along with the answer to the question asked.
4. As shown in figure 7, a user browses the web (250). Based on a web page being displayed to the user in the course of the browsing, a set of information, for example, words that appear on the web page, is derived for use with the query response engine (252). The information is applied to the query response engine as if it were a query (254). The results of the query are used to generate ad TAGS (256) and the TAGS are used to extract appropriate ads from ad inventory (258) as before. The ads are presented to the user as part of the page being read, or a later page (260).
5. As shown in figure 8, another application is similar to the one described in figure 6, except that the TAGS are chosen (270) to relate to articles or information, for example, about Aspen, such as "latest Aspen news, Traveling in Aspen, Events in Aspen", etc. These TAGS are then used to extract appropriate information from information sources (280) and construct the next page that is shown to the user (282). The resulting personalized page is then presented to the user along with the answer to the question asked (284).
6. As shown in figure 9, another application develops user profile and preference information based on questions asked. A user types (or asks) questions (290). The query response engine processes the questions (292) and generates a log (294) that includes the following information, for example: identity of the user (name, IP address, etc.); the questions asked and answers to the questions; any un-answered questions; and the click stream reflecting what the user did after the answers were delivered to him. The log is analyzed (296) to generate profile TAGS. The profile TAGS are used to update a user profile (298). The next time the user logs in, or enters another query, the updated profile is used to personalize web pages and advertising for the user (300).
7. As shown in figure 10, another application facilitates online shopping by answering questions about products in the shopping cart. The user adds items to a shopping cart on a commercial web site (310). The items are used as the basis for generating question dialog boxes for each of the items (312). Each dialog box hovers above the shopping cart. The user may then ask a question about an item (314). The query response engine answers the question without forcing the user to leave the shopping cart (316). The answer is shown in the hovering dialog box. The user completes the transaction based on the answer (318).
8. As shown in figure 11, in another application, the user can navigate, e.g., a product catalogue by asking questions. The user (in plain language) asks the system to show products that meet specific criteria (e.g., "show me the cheapest PC", "the fastest car", etc.) (330). The query response engine processes the request (332) and generates a list or an item that meets the criteria (334). The user then clicks on the items to buy (336).
9. As shown in figure 12, in another application, corporate and departmental reports can be generated based on a question log. Users ask questions to interact with a reporting system (338). The query response engine processes the questions (340) and updates the question log (342). The log includes the following information: identification of the user (name, IP address, etc.); questions asked, and answers to the questions; un-answered questions; and the click stream indicating what the user did after the answer was given. The log is analyzed by a report generator to generate pre-defined reports (344). The reports use question subjects, frequencies, users, whether they were answered or not, and other information contained in the questions to surmise information that is relevant to various departments such as product development, support, finance, human resources, etc. The reports are interpreted by humans to make business decisions about new products, product design, financing, internal processes, control, and other aspects of the business. The reports are based on context and intelligence extracted from the questions the users ask.
Another application, shown in Figures 13 through 15 is useful in an email context.
As shown in figure 13, a natural language query 10 has been typed into a subject field 12 of an electronic mail (email) message 14. The message also includes a message field 16 (which is shown empty but could contain other information), a "to" address 17, for example, an Internet address of an email message server, and a "from" address 18, which typically identifies the source of the message.
By a natural language query we mean any arbitrary clause or sentence that is expressed in a human language, such as English, in a manner that is natural to native users of the language. Among other things, the query need not comply with any special syntax or vocabulary to accommodate to the needs of a computer program. The query need not be expressed as a complete sentence or as a question. It could be expressed, for example, as a command, or an order, or any kind of request for action or service.
As shown in figure 13, the email message containing the query is sent through the Internet 20 to the "to" address which is the location of an email message server 22. The email message server automatically receives the messages, automatically strips out the subject information, in this case the natural language query, and the "from" address, and automatically passes the query to a natural language query engine 24. The natural language query engine 24 applies the query to a body of information 25 that may contain a response or responses to the query. The resulting response or responses 28 are passed back through software 26 which forms a new email message 40 (such as the one shown in figure 14) using the response 28 in the message field 42 and the received "from" address as the "to" address 44 of the new message. The new message is forwarded to the email message server, which sends it through the Internet back to the source of the query to which it responds.
As mentioned earlier, the queries could be expressed as commands, orders, or any requests for action or service.
For example, a message sent to an email message server 48 of an on-line grocery site 50 could contain an order: "For Customer #123, please deliver 1 pound of cherries by 5PM tonight". The online site can receive this message, use a natural language query engine 52 to apply it to stored commercial information 54, for example, a database of address and credit card information associated with the user, and then apply the information in the database to software 56 that builds an order, charges the customer, arranges for the delivery of the goods, and e-mails back a confirmation of the delivery.
Another example is, "Send a message to my staff to attend a 5PM meeting in the office, tomorrow!" which could be handled in a similar way.
Another example is "please ship B2305 printer to customer 123 via overnight delivery, and charge it to account 123". The natural language query engine will interpret the instruction, go into a product database and find B2305, place the order in the name of customer 123 for overnight delivery, and charge account 123.
Another example is, "sell 100 shares of GE in account 123". The natural language engine will interpret the order, translate it into transactions, and e-mail back a confirmation.
Any natural language query engine could be used to respond to the queries. One suitable engine is described above.
For example, the query could be identified by means other than positioning it in the subject field. The query could be written in the message. Within the message field, the query could be distinguished from other text by predefined markers. For example, the query could be preceded and followed by the string ++**. Or the query could be placed on the first line of the message.
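A hypothetical sketch of such query identification, trying the ++** markers first and then falling back to the subject field and the first body line (the ordering and the field handling are assumptions, not a prescribed implementation):

import email
import re

def extract_query(raw_message):
    msg = email.message_from_string(raw_message)
    body = msg.get_payload() if not msg.is_multipart() else ""
    # 1. A query delimited by the predefined ++** markers inside the message body.
    marked = re.search(r"\+\+\*\*(.+?)\+\+\*\*", body, re.DOTALL)
    if marked:
        return marked.group(1).strip()
    # 2. Otherwise use the subject field, then the first line of the body.
    if msg.get("Subject"):
        return msg["Subject"].strip()
    return body.strip().splitlines()[0] if body.strip() else None

raw = ("From: user@example.com\n"
       "To: ask@example.com\n"
       "Subject: What is the capital of Georgia?\n\n")
print(extract_query(raw))   # -> "What is the capital of Georgia?"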
The response to the query could be returned other than in an email message, for example, by FAX or by posting on a website.
Other instructions could be provided from the source to the email message server with respect to the query. For example, the email message could contain credit card or other charge information and the user could be charged for the response service. Or the message field could contain instructions about how to return the response, for example, including a FAX phone number.
The natural language messages can be sent and received over a wired or wireless network or a point-to-point connection. A user could speak the natural language message into a cellular or mobile phone. At the phone, or centrally, the message can be recognized and converted into text to be applied to the natural language query engine.
Other implementations are within the scope of the claims.

Claims

1. A method comprising receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
2. The method of claim 1 in which the implicit references are inferred prior to the time when the query is received.
3. The method of claim 1 in which the implicit references are stored as entries in a searchable index, each entry including a pointer to one of the segments from which the reference was inferred.
4. The method of claim 1 in which the segments comprise sentences.
5. The method of claim 1 further comprising selecting one or more of the identified segments for presentation to a user.
6. The method of claim 5 in which the segments that are presented to the user are determined based on scoring.
7. The method of claim 5 in which only one segment is displayed.
8. The method of claim 5 in which only a single segment from a given source is displayed.
9. The method of claim 1 in which the implicit references comprise generalizations of specializations represented by the elements contained in the segments.
10. The method of claim 1 in which the implicit reference comprises a name variation that refers to an element.
11. The method of claim 1 in which the implicit reference comprises an indirect reference to an element.
12. The method of claim 1 in which the implicit reference comprises a pronoun.
13. The method of claim 1 in which the implicit reference comprises a definite noun phrase.
14. The method of claim 1 in which the implicit reference comprises a null reference.
15. The method of claim 1 in which antecedents of the implicit reference are found in a title.
16. The method of claim 1 in which antecedents of the implicit reference are found in a heading.
17. The method of claim 1 in which the implicit reference comprises a generalization and the element to which the implicit reference refers comprises a specialization.
18. The method of claim 1 in which an antecedent may be a pattern of characters and the pattern is referred to by a generalization.
19. The method of claim 1 in which the implicit reference comprises a proper name and the element to which the reference refers comprises a noun or noun phrase.
20. The method of claim 1 in which the implicit reference comprises a pronoun, definite noun phrase, or name variant.
21. The method of claim 1 in which the identifying comprises scoring.
22. The method of claim 21 in which the scoring is based on a matching of elements in a question with elements in an index file that contains information about the inferred implicit references.
23. The method of claim 1 in which responding to the query includes identifying implicit references between the query and a previous query.
24. A method comprising receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, and storing an index file based on the implicit references for later use in responding to a query.
25. A method comprising receiving a query, and in response to the query, identifying one or more segments of text as relevant to the query based at least in part on implicit references that were pre-stored in an index file.
26. A method comprising receiving a question in the form of natural language speech from a source, automatically recognizing the speech, feeding the recognized speech to a natural language query engine operating on information accessible through a web site to generate a text answer to the question, synthesizing a spoken response to the question based on the answer, and playing the spoken response back to the source of the question.
27. The method of claim 26 also including receiving commands in the form of natural language speech from a source, automatically recognizing the speech, determining the commands using natural language processing, and acting on the speech by controlling navigation in the web site.
28. The method of claim 26 in which the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
29. A method comprising speaking a natural language question to a web site, and receiving a natural language spoken answer to the question back from the website.
30. A method comprising receiving a question in the form of natural language from a source, feeding the question to a natural language query engine operating on information accessible through a web site to generate a text answer to the question, and returning the text answer to the source of the question.
31. A method comprising receiving a natural language question from a user, deriving information about the user from the question, selecting promotional information based on the information about the user, generating an answer to the question using a natural language query engine, and returning the answer to the user together with the promotional information.
32. The method of claim 31 in which the information about the user includes preferences suggested by the question.
33. The method of claim 31 in which the promotional information comprises advertising.
34. The method of claim 31 also including generating advertising tags for use in selecting the promotional information.
35. The method of claim 31 in which the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
36. A method comprising receiving page information contained in a web page that is being viewed by a user, deriving user information about the user from the page information using a natural language query engine, selecting promotional information based on the user information, displaying the promotional information to the user while the user is viewing the web page.
37. The method of claim 36 in which the information about the user includes preferences suggested by the web page that is being viewed.
38. The method of claim 36 in which the promotional information comprises advertising.
39. The method of claim 36 also including generating advertising tags for use in selecting the promotional information.
40. The method of claim 36 in which the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
41. A method comprising receiving a question or command from a user, deriving information about the user from the question or command, selecting promotional information based on the information about the user, generating an answer to the question or command using a natural language query engine, and returning the answer to the user together with the promotional information.
42. A method comprising receiving a natural language question from a user, deriving information about the user from the question, selecting available information that is related to the question, generating an answer to the question using a natural language query engine, and returning the answer to the user together with the available information.
43. The method of claim 42 in which the information about the user includes preferences suggested by the question.
44. The method of claim 42 in which the information related to the question comprises articles.
45. The method of claim 42 also including generating advertising tags for use in selecting the information.
46. The method of claim 42 in which the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
47. A method comprising receiving natural language questions from a user, using a natural language query engine to provide natural language answers to the questions, enabling the user to take steps through a user interface after questions are received or answers are provided, generating a log of information about the questions, answers, and steps of the user, in real-time or in batch mode, updating a user profile based on the log, using natural language processing to extract meaning from the questions asked, answers provided and actions taken by the user based on each question and answer pair, and selecting content for web pages that are served to the user based on the user profile.
48. A method comprising receiving natural language questions from a user, using a natural language query engine to provide natural language answers to the questions, enabling the user to take steps through a user interface after questions are received or answers are provided, generating a log of information about the questions, answers, and steps of the user, and analyzing the log using natural language processing to generate reports.
49. The method of claim 48 in which the log is analyzed with respect to subjects of the questions, frequencies, time stamps, users, and whether answers were given.
50. The method of claim 48 in which the log is analyzed with respect to subject of the questions, frequencies, users, answers or lack thereof, and reports are summarized by categories specified by the users. Natural language processing is used to map categories to types of data in the log. For example, if the user requests a summary of all questions (or answers) that relate to "system crashing", natural language processing identifies all questions (or answers) that contain phrases or words that are synonymous to "system crashing".
51. The method of claim 48 in which the log is analyzed with respect to the meanings that can be extracted from questions, the frequency of questions, question types, time of the questions, and users.
52. A method comprising receiving a natural language command from a user, deriving information about the user from the command, selecting available information that is related to the command, generating an answer to the command using a natural language query engine, and returning the answer to the user together with the available information.
53. A method comprising receiving natural language commands from a user, using a natural language query engine to provide natural language answers to the commands, enabling the user to take steps through a user interface after commands are received or answers are provided, generating a log of information about the questions, answers, and steps of the user, in real-time or in batch mode, updating a user profile based on the log, using natural language processing to extract meaning from the commands, answers provided and actions taken by the user based on each command and answer pair, and selecting content for web pages that are served to the user based on the user profile.
54. A method comprising receiving natural language commands from a user, using a natural language query engine to provide natural language answers to the commands, enabling the user to take steps through a user interface after commands are received or answers are provided, generating a log of information about the commands, answers, and steps of the user, analyzing the log using natural language processing to generate reports.
55. A method comprising entering a natural language question or command on a wireless personal electronic device, generating a natural language answer to the question or command using a natural language query engine, and presenting the natural language answer to a user.
56. The method of claim 55 in which the question or command is entered through a keyboard.
57. The method of claim 55 in which the answer is presented through an interface of the device.
58. The method of claim 55 in which the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
59. A method comprising presenting to a user a web page that comprises a shopping cart, displaying on the shopping cart an identification of an item for purchase, providing a mechanism that enables the user, without leaving the shopping cart, to enter a natural language question or command, and providing an answer to the natural language question or command.
60. The method of claim 59 in which the mechanism comprises a dialog box displayed over the shopping cart web page.
61. The method of claim 59 in which the dialog box is displayed in association with the identification of the item for purchase.
62. The method of claim 59 in which the answer comprises information about the item for purchase.
63. The method of claim 59 in which the user takes a step in response to the answer and completes a transaction on the shopping cart web page.
64. The method of claim 59 in which the answer is provided from a natural language query engine.
65. The method of claim 59 in which the mechanism is provided by an agent that watches items being added to the shopping cart.
66. A method comprising receiving natural language questions or commands about products, selecting product information using a natural language query engine based on the questions or commands, and serving the product information from a web server to a user.
67. The method of claim 66 in which the user responds to the web server by buying one of the products.
68. The method of claim 66 in which the questions identify desired characteristics of the products.
69. The method of claim 66 in which the natural language query engine operates by receiving segments of text, each segment having elements, inferring implicit references from the elements of the segments, receiving a query, and in response to the query, identifying one or more segments as relevant to the query based at least in part on the implicit references.
70. A method comprising receiving from a user, over an electronic network, an electronic mail message containing a written natural language query, identifying the written natural language query in the electronic mail message, using a natural language query engine to apply the natural language query to a body of information, to generate information responsive to the query, and taking an action based on the responsive information.
71. The method of claim 70 in which taking an action includes sending an electronic mail message containing the responsive information to the user over the publicly accessible electronic network.
72. The method of claim 70 in which the query includes a question to be answered and the responsive information includes an answer to the question.
73. The method of claim 70 in which the query includes a request for an action or service and taking an action includes providing the action or service in response to the request.
74. The method of claim 70 in which the body of information includes textual content.
75. The method of claim 70 in which the body of information includes commercial information.
76. The method of claim 70 in which the action includes filling an order for a product or service.
77. The method of claim 70 in which the natural language query is identified based on an indicator arranged by the user.
78. The method of claim 77 in which the indicator comprises a position of the query within the electronic mail message.
79. The method of claim 78 in which the position is within a subject field of the electronic mail message.
80. The method of claim 70 in which the electronic mail message is directed to an address that is prearranged to automatically receive and respond to the natural language query.
81. A method comprising
receiving from a user, over a publicly accessible electronic network, an electronic mail message containing a written natural language query in a subject field of the message, the message being received at an address that is prearranged to automatically receive and respond to the natural language query, automatically obtaining the natural language query from the subject field, using a natural language query engine to apply the natural language query to a body of information, to generate information responsive to the query, and taking an action based on the responsive information.
82. Apparatus comprising an electronic mail message server connected to receive electronic mail messages containing natural language queries from an electronic network and to send electronic mail messages containing responses to the natural language queries to the electronic network, software adapted to identify written natural language queries in electronic mail messages received at the server and to provide information responsive to the natural language queries as electronic mail messages to the server for delivery, and a natural language query engine connected to receive the natural language queries from the electronic mail message server and to apply them to a body of information to obtain the responsive information.
83. A method comprising automatically stripping natural language queries from electronic mail messages, automatically applying the queries to a natural language search engine to generate responsive information, and automatically taking action based on the responsive information.
84. The method of claim 70 in which the written natural language query is derived by recognition of a spoken natural language query.
85. A method comprising receiving from a user, over an electronic network, a spoken electronic mail message containing a natural language query, identifying the natural language query in the spoken electronic mail message, using a natural language query engine to apply the natural language query to a body of information, to generate information responsive to the query, and taking an action based on the responsive information.
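For illustration only, the following Python sketch shows one way the e-mail workflow of claims 70 through 83 could be arranged: the natural language query is taken from the subject field of an incoming message addressed to a prearranged mailbox, handed to a query engine, and the responsive information is returned to the sender as a reply message. The helper answer_query(), the address ask@example.com, and the toy corpus are hypothetical placeholders; a deployed system would substitute a real natural language query engine and deliver the reply through an SMTP server. A spoken message, as in claims 84 and 85, would first pass through speech recognition to yield the written query before the same steps apply.

import email
from email.message import EmailMessage

def answer_query(question, corpus):
    # Stand-in for the natural language query engine: return the first segment
    # of the corpus that shares at least one word with the question.
    q_words = set(question.lower().split())
    for segment in corpus:
        if q_words & set(segment.lower().split()):
            return segment
    return "No responsive information was found."

def handle_incoming_mail(raw_message, corpus):
    incoming = email.message_from_string(raw_message)
    query = incoming["Subject"]            # the query is carried in the subject field
    answer = answer_query(query, corpus)   # apply the query to the body of information

    reply = EmailMessage()                 # action taken: mail the answer back
    reply["To"] = incoming["From"]
    reply["From"] = "ask@example.com"      # hypothetical prearranged answering address
    reply["Subject"] = "Re: " + query
    reply.set_content(answer)
    # Actual delivery would hand the reply to a mail server, e.g.
    # smtplib.SMTP("localhost").send_message(reply)
    return reply

raw = ("From: user@example.com\n"
       "To: ask@example.com\n"
       "Subject: What products can be returned for a refund?\n"
       "\n")
corpus = ["Unopened products may be returned within 30 days for a full refund."]
print(handle_incoming_mail(raw, corpus))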
PCT/US2001/015711 2000-05-17 2001-05-16 Answering natural language queries WO2001088662A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001261631A AU2001261631A1 (en) 2000-05-17 2001-05-16 Answering natural language queries

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
US57218600A 2000-05-17 2000-05-17
US57302500A 2000-05-17 2000-05-17
US57302400A 2000-05-17 2000-05-17
US57227600A 2000-05-17 2000-05-17
US57302300A 2000-05-17 2000-05-17
US09/573,024 2000-05-17
US09/573,023 2000-05-17
US09/572,770 2000-05-17
US09/572,186 2000-05-17
US09/572,276 2000-05-17
US09/573,025 2000-05-17
US09/572,770 US6957213B1 (en) 2000-05-17 2000-05-17 Method of utilizing implicit references to answer a query
US63761600A 2000-08-11 2000-08-11
US09/637,616 2000-08-11

Publications (2)

Publication Number Publication Date
WO2001088662A2 true WO2001088662A2 (en) 2001-11-22
WO2001088662A3 WO2001088662A3 (en) 2002-03-28

Family

ID=27569845

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/015711 WO2001088662A2 (en) 2000-05-17 2001-05-16 Answering natural language queries

Country Status (2)

Country Link
AU (1) AU2001261631A1 (en)
WO (1) WO2001088662A2 (en)

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5535382A (en) * 1989-07-31 1996-07-09 Ricoh Company, Ltd. Document retrieval system involving ranking of documents in accordance with a degree to which the documents fulfill a retrieval condition corresponding to a user entry
US5321833A (en) * 1990-08-29 1994-06-14 Gte Laboratories Incorporated Adaptive ranking system for information retrieval
US5848399A (en) * 1993-11-30 1998-12-08 Burke; Raymond R. Computer system for allowing a consumer to purchase packaged goods at home
US5812865A (en) * 1993-12-03 1998-09-22 Xerox Corporation Specifying and establishing communication data paths between particular media devices in multiple media device computing systems based on context of a user or users
US5694546A (en) * 1994-05-31 1997-12-02 Reisman; Richard R. System for automatic unattended electronic information transport between a server and a client by a vendor provided transport software with a manifest list
US5794050A (en) * 1995-01-04 1998-08-11 Intelligent Text Processing, Inc. Natural language understanding system
US5826269A (en) * 1995-06-21 1998-10-20 Microsoft Corporation Electronic mail interface for a network server
US5873076A (en) * 1995-09-15 1999-02-16 Infonautics Corporation Architecture for processing search queries, retrieving documents identified thereby, and method for using same
US5742816A (en) * 1995-09-15 1998-04-21 Infonautics Corporation Method and apparatus for identifying textual documents and multi-media files corresponding to a search topic
US5966695A (en) * 1995-10-17 1999-10-12 Citibank, N.A. Sales and marketing support system using a graphical query prospect database
US5948054A (en) * 1996-02-27 1999-09-07 Sun Microsystems, Inc. Method and system for facilitating the exchange of information between human users in a networked computer system
US5901287A (en) * 1996-04-01 1999-05-04 The Sabre Group Inc. Information aggregation and synthesization system
US5913215A (en) * 1996-04-09 1999-06-15 Seymour I. Rubinstein Browse by prompted keyword phrases with an improved method for obtaining an initial document set
US5995921A (en) * 1996-04-23 1999-11-30 International Business Machines Corporation Natural language help interface
US6052710A (en) * 1996-06-28 2000-04-18 Microsoft Corporation System and method for making function calls over a distributed network
US6021403A (en) * 1996-07-19 2000-02-01 Microsoft Corporation Intelligent user assistance facility
US5873080A (en) * 1996-09-20 1999-02-16 International Business Machines Corporation Using multiple search engines to search multimedia data
US5897622A (en) * 1996-10-16 1999-04-27 Microsoft Corporation Electronic shopping and merchandising system
US5884302A (en) * 1996-12-02 1999-03-16 Ho; Chi Fai System and method to answer a question
US6061057A (en) * 1997-03-10 2000-05-09 Quickbuy Inc. Network commercial system using visual link objects
US5893091A (en) * 1997-04-11 1999-04-06 Immediata Corporation Multicasting with key words
US5987454A (en) * 1997-06-09 1999-11-16 Hobbs; Allen Method and apparatus for selectively augmenting retrieved text, numbers, maps, charts, still pictures and/or graphics, moving pictures and/or graphics and audio information from a network resource
US6016476A (en) * 1997-08-11 2000-01-18 International Business Machines Corporation Portable information and transaction processing system and method utilizing biometric authorization and digital certificate security
US5974412A (en) * 1997-09-24 1999-10-26 Sapient Health Network Intelligent query system for automatically indexing information in a database and automatically categorizing users
US6006225A (en) * 1998-06-15 1999-12-21 Amazon.Com Refining search queries by the suggestion of correlated terms from prior searches

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6957213B1 (en) 2000-05-17 2005-10-18 Inquira, Inc. Method of utilizing implicit references to answer a query
US20140074826A1 (en) * 2004-04-07 2014-03-13 Oracle Otc Subsidiary Llc Ontology for use with a system, method, and computer readable medium for retrieving information and response to a query
US9747390B2 (en) 2004-04-07 2017-08-29 Oracle Otc Subsidiary Llc Ontology for use with a system, method, and computer readable medium for retrieving information and response to a query
US8082264B2 (en) 2004-04-07 2011-12-20 Inquira, Inc. Automated scheme for identifying user intent in real-time
US8612208B2 (en) * 2004-04-07 2013-12-17 Oracle Otc Subsidiary Llc Ontology for use with a system, method, and computer readable medium for retrieving information and response to a query
US8924410B2 (en) 2004-04-07 2014-12-30 Oracle International Corporation Automated scheme for identifying user intent in real-time
US7921099B2 (en) 2006-05-10 2011-04-05 Inquira, Inc. Guided navigation system
US7672951B1 (en) 2006-05-10 2010-03-02 Inquira, Inc. Guided navigation system
US8296284B2 (en) 2006-05-10 2012-10-23 Oracle International Corp. Guided navigation system
US7668850B1 (en) 2006-05-10 2010-02-23 Inquira, Inc. Rule based navigation
US7747601B2 (en) 2006-08-14 2010-06-29 Inquira, Inc. Method and apparatus for identifying and classifying query intent
US8781813B2 (en) 2006-08-14 2014-07-15 Oracle Otc Subsidiary Llc Intent management tool for identifying concepts associated with a plurality of users' queries
US8478780B2 (en) 2006-08-14 2013-07-02 Oracle Otc Subsidiary Llc Method and apparatus for identifying and classifying query intent
US8898140B2 (en) 2006-08-14 2014-11-25 Oracle Otc Subsidiary Llc Identifying and classifying query intent
US8095476B2 (en) 2006-11-27 2012-01-10 Inquira, Inc. Automated support scheme for electronic forms
US9953265B2 (en) 2015-05-08 2018-04-24 International Business Machines Corporation Visual summary of answers from natural language question answering systems
US11049027B2 (en) 2015-05-08 2021-06-29 International Business Machines Corporation Visual summary of answers from natural language question answering systems
CN111444701A (en) * 2019-01-16 2020-07-24 阿里巴巴集团控股有限公司 Method and device for prompting inquiry
WO2020154677A1 (en) * 2019-01-24 2020-07-30 Snap Inc. Interactive informational interface
US10817317B2 (en) 2019-01-24 2020-10-27 Snap Inc. Interactive informational interface
US11321105B2 (en) 2019-01-24 2022-05-03 Snap Inc. Interactive informational interface

Also Published As

Publication number Publication date
WO2001088662A3 (en) 2002-03-28
AU2001261631A1 (en) 2001-11-26

Similar Documents

Publication Publication Date Title
US6957213B1 (en) Method of utilizing implicit references to answer a query
US9256679B2 (en) Information search method and system, information provision method and system based on user's intention
US9747390B2 (en) Ontology for use with a system, method, and computer readable medium for retrieving information and response to a query
US8977953B1 (en) Customizing information by combining pair of annotations from at least two different documents
US8346536B2 (en) System and method for multi-lingual information retrieval
US5541838A (en) Translation machine having capability of registering idioms
US6044365A (en) System for indexing and retrieving graphic and sound data
JP3695191B2 (en) Translation support apparatus and method and computer-readable recording medium
US6286000B1 (en) Light weight document matcher
US20050203900A1 (en) Associative retrieval system and associative retrieval method
JPH1173417A (en) Method for identifying text category
JP2009528636A (en) System and method for identifying related queries for languages with multiple writing systems
JP2008527509A (en) Systems, methods, software, and interfaces for multilingual information retrieval
US8000957B2 (en) English-language translation of exact interpretations of keyword queries
JPH03172966A (en) Similar document retrieving device
WO2001088662A2 (en) Answering natural language queries
JPH0484271A (en) Intra-information retrieval device
US8082240B2 (en) System for retrieving information units
JPH0962684A (en) Information retrieval method and information retrieval device, and information guiding method and information guiding device
JPH11120206A (en) Method and device for automatic determination of text genre using outward appearance feature of untagged text
JP3191762B2 (en) Document file search device and machine-readable recording medium recording program
JP2002183175A (en) Text mining method
KR102280028B1 (en) Method for managing contents based on chatbot using big-data and artificial intelligence and apparatus for the same
Berger et al. Querying tourism information systems in natural language
JP2005284776A (en) Text mining apparatus and text analysis method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US US US US US US US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US US US US US US US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP