US20130246386A1 - Identifying key phrases within documents - Google Patents
- Publication number
- US20130246386A1 (U.S. application Ser. No. 13/794,093)
- Authority
- US
- United States
- Prior art keywords
- document
- textual
- phrase
- phrases
- act
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F17/3053—
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
        - G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
          - G06F16/24—Querying
            - G06F16/245—Query processing
              - G06F16/2457—Query processing with adaptation to user needs
                - G06F16/24578—Query processing with adaptation to user needs using ranking
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
        - G06F16/90—Details of database functions independent of the retrieved data types
          - G06F16/95—Retrieval from the web
            - G06F16/951—Indexing; Web crawling techniques
- G06F17/30864—
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F40/00—Handling natural language data
        - G06F40/20—Natural language analysis
          - G06F40/205—Parsing
            - G06F40/216—Parsing using statistical methods
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F40/00—Handling natural language data
        - G06F40/20—Natural language analysis
          - G06F40/258—Heading extraction; Automatic titling; Numbering
Definitions
- the present invention extends to methods, systems, and computer program products for identifying key phrases in documents.
- a document is accessed.
- the frequency of occurrence of a plurality of different textual phrases within the document is calculated.
- Each textual phrase includes one or more individual words of a specified language.
- a language model for the specified language is accessed.
- the language model defines expected frequencies of occurrence at least for individual words of the specified language.
- a cross-entropy value is computed for the textual phrase.
- the cross-entropy value is computed from the frequency of occurrence of the textual phrase within the document and the frequency of occurrence of the textual phrase within the specified language.
- a specified number of statistically significant textual phrases from within the document are selected based on the computed cross-entropy values.
- a key phrase data structure is populated with data representative of each of the selected specified number of statistically significant textual phrases.
- a document containing a plurality of textual phrases is accessed. For each textual phrase in the plurality of textual phrases contained in the document, a location list is generated for the textual phrase. The location list indicates one or more locations of the textual phrase within the document. For each textual phrase in the plurality of textual phrases contained in the document, a score is assigned to the textual phrase. The score is based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data.
- the plurality of textual phrases is ranked according to the assigned scores.
- a subset of the plurality of textual phrases is selected from within the document based on the rankings.
- a key phrase data structure is populated from the selected subset of the plurality of textual phrases.
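The first summarized method lends itself to a compact sketch. The following Python is an illustrative reconstruction only: the patent names the inputs (per-document phrase frequencies, a language model's expected frequencies, a top-N cutoff) but not this exact scoring formula, and all identifiers here are invented.

```python
from collections import Counter
import math

def key_phrases(words, language_model, top_n):
    """Count phrase frequencies, score each phrase by a cross-entropy-style
    'surprise' against the language model's expected frequency, and keep
    the top N. The scoring formula is an assumption, not the patent's."""
    counts = Counter(words)              # frequency of occurrence per phrase
    total = sum(counts.values())
    scored = {}
    for phrase, freq in counts.items():
        expected = language_model.get(phrase, 1e-9)  # tiny floor for unseen phrases
        observed = freq / total
        # Cross-entropy contribution: observed probability times the
        # negative log of the model's expected probability.
        scored[phrase] = observed * -math.log2(expected)
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:top_n]

# Toy language model and document: "budget" is rare in the language but
# frequent in the document, so it surfaces as the top key phrase.
model = {"the": 0.05, "budget": 0.0005, "annual": 0.001}
doc = ["the", "annual", "budget", "the", "budget", "the"]
```

A common stop word like "the" still scores nonzero here; the weighting and threshold functions described later are what prune such noise.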
- FIG. 1 illustrates an example computer architecture that facilitates identifying key phrases within documents.
- FIG. 2 illustrates a flow chart of an example method for identifying key phrases within documents.
- FIG. 3 illustrates an example computer architecture that facilitates identifying key phrases within documents.
- FIG. 4 illustrates a flow chart of an example method for identifying key phrases within documents.
- Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below.
- Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures.
- Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system.
- Computer-readable media that store computer-executable instructions are computer storage media (devices).
- Computer-readable media that carry computer-executable instructions are transmission media.
- embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
- Computer storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
- a “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices.
- a network or another communications connection can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
- program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa).
- computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system.
- computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
- Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- the computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code.
- the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like.
- the invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks.
- program modules may be located in both local and remote memory storage devices.
- an integrated data flow and extract-transform-load pipeline crawls, parses and word breaks large corpuses of documents in database tables.
- Documents can be broken into tuples.
- the tuples are of the format {phrase, frequency}.
- a phrase can include one or more words and the frequency is the frequency of occurrence within a document.
- the tuples can be sent to a heuristically based algorithm that uses statistical language models and weight+cross-entropy threshold functions to summarize the document into its “top N” most statistically significant phrases.
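As an illustration of the word-breaking step, a document might be reduced to {phrase, frequency} tuples as follows. This is a sketch under assumed details: the pipeline's actual tokenization, n-gram limits, and tuple layout are not specified here.

```python
from collections import Counter

def document_to_tuples(doc_id, text, max_words=3):
    """Break a document into (doc_id, phrase, frequency) tuples for every
    n-gram of up to max_words words; an illustrative stand-in for the
    crawl/parse/word-break stage."""
    words = text.lower().split()
    counts = Counter(
        " ".join(words[i:i + n])
        for n in range(1, max_words + 1)
        for i in range(len(words) - n + 1)
    )
    return [(doc_id, phrase, freq) for phrase, freq in counts.items()]

tuples = document_to_tuples(218, "annual budget annual budget review")
```

Each tuple carries the document ID alongside the phrase, matching the note below that a document ID can travel with each tuple.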
- tuples can also be of the format {phrase, location list}.
- the location list lists the locations of the phrase within a document.
- the tuples are sent to a Keyword Extraction Algorithm (“KEX”) to compute, potentially with a higher quality (e.g. less noisy phrases), a set of textually relevant tags.
- documents can be characterized by salient and relevant key phrases (tags).
- each tuple can also include a document ID.
- FIG. 1 illustrates an example computer architecture 100 that facilitates identifying key phrases within documents.
- computer architecture 100 includes database 101 , frequency calculation module 102 , cross-entropy calculation module 103 , phrase selector 106 , and key phrase data structure 107 .
- Each of the depicted computer systems is connected to one another over (or is part of) a network, such as, for example, a Local Area Network (“LAN”), a Wide Area Network (“WAN”), and even the Internet.
- each of the depicted components can create message related data and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), etc.) over the network.
- Database 101 can be virtually any type of database (e.g., a Structured Query Language (“SQL”) database or other relational database). As depicted, database 101 can contain one or more tables including table 109 . Each table in database 101 can include one or more rows and one or more columns used to organize data, such as, for example, documents. For example, table 109 includes a plurality of documents including documents 112 and 122 . Each document can be identified by a corresponding document ID. For example, document ID 111 can identify document 112 , document ID 121 can identify document 122 , etc.
- Frequency calculation module 102 is configured to calculate the frequency of occurrence of a textual phrase within a document. Frequency calculation module 102 can receive a document as input. From the document, frequency calculation module 102 can calculate the frequency with which one or more textual phrases occur in the document. A textual phrase can include one or more words of a specified language. Frequency calculation module 102 can output a list of phrases and corresponding frequencies for a document.
- cross-entropy calculation module 103 is configured to calculate a cross-entropy between phrases in a specified document and the same phrases in a corresponding language model.
- Cross-entropy calculation module 103 can receive a list of one or more phrases and corresponding frequencies of occurrence for a document.
- Cross-entropy calculation module 103 can also receive a statistical language model.
- the statistical language model can include a plurality of words (or phrases) of a specified language and can define an expected frequency of occurrence for each of the plurality of words (or phrases) in the language.
- Cross-entropy can measure the “amount of surprise” in the frequency of occurrence of a phrase in a specified document relative to the frequency of occurrence of the phrase in the language model. For example, a particular phrase can occur with more or less frequency in a specified document as compared to the language model.
- cross-entropy calculation module 103 can be configured to calculate the cross-entropy between the frequency of occurrence of a phrase in a specified document and the frequency of occurrence of the phrase in a language model.
- expected frequencies of occurrence represent how often a word (or phrase) generally occurs within the specific language. In other embodiments, expected frequencies of occurrence are adjusted for particular document domains, such as, for example, legal documents, medical documents, engineering documents, sports related documents, financial documents, etc.
- combiner 104 can combine one or more words from a language model into a phrase contained in a document. For example, combiner 104 can combine the words ‘annual’ and ‘budget’ into “annual budget”. Combiner 104 can also compute a representative expected frequency for a phrase from expected frequencies for individual words included in the phrase. For example, combiner 104 can compute an expected frequency for “annual budget” from an expected frequency for ‘annual’ and an expected frequency for ‘budget’. Combiner 104 can include an algorithm for inferring (e.g., interpolating, extrapolating, etc.) an expected frequency for a phrase from a plurality of frequencies for individual words.
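The combiner described above can be sketched as follows. The specific rule, multiplying per-word probabilities as if the words were independent, is an illustrative assumption; the text only says the combiner infers (interpolates, extrapolates, etc.) a phrase frequency from word frequencies.

```python
def phrase_expected_frequency(phrase, word_model, unseen=1e-9):
    """Infer an expected frequency for a multi-word phrase by multiplying
    the expected frequencies of its individual words (an independence
    assumption; not the patent's stated formula). Words absent from the
    model fall back to a small `unseen` probability."""
    prob = 1.0
    for word in phrase.split():
        prob *= word_model.get(word, unseen)
    return prob

# Expected word frequencies such as combiner 104 might draw on:
model = {"annual": 1e-4, "budget": 5e-4}
```

Under this assumption, "annual budget" gets an expected frequency of 1e-4 × 5e-4 = 5e-8; a smarter combiner could instead interpolate using observed bigram statistics.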
- Cross-entropy calculation module 103 can output a list of one or more phrases and corresponding cross-entropies.
- Phrase selection module 106 is configured to select phrases for inclusion in a key phrase data structure for a document. Phrase selection module 106 can receive a list of one or more phrases and corresponding cross-entropies. Phrase selection module 106 can also receive one or more selection functions. Phrase selection module 106 can apply the selection functions to the cross-entropies to select a subset of phrases for inclusion in the key phrase data structure for the document. Selection functions can include weighting functions and/or threshold functions. Selected phrases can be copied to the key phrase data structure for the document.
- FIG. 2 illustrates a flow chart of an example method 200 for identifying key phrases within documents. Method 200 will be described with respect to the components and data in computer architecture 100 .
- Method 200 includes an act of accessing a document (act 201 ).
- frequency calculation module 102 can access document 112 .
- Method 200 includes an act of calculating the frequency of occurrence of a plurality of different textual phrases within the document, each textual phrase including one or more individual words of a specified language (act 202 ).
- frequency calculation module 102 can calculate the frequency of occurrence of a plurality of textual phrases, such as, for example, phrases 131 , 132 , and 133 , within document 112 .
- Each textual phrase in document 112 can include one or more individual words of a specified language (e.g., English, Japanese, Chinese, languages of India, etc.).
- a frequency for a phrase can represent how often a phrase occurs in document 112 .
- frequency 141 represents how often phrase 131 occurs in document 112
- frequency 142 represents how often phrase 132 occurs in document 112
- frequency 143 represents how often phrase 133 occurs in document 112 , etc.
- Frequency calculation module 102 can calculate frequencies for other additional phrases within document 112 .
- Frequency calculation module 102 can send the phrases and corresponding frequencies to cross-entropy calculation module 103 .
- Cross-entropy calculation module 103 can receive the phrases and corresponding frequencies from frequency calculation module 102 .
- Method 200 includes an act of accessing a language model for the specified language, the language model defining expected frequencies of occurrence at least for individual words of the specified language (act 203 ).
- cross-entropy calculation module 103 can access statistical language model 159 .
- Statistical language model 159 can define expected frequencies of occurrence for words of the language of document 112 .
- word 161 has expected frequency 171
- word 162 has expected frequency 172 , etc.
- method 200 includes an act of computing a cross-entropy value for the textual phrase, the cross-entropy value computed from the frequency of occurrence of the textual phrase within the document and the frequency of occurrence of the textual phrase within the specified language (act 204 ).
- cross-entropy calculation module 103 can compute a cross-entropy value for phrases from document 112 , such as, for example, phrases 131 , 132 , 133 , etc.
- Cross-entropy for phrases 131 , 132 , 133 , etc. can be computed from frequencies 141 , 142 , 143 , etc., and expected frequencies 171 , 172 , etc.
- when a phrase occurs more frequently in the document than the language model predicts, cross-entropy can be increased.
- when a phrase occurs less frequently in the document than the language model predicts, cross-entropy can be decreased.
- combiner 104 can compute an expected frequency for a phrase from expected frequencies for one or more words included in the phrase.
- cross entropy is computed in accordance with the following pseudo code example (where an ngram represents a phrase):
- values for one or more of minWeightCommonRange, maxWeightCommonRange are selected to linearize results.
- minLogprobCommonRange and maxLogprobCommonRange are calculated from experimental results.
- minLogprobCommonRange and maxLogprobCommonRange can be experimentally calculated as 2 and 12, respectively (a range where the values for the rawWeight are commonly included).
- the pseudo code can be used to measure and reward the “amount of surprise” that each n-gram (phrase) has in the context of a given document. That is, the more frequent an n-gram is in comparison with its expected frequency, the more weight it carries in that document.
- ComputeCrossEntropy function provides a more sophisticated measurement that accounts for document length.
- the ComputeCrossEntropy function balances credit for very short and very long documents.
- the ComputeCrossEntropy function is configured to not give too much credit to very short documents nor steal too much credit from very long documents.
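The pseudo code referenced above is not reproduced here; the following Python is a speculative reconstruction of the described weighting. Only the common-range constants (2 and 12) come from the text — the rawWeight formula, the smoothing term that damps short documents, and the target weight-range endpoints are all assumptions.

```python
import math

# The text reports the rawWeight "common range" was experimentally found
# to be 2..12. The weight-range endpoints below are illustrative
# linearization targets, not values from the patent.
MIN_LOGPROB_COMMON_RANGE = 2.0
MAX_LOGPROB_COMMON_RANGE = 12.0
MIN_WEIGHT_COMMON_RANGE = 0.0
MAX_WEIGHT_COMMON_RANGE = 1.0

def raw_weight(doc_freq, doc_length, expected_prob, smoothing=8):
    """Assumed 'amount of surprise' of an n-gram: observed in-document
    probability versus the language model's expected probability. The
    additive `smoothing` term damps the observed probability for very
    short documents so they are not given too much credit."""
    observed_prob = doc_freq / (doc_length + smoothing)
    return math.log2(observed_prob / expected_prob)

def linearize(raw):
    """Clamp rawWeight into its common range and map it linearly onto
    [MIN_WEIGHT_COMMON_RANGE, MAX_WEIGHT_COMMON_RANGE]."""
    clamped = min(max(raw, MIN_LOGPROB_COMMON_RANGE), MAX_LOGPROB_COMMON_RANGE)
    span = MAX_LOGPROB_COMMON_RANGE - MIN_LOGPROB_COMMON_RANGE
    t = (clamped - MIN_LOGPROB_COMMON_RANGE) / span
    return MIN_WEIGHT_COMMON_RANGE + t * (MAX_WEIGHT_COMMON_RANGE - MIN_WEIGHT_COMMON_RANGE)

# A phrase occurring 6 times in a 200-word document: much more surprising
# (and so more heavily weighted) when the model expects it to be rare.
surprising = linearize(raw_weight(6, 200, 1e-6))
common = linearize(raw_weight(6, 200, 1e-2))
```

The linearization step is what keeps a handful of extreme log-probabilities from dominating the sort order when phrases are later ranked.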
- Method 200 includes an act of selecting a specified number of statistically significant textual phrases from within the document based on the computed cross-entropy values (act 205 ).
- cross-entropy calculation module 103 can return a maximum number of top candidates based on computed cross-entropies.
- the number of top candidates can be all or some number less than all of the phrases contained in document 112 , such as, for example, phrases 131 , 132 , 133 , etc.
- Cross-entropy calculation module 103 can output the number of top candidates along with their corresponding cross-entropy values to phrase selector 106 .
- phrase 131 can be output with cross-entropy 151
- phrase 132 can be output with cross-entropy 152
- phrase 133 can be output with cross-entropy 153
- Phrase selector 106 can receive the number of top candidates along with their corresponding cross-entropy values from cross-entropy calculation module 103 .
- Phrase selector 106 can apply selection functions 158 to filter out one or more of the top candidates.
- Selection functions 158 can include weighting and/or threshold functions. Weighting functions can be used to rank phrase relevance (based on cross-entropy) in a key phrase data structure. Weighting functions can also provide a sufficiently detailed sort order with respect to both document similarity and phrase relevance. Threshold functions allow a key phrase data structure to be maintained in a lossy state. Threshold functions can be used to prune out phrases that have a cross-entropy under a specified cross-entropy threshold for a document.
- a variety of different functions can be used in selection functions.
- Functional forms for selection functions can be selected arbitrarily.
- some possible types of weighting functions include:
- threshold functions can be of the form: f(.) ≥ T, or of the form f(.)/g(.) ≥ T%.
- phrase selector 106 When both weighting and threshold functions are applied, it may be that phrase selector 106 outputs a set of phrases sorted from more relevant to less relevant, wherein the least relevant phrase retains a threshold relevance.
- phrase selector 106 can output one or more phrases from document 112 , such as, for example, phrases 132 , 191 , 192 , etc.
- Method 200 includes an act of populating a key phrase data structure with data representative of each of the selected specified number of statistically significant textual phrases (act 206 ).
- phrase selector 106 can populate key phrase data structure 107 with phrases 132 , 191 , 192 , etc. Phrases may or may not be stored along with a corresponding weight in a key phrase data structure.
- a document ID (e.g., document ID 111 , 121 , etc.) can travel along with each phrase to indicate the document where each phrase originated.
- a key phrase data structure can be of the non-normalized format:

      Doc Id  Tags
      218     heart (w1), attack (w2), clogging (w3), PID:99 (w4)

  or of the normalized format:
- FIG. 3 illustrates an example computer architecture 300 that facilitates identifying key phrases within documents.
- computer architecture 300 includes database 301 , location indexer 302 , keyword extractor 303 , ranking module 306 , and key phrase data structure 307 .
- Each of the depicted computer systems is connected to one another over (or is part of) a network, such as, for example, a Local Area Network (“LAN”), a Wide Area Network (“WAN”), and even the Internet.
- each of the depicted components can create message related data and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), etc.) over the network.
- Database 301 can be virtually any type of database (e.g., a Structured Query Language (“SQL”) database or other relational database). As depicted, database 301 can contain one or more tables including table 309 . Each table in database 301 can include one or more rows and one or more columns used to organize data, such as, for example, documents. For example, table 309 includes a plurality of documents including documents 312 and 322 . Each document can be identified by a corresponding document ID. For example, document ID 311 can identify document 312 , document ID 321 can identify document 322 , etc.
- Location indexer 302 is configured to identify one or more locations within a document where phrases are located.
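A location list can be built as a small within-document inverted index. The sketch below is a hypothetical stand-in for location indexer 302; it indexes unigrams and bigrams by token position, whereas the actual indexer's phrase lengths and location encoding are not specified here.

```python
def build_location_lists(tokens):
    """Map each phrase (unigrams and bigrams, for illustration) to the
    list of token positions where it occurs in the document."""
    locations = {}
    for i, tok in enumerate(tokens):
        locations.setdefault(tok, []).append(i)
        if i + 1 < len(tokens):
            bigram = f"{tok} {tokens[i + 1]}"
            locations.setdefault(bigram, []).append(i)
    return locations

doc = "annual budget review of the annual budget".split()
lists = build_location_lists(doc)
```

Here "annual budget" maps to positions [0, 5], the kind of location list the keyword extractor then scores against training data.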
- Keyword extractor 303 is configured to score key phrases from a document based on a location list for the key phrases relative to the occurrence of phrases in a training data set.
- a training data set can be used at keyword extractor 303 to produce a model for a supported language.
- a phrase is used as a query term submitted to a search engine. Web pages returned in the search results from the query term are used as training data for the phrase. Training for a language can occur in accordance with the following pseudo code (where an ngram represents a phrase):
- Keyword extractor 303 can run phrases and corresponding location lists against the model to extract phrases from a document. Keywords can be extracted in accordance with the following pseudocode (for a document in a given language and where an ngram represents a phrase):
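The training and extraction pseudocode referenced above is not reproduced here. Purely as an assumed illustration of what location-list-based scoring could look like, the sketch below rewards phrases that occur often and early in a document but are rare in the training data; none of these feature choices or names come from the patent.

```python
import math

def score_phrase(location_list, doc_length, training_freq):
    """Hypothetical score for a phrase: term frequency times an
    'earliness' bonus times rarity in the training data.
    location_list: token positions of the phrase in the document.
    training_freq: relative frequency of the phrase in training data."""
    tf = len(location_list) / doc_length
    earliness = 1.0 - min(location_list) / doc_length  # 1.0 at the very start
    rarity = -math.log(training_freq)                  # rarer => larger
    return tf * earliness * rarity
```

Under this assumed scoring, a rare phrase appearing twice near the top of a 100-token document outranks a common phrase appearing once near the end.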
- Ranking module 306 is configured to receive phrases and corresponding scores and rank the phrases in accordance with the scores. Ranking module 306 can store the ranked phrases in key phrase data structure 307 .
- FIG. 4 illustrates a flow chart of an example method 400 for identifying key phrases within documents. Method 400 will be described with respect to the components and data in computer architecture 300 .
- Method 400 includes an act of accessing a document containing a plurality of textual phrases (act 401 ).
- location indexer 302 can access document 312 .
- Document 312 can contain a plurality of textual phrases, such as, for example, phrases 331 , 332 , 333 , etc.
- method 400 includes an act of generating a location list for the textual phrase, the location list indicating one or more locations of the textual phrase within the document (act 402 ).
- location indexer 302 can generate locations list 341 for phrase 331 . Locations list 341 indicates one or more locations within document 312 where phrase 331 is found.
- location indexer 302 can generate locations list 342 for phrase 332 . Locations list 342 indicates one or more locations within document 312 where phrase 332 is found.
- location indexer 302 can generate locations list 343 for phrase 333 . Locations list 343 indicates one or more locations within document 312 where phrase 333 is found. Location lists for other phrases in document 312 can also be generated.
- Location indexer 302 can send phrases and corresponding locations lists to keyword extractor 303 .
- Keyword extractor 303 can receive phrases and corresponding locations lists from location indexer 302 .
- method 400 includes an act of assigning a score to the textual phrase based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data (act 403 ).
- keyword extractor 303 can assign score 351 to phrase 331 based on the contents of locations list 341 relative to the occurrence of phrase 331 in training data 359 .
- keyword extractor 303 can assign score 352 to phrase 332 based on the contents of locations list 342 relative to the occurrence of phrase 332 in training data 359 .
- keyword extractor 303 can assign score 353 to phrase 333 based on the contents of locations list 343 relative to the occurrence of phrase 333 in training data 359 . Scores for other phrases (e.g., phrases 393 and 394 ) can also be assigned.
- Keyword extractor 303 can send phrases and corresponding scores to ranking module 306 .
- Ranking module 306 can receive phrases and corresponding scores from keyword extractor 303 .
- Method 400 includes an act of ranking the plurality of textual phrases according to the assigned scores (act 404 ).
- ranking module 306 can sort phrases 331 , 332 , 333 , etc. according to assigned scores 351 , 352 , 353 , etc.
- ranking module 306 sorts phrases based on assigned scores such that phrases with similar relevancy to document 312 are grouped together.
- Method 400 includes an act of selecting a subset of the plurality of textual phrases from within the document based on rankings (act 405 ). For example, ranking module 306 can select phrases 332 , 393 , 394 , etc., from within document 312 based on rankings. Method 400 includes an act of populating a key phrase data structure with the selected subset of the plurality of textual phrases (act 406 ). For example, ranking module 306 can populate key phrase data structure 307 with phrases 332 , 393 , 394 , etc.
- a document ID (e.g., document ID 311 , 321 , etc.) can travel along with each phrase to indicate the document where each phrase originated.
- Embodiments of the invention extend to methods, systems, and computer program products for identifying key phrases within documents.
- Embodiments of the invention include using a tag index to determine what a document primarily relates to. For example, an integrated data flow and extract-transform-load pipeline crawls, parses, and word breaks large corpuses of documents in database tables. Documents can be broken into tuples. The tuples can be sent to a heuristically based algorithm that uses statistical language models and weight+cross-entropy threshold functions to summarize the document into its “top N” most statistically significant phrases. Accordingly, embodiments of the invention scale efficiently (e.g., linearly) and (potentially large numbers of) documents can be characterized by salient and relevant key phrases (tags).
Abstract
Systems are used for identifying key phrases within documents. These systems utilize tags and a tag index to determine what a document primarily relates to. For example, an integrated data flow and extract-transform-load pipeline crawls, parses, and word breaks large corpuses of documents in database tables. Documents can be broken into tuples. The tuples can be sent to a heuristically based algorithm that uses statistical language models and weight plus cross-entropy threshold functions to summarize the document into its “top N” most statistically significant phrases. These systems can scale efficiently (e.g., linearly) and (potentially large numbers of) documents can be characterized by salient and relevant key phrases (tags).
Description
- This application is a continuation of U.S. patent application Ser. No. 12/959,840 filed on Dec. 3, 2010 and entitled “IDENTIFYING KEY PHRASES WITHIN DOCUMENTS,” which issued as U.S. Pat. No. 8,423,546 on Apr. 16, 2013, and which application is expressly incorporated herein by reference in its entirety.
- Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another and to other electronic devices to form both wired and wireless computer networks over which the computer systems and other electronic devices can transfer electronic data. Accordingly, the performance of many computing tasks is distributed across a number of different computer systems and/or a number of different computing environments.
- For many organizations, documents easily comprise the largest information assets by volume. As such, characterizing a document by its salient features, such as, for example, its key words and phrases, is an important piece of functionality.
- One technique for characterizing documents includes using full text search solutions that mine documents into full text inverted indices. Another technique for characterizing documents mines document level semantics (e.g., to identify similarities between documents). Proper implementation of either of these two techniques can require heavy investments in both computer hardware and personnel resources.
- Further, document parsing, mining, etc. operations are often replicated across these two techniques. As such, an end user pays additional costs by having to invest in (perhaps as much as double) resources to reap the benefits of both search and semantic insight over their documents. Additionally, many more complex document mining techniques require integrating disparate systems together and lead to further costs in order to satisfy an organization's document processing needs.
- The present invention extends to methods, systems, and computer program products for identifying key phrases in documents. In some embodiments, a document is accessed. The frequency of occurrence of a plurality of different textual phrases within the document is calculated. Each textual phrase includes one or more individual words of a specified language. A language model for the specified language is accessed. The language model defines expected frequencies of occurrence at least for individual words of the specified language.
- For each textual phrase in the plurality of different textual phrases a cross-entropy value is computed for the textual phrase. The cross-entropy value is computed from the frequency of occurrence of the textual phrase within the document and the frequency of occurrence of the textual phrase within the specified language. A specified number of statistically significant textual phrases from within the document are selected based on the computed cross-entropy values. A key phrase data structure is populated with data representative of each of the selected specified number of statistically significant textual phrases.
- In other embodiments, a document containing a plurality of textual phrases is accessed. For each textual phrase in the plurality of textual phrases contained in the document, a location list is generated for the textual phrase. The location list indicates one or more locations of the textual phrase within the document. For each textual phrase in the plurality of textual phrases contained in the document, a score is assigned to the textual phrase. The score is based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data.
- The plurality of textual phrases is ranked according to the assigned scores. A subset of the plurality of textual phrases is selected from within the document based on the rankings. A key phrase data structure is populated from the selected subset of the plurality of textual phrases.
- This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
- Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
- In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
-
FIG. 1 illustrates an example computer architecture that facilitates identifying key phrases within documents. -
FIG. 2 illustrates a flow chart of an example method for identifying key phrases within documents. -
FIG. 3 illustrates an example computer architecture that facilitates identifying key phrases within documents. -
FIG. 4 illustrates a flow chart of an example method for identifying key phrases within documents. - The present invention extends to methods, systems, and computer program products for identifying key phrases in documents. A document is accessed. The frequency of occurrence of a plurality of different textual phrases within the document is calculated. Each textual phrase includes one or more individual words of a specified language. A language model for the specified language is accessed. The language model defines expected frequencies of occurrence at least for individual words of the specified language.
- For each textual phrase in the plurality of different textual phrases a cross-entropy value is computed for the textual phrase. The cross-entropy value is computed from the frequency of occurrence of the textual phrase within the document and the frequency of occurrence of the textual phrase within the specified language. A specified number of statistically significant textual phrases from within the document are selected based on the computed cross-entropy values. A key phrase data structure is populated with data representative of each of the selected specified number of statistically significant textual phrases.
- In other embodiments, a document containing a plurality of textual phrases is accessed. For each textual phrase in the plurality of textual phrases contained in the document, a location list is generated for the textual phrase. The location list indicates one or more locations of the textual phrase within the document. For each textual phrase in the plurality of textual phrases contained in the document, a score is assigned to the textual phrase. The score is based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data.
- The plurality of textual phrases is ranked according to the assigned scores. A subset of the plurality of textual phrases is selected from within the document based on the rankings. A key phrase data structure is populated from the selected subset of the plurality of textual phrases.
- Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
- Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
- A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
- Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
- Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
- Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
- In general, an integrated data flow and extract-transform-load pipeline crawls, parses, and word-breaks large corpuses of documents in database tables. Documents can be broken into tuples. In some embodiments, the tuples are of the format {phrase, frequency}. A phrase can include one or more words, and the frequency is the frequency of occurrence within a document. The tuples can be sent to a heuristically based algorithm that uses statistical language models and weight+cross-entropy threshold functions to summarize the document into its “top N” most statistically significant phrases.
- Alternately, tuples can be of the format {phrase, location list}. The location list lists the locations of the phrase within a document. The tuples are sent to a Keyword Extraction Algorithm (“KEX”) to compute, potentially with higher quality (e.g., less noisy phrases), a set of textually relevant tags. Accordingly, documents can be characterized by salient and relevant key phrases (tags).
- When a plurality of documents is being processed, each tuple can also include a document ID.
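As a rough sketch of the tuple formats above (the function and variable names here are hypothetical, and a simple whitespace split stands in for a real word breaker):

```python
from collections import Counter

def make_frequency_tuples(doc_id, text, max_ngram=2):
    """Break a document into {phrase, frequency} tuples, each also carrying
    the document ID; phrases are n-grams of up to max_ngram words."""
    words = text.lower().split()
    phrases = [" ".join(words[i:i + n])
               for n in range(1, max_ngram + 1)
               for i in range(len(words) - n + 1)]
    return [(doc_id, phrase, freq) for phrase, freq in Counter(phrases).items()]

tuples = make_frequency_tuples(218, "annual budget meeting annual budget")
# the bigram "annual budget" occurs twice: (218, "annual budget", 2)
```

A real word breaker would also handle punctuation, casing rules, and languages without whitespace word boundaries; the split above only illustrates the tuple shape.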
-
FIG. 1 illustrates an example computer architecture 100 that facilitates identifying key phrases within documents. Referring to FIG. 1, computer architecture 100 includes database 101, frequency calculation module 102, cross-entropy calculation module 103, phrase selector 106, and key phrase data structure 107. Each of the depicted computer systems is connected to one another over (or is part of) a network, such as, for example, a Local Area Network (“LAN”), a Wide Area Network (“WAN”), and even the Internet. Accordingly, each of the depicted components, as well as any other connected computer systems and their components, can create message related data and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), etc.) over the network. -
Database 101 can be virtually any type of database (e.g., a Structured Query Language (“SQL”) database or other relational database). As depicted, database 101 can contain one or more tables, including table 109. Each table in database 101 can include one or more rows and one or more columns used to organize data, such as, for example, documents. For example, table 109 includes a plurality of documents, including documents 112, 122, etc. Document IDs identify corresponding documents: document ID 111 can identify document 112, document ID 121 can identify document 122, etc. -
Frequency calculation module 102 is configured to calculate the frequency of occurrence of a textual phrase within a document. Frequency calculation module 102 can receive a document as input. From the document, frequency calculation module 102 can calculate the frequency with which one or more textual phrases occur in the document. A textual phrase can include one or more words of a specified language. Frequency calculation module 102 can output a list of phrases and corresponding frequencies for a document. - In general,
cross-entropy calculation module 103 is configured to calculate a cross-entropy between phrases in a specified document and the same phrases in a corresponding language model. Cross-entropy calculation module 103 can receive a list of one or more phrases and corresponding frequencies of occurrence for a document. Cross-entropy calculation module 103 can also receive a statistical language model. The statistical language model can include a plurality of words (or phrases) of a specified language and can define an expected frequency of occurrence for each of the plurality of words (or phrases) in the language. - Cross-entropy can measure the “amount of surprise” in the frequency of occurrence of a phrase in a specified document relative to the frequency of occurrence of the phrase in the language model. For example, a particular phrase can occur with more or less frequency in a specified document as compared to the language model. Thus,
cross-entropy calculation module 103 can be configured to calculate the cross-entropy between the frequency of occurrence of a phrase in a specified document and the frequency of occurrence of the phrase in a language model.
- When appropriate,
combiner 104 can combine one or more words from a language model into a phrase contained in a document. For example, combiner 104 can combine the words ‘annual’ and ‘budget’ into “annual budget”. Combiner 104 can also compute a representative expected frequency for a phrase from expected frequencies for individual words included in the phrase. For example, combiner 104 can compute an expected frequency for “annual budget” from an expected frequency for ‘annual’ and an expected frequency for ‘budget’. Combiner 104 can include an algorithm for inferring (e.g., interpolating, extrapolating, etc.) an expected frequency for a phrase from a plurality of frequencies for individual words. -
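One way combiner 104 could infer an expected frequency for a phrase is to treat the words as independent and sum their log probabilities. The independence assumption and the example probabilities below are illustrative only; the text above leaves the inference rule open (interpolation, extrapolation, etc.):

```python
import math

# Hypothetical per-word expected probabilities from a language model.
expected = {"annual": 1e-4, "budget": 5e-5}

def combine(words, model):
    """Infer an expected probability for a multi-word phrase by summing the
    log probabilities of its individual words (independence assumption)."""
    logprob = sum(math.log10(model[w]) for w in words)
    return 10 ** logprob

p = combine(["annual", "budget"], expected)
# under independence: 1e-4 * 5e-5 == 5e-9
```

In practice phrases are not independent word sequences, so a production combiner would likely interpolate between this estimate and observed n-gram statistics.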
Cross-entropy calculation module 103 can output a list of one or more phrases and corresponding cross-entropies. -
Phrase selection module 106 is configured to select phrases for inclusion in a key phrase data structure for a document. Phrase selection module 106 can receive a list of one or more phrases and corresponding cross-entropies. Phrase selection module 106 can also receive one or more selection functions. Phrase selection module 106 can apply the selection functions to the cross-entropies to select a subset of phrases for inclusion in the key phrase data structure for the document. Selection functions can include weighting functions and/or threshold functions. Selected phrases can be copied to the key phrase data structure for the document. -
FIG. 2 illustrates a flow chart of an example method 200 for identifying key phrases within documents. Method 200 will be described with respect to the components and data in computer architecture 100. -
Method 200 includes an act of accessing a document (act 201). For example, frequency calculation module 102 can access document 112. Method 200 includes an act of calculating the frequency of occurrence of a plurality of different textual phrases within the document, each textual phrase including one or more individual words of a specified language (act 202). For example, frequency calculation module 102 can calculate the frequency of occurrence of a plurality of textual phrases, such as, for example, phrases 131, 132, 133, etc., within document 112. Each textual phrase in document 112 can include one or more individual words of a specified language (e.g., English, Japanese, Chinese, languages of India, etc.). - A frequency for a phrase can represent how often a phrase occurs in
document 112. For example, frequency 141 represents how often phrase 131 occurs in document 112, frequency 142 represents how often phrase 132 occurs in document 112, frequency 143 represents how often phrase 133 occurs in document 112, etc. Frequency calculation module 102 can calculate frequencies for other additional phrases within document 112. Frequency calculation module 102 can send the phrases and corresponding frequencies to cross-entropy calculation module 103. Cross-entropy calculation module 103 can receive the phrases and corresponding frequencies from frequency calculation module 102. -
Method 200 includes an act of accessing a language model for the specified language, the language model defining expected frequencies of occurrence at least for individual words of the specified language (act 203). For example, cross-entropy calculation module 103 can access statistical language model 159. Statistical language model 159 can define expected frequencies of occurrence for words of the language of document 112. For example, word 161 has expected frequency 171, word 162 has expected frequency 172, etc. - For each textual phrase in the plurality of different textual phrases,
method 200 includes an act of computing a cross-entropy value for the textual phrase, the cross-entropy value computed from the frequency of occurrence of the textual phrase within the document and the frequency of occurrence of the textual phrase within the specified language (act 204). For example, cross-entropy calculation module 103 can compute cross-entropy values for phrases from document 112, such as, for example, phrases 131, 132, 133, etc. Cross-entropy values for phrases 131, 132, and 133 can be computed from frequencies 141, 142, and 143, respectively, along with corresponding expected frequencies from statistical language model 159. - When appropriate,
combiner 104 can compute an expected frequency for a phrase from expected frequencies for one or more words included in the phrase. - In some embodiments, cross entropy is computed in accordance with the following pseudo code example (where an ngram represents a phrase):
-
languageModel = SelectLanguageModel(document)
candidates = empty topN priority queue
foreach ((ngram, locations) in DNI[document]) {
    score = ComputeCrossEntropy(
        document.GetSize(),
        locations.Length,                 // actual ngram frequency in the current document
        languageModel.GetLogProb(ngram)   // expected ngram logprob from the language model
    );
    candidates.Add(ngram, score);
}

wherein:

ComputeCrossEntropy(numWordsInDocument, numOccurrences, logprob) {
    // we reward repeated occurrences
    BoostMultiplier = 20
    if (numOccurrences > 1): numOccurrences *= BoostMultiplier
    observedLogprob = Log10(numOccurrences / numWordsInDocument)
    rawWeight = logprob / observedLogprob
    // smoothen the result to better cover the 0-1 range
    result = (((maxWeightCommonRange - minWeightCommonRange) /
               (maxLogprobCommonRange - minLogprobCommonRange)) *
              (rawWeight - minLogprobCommonRange)) + minWeightCommonRange
    if result < 0: result = 0
    if result > 1: result = 1
    return result
}
- In some embodiments, minLogprobCommonRange and maxLogprobCommonRange are calculated from experimental results. For example, minLogprobCommonRange can be experimentally calculated as 2 and 12 (a range where the values for the rawWeight are commonly included).
- The pseudo code can be used to measure and reward the “amount of surprise” that each n-gram (phrase) has in the context of a given document. That is, the more frequent an n-gram is in comparison with its expected frequency, the more weight it carries in that document.
- This amount of surprise can more crudely be measured as actualFrequency/expectedFrequency. However, the ComputeCrossEntropy function provides a more sophisticated measurement that accounts for document length. The ComputeCrossEntropy function balances credit for very short and very long documents. For example, ComputeCrossEntropy function is configured to not give too much credit to very short documents nor steal to much credit from very long documents.
-
Method 200 includes an act of selecting a specified number of statistically significant textual phrases from within the document based on the computed cross-entropy values (act 205). For example, cross-entropy calculation module 103 can return a maximum number of top candidates based on computed cross-entropies. The number of top candidates can be all, or some number less than all, of the phrases contained in document 112, such as, for example, phrases 131, 132, 133, etc. Cross-entropy calculation module 103 can output the top candidates along with their corresponding cross-entropy values to phrase selector 106. For example, phrase 131 can be output with cross-entropy 151, phrase 132 can be output with cross-entropy 152, phrase 133 can be output with cross-entropy 153, etc. Phrase selector 106 can receive the top candidates along with their corresponding cross-entropy values from cross-entropy calculation module 103. -
Phrase selector 106 can apply selection functions 158 to filter out one or more of the top candidates. Selection functions 158 can include weighting and/or threshold functions. Weighting functions can be used to rank phrase relevance (based on cross-entropy) in a key phrase data structure. Weighting functions can also provide a sufficiently detailed sort order with respect to both document similarity and phrase relevance. Threshold functions allow a key phrase data structure to be maintained in a lossy state. Threshold functions can be used to prune out phrases that have a cross-entropy under a specified cross-entropy threshold for a document.
-
Functional form    Example
Linear             f(.) = ax1 + bx2 + c
Polynomial         f(.) = ax1^n + bx2^(n-1)
Ratio              f(.) = ax1^n / bx2^m
Exponential        2^f(.), e^f(.)
- When both weighting and threshold functions are applied, it may be that
phrase selector 106 outputs a set of phrases sorted from more relevant to less relevant, wherein the least relevant phrase retains a threshold relevance. For example,phrase selector 106 can output one or more phrases formdocument 112 such as, for example,phrases -
Method 200 includes an act of populating a key phrase data structure with data representative of each of the selected specified number of statistically significant textual phrases (act 206). For example, phrase selector 106 can populate key phrase data structure 107 with phrases 131, 132, 133, etc. Data in key phrase data structure 107 can be of the format: -
Tags: heart (w1), attack (w2), clogging (w3), PID:99 (w4)
or of the normalized format: -
Tag        Weight
heart      w1
attack     w2
clogging   w3
PID:99     w4
document document ID -
Doc Id    Tags
218       heart (w1), attack (w2), clogging (w3), PID:99 (w4)
or of the normalized format: -
Doc Id    Tag        Weight
218       heart      w1
218       attack     w2
218       clogging   w3
218       PID:99     w4
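The normalized format above can be populated with a few lines of code; the (doc id, tag, weight) row layout here is only a sketch of one possible representation:

```python
def populate_key_phrases(store, doc_id, scored_phrases):
    """Append one (doc id, tag, weight) row per selected phrase, matching
    the normalized format shown above."""
    for tag, weight in scored_phrases:
        store.append((doc_id, tag, weight))
    return store

rows = populate_key_phrases([], 218, [("heart", 0.9), ("attack", 0.7)])
# rows == [(218, "heart", 0.9), (218, "attack", 0.7)]
```

Keeping one row per (document, tag) pair is what allows the same tag index to be queried either by document or by tag.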
FIG. 3 illustrates an example computer architecture 300 that facilitates identifying key phrases within documents. Referring to FIG. 3, computer architecture 300 includes database 301, location indexer 302, keyword extractor 303, ranking module 306, and key phrase data structure 307. Each of the depicted computer systems is connected to one another over (or is part of) a network, such as, for example, a Local Area Network (“LAN”), a Wide Area Network (“WAN”), and even the Internet. Accordingly, each of the depicted components, as well as any other connected computer systems and their components, can create message related data and exchange message related data (e.g., Internet Protocol (“IP”) datagrams and other higher layer protocols that utilize IP datagrams, such as Transmission Control Protocol (“TCP”), Hypertext Transfer Protocol (“HTTP”), Simple Mail Transfer Protocol (“SMTP”), etc.) over the network. -
Database 301 can be virtually any type of database (e.g., a Structured Query Language (“SQL”) database or other relational database). As depicted, database 301 can contain one or more tables, including table 309. Each table in database 301 can include one or more rows and one or more columns used to organize data, such as, for example, documents. For example, table 309 includes a plurality of documents, including documents 312, 322, etc. Document IDs identify corresponding documents: document ID 311 can identify document 312, document ID 321 can identify document 322, etc. -
Location indexer 302 is configured to identify one or more locations within a document where phrases are located. -
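A minimal sketch of what location indexer 302 might produce, mapping each phrase to its word positions (again using a plain whitespace split in place of a real word breaker):

```python
def index_locations(text, max_ngram=2):
    """Map each n-gram (up to max_ngram words) to the list of word positions
    where it occurs in the document."""
    words = text.lower().split()
    index = {}
    for n in range(1, max_ngram + 1):
        for i in range(len(words) - n + 1):
            index.setdefault(" ".join(words[i:i + n]), []).append(i)
    return index

index = index_locations("heart attack symptoms heart attack")
# "heart attack" occurs at word positions 0 and 3
```

A location list like this carries strictly more information than a bare frequency (its length), which is what lets the KEX scoring below use positional features.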
Keyword extractor 303 is configured to score key phrases from a document based on a location list for the key phrases relative to the occurrence of phrases in a training data set. A training data set can be used at keyword extractor 303 to produce a model for a supported language. In some embodiments, a phrase is used as a query term submitted to a search engine. Web pages returned in the search results from the query term are used as training data for the phrase. Training for a language can occur in accordance with the following pseudo code (where an ngram represents a phrase): -
store = InitializeModel(language)
// set of documents and associated keyphrases
trainingSet = empty Dictionary<document, Set<ngram>>
foreach (language in SetOfLanguagesWeSupport) {
    foreach ((ngram, frequency) in TrainingLanguageModel(language)) {
        // seed the store with the language model frequencies
        store.Add(ngram, frequency)
    }
    // SelectSampleOf selects about 10000 ngrams from the language model to issue queries for
    foreach (ngram in SelectSampleOf(source = TrainingLanguageModel(language))) {
        // we only need about 10000 training documents
        if (trainingSet.Length >= 10000) break;
        // we only retain the top URL that matches our needs
        URL document = QuerySearchEngine(ngram);
        keyphrases = new Set<ngram>();
        keyphrases.Add(ngram);    // add the query as a keyphrase
        trainingSet.Insert(document, keyphrases)
    }
    // parse the documents, add contained ngrams as keyphrases
    foreach ((document, keyphrases) in trainingSet) {
        foreach (ngram in document) {
            trainingSet[document].Add(ngram)
        }
    }
    // process the training set and build the KEX model
    // this part is generic: it can take as input any training set, regardless of whether
    // it was produced by querying a search engine or is a manually tagged set of documents
    foreach ((document, keyphrases) in trainingSet) {
        foreach (keyphrase in keyphrases) {
            // slightly more complex in practice: keyphrases used as queries must be
            // differentiated from keyphrases that were only found inside the document, etc.
            store.Update(document, keyphrase)
        }
    }
}
Keyword extractor 303 can run phrases and corresponding location lists against the model to extract phrases from a document. Keywords can be extracted in accordance with the following pseudocode (for a document in a given language, and where an ngram represents a phrase): -
store = ChooseModel(language)
features = empty collection
foreach ((ngram, locations) in DNI[document]) {
    if (ngram is not in store) continue;
    storedFeatures = store.GetFeatures(ngram);
    foreach (location in locations) {
        dynamicFeatures = ComputeFeatures(location, ngram);
        features.Insert(ngram, join(storedFeatures, dynamicFeatures));
    }
}
candidates = empty dictionary;
foreach (ngram in features.Keys) {
    // this uses the predictive-model part of the trained KEX model
    score = RelevanceScore(features[ngram]);
    if (score > threshold) {
        candidates.Add(ngram, score);
    }
}
return maxResults top-score candidates in score-decreasing order;
Ranking module 306 is configured to receive phrases and corresponding scores and rank the phrases in accordance with the scores. Ranking module 306 can store the ranked phrases in key phrase data structure 307. -
FIG. 4 illustrates a flow chart of an example method 400 for identifying key phrases within documents. Method 400 will be described with respect to the components and data in computer architecture 300. -
Method 400 includes an act of accessing a document containing a plurality of textual phrases (act 401). For example, location indexer 302 can access document 312. Document 312 can contain a plurality of textual phrases, such as, for example, phrases 331, 332, and 333.
- For each textual phrase in the plurality of textual phrases contained in the document,
method 400 includes an act of generating a location list for the textual phrase, the location list indicating one or more locations of the textual phrase within the document (act 402). For example, location indexer 302 can generate locations list 341 for phrase 331. Locations list 341 indicates one or more locations within document 312 where phrase 331 is found. Similarly, location indexer 302 can generate locations list 342 for phrase 332. Locations list 342 indicates one or more locations within document 312 where phrase 332 is found. Likewise, location indexer 302 can generate locations list 343 for phrase 333. Locations list 343 indicates one or more locations within document 312 where phrase 333 is found. Location lists for other phrases in document 312 can also be generated.
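A minimal sketch of act 402, assuming whitespace word breaking and word-offset locations (the actual indexer's tokenization and location encoding are not specified here):

```python
def build_location_lists(document, max_n=3):
    """Index every n-gram (up to max_n words) in a document to the word
    offsets where it occurs: one location list per textual phrase."""
    words = document.lower().split()
    locations = {}
    for i in range(len(words)):
        for n in range(1, max_n + 1):
            if i + n > len(words):
                break
            phrase = " ".join(words[i:i + n])
            locations.setdefault(phrase, []).append(i)
    return locations
```

For example, indexing "big data big data systems" would record the phrase "big data" at offsets 0 and 2.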
Location indexer 302 can send phrases and corresponding locations lists to keyword extractor 303. Keyword extractor 303 can receive phrases and corresponding locations lists from location indexer 302.
- For each textual phrase in the plurality of textual phrases contained in the document,
method 400 includes an act of assigning a score to the textual phrase based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data (act 403). For example, keyword extractor 303 can assign score 351 to phrase 331 based on the contents of locations list 341 relative to the occurrence of phrase 331 in training data 359. Similarly, keyword extractor 303 can assign score 352 to phrase 332 based on the contents of locations list 342 relative to the occurrence of phrase 332 in training data 359. Likewise, keyword extractor 303 can assign score 353 to phrase 333 based on the contents of locations list 343 relative to the occurrence of phrase 333 in training data 359. Scores for other phrases (e.g., phrases 393 and 394) can also be assigned.
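The summary of the invention mentions "weight+cross-entropy threshold functions"; one plausible (assumed, not confirmed by this document) shape for such a score compares a phrase's in-document rate against its background probability from the training data:

```python
import math

def phrase_score(locations, doc_length, background_prob):
    """Assumed cross-entropy-style score: phrases that occur much more
    often in the document than in the background (training) model score
    higher; phrases at their background rate score zero."""
    doc_prob = len(locations) / doc_length
    return doc_prob * math.log(doc_prob / background_prob)
```

A rare background phrase appearing three times in a 100-word document would thus outscore a common one with the same location list.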
Keyword extractor 303 can send phrases and corresponding scores to ranking module 306. Ranking module 306 can receive phrases and corresponding scores from keyword extractor 303.
Method 400 includes an act of ranking the plurality of textual phrases according to the assigned scores (act 404). For example, ranking module 306 can sort phrases 331, 332, and 333 according to scores 351, 352, and 353. Ranking module 306 sorts phrases based on assigned scores such that phrases with similar relevancy to document 312 are grouped together.
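The ranking act, together with the top-N selection it enables, reduces to a sort and a cut; a minimal illustration (not the patented implementation):

```python
def top_keyphrases(scored, max_results=3):
    """Rank phrases by assigned score and keep the top subset: the
    contents of the key phrase data structure for one document."""
    ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
    return [phrase for phrase, _ in ranked[:max_results]]
```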
Method 400 includes an act of selecting a subset of the plurality of textual phrases from within the document based on rankings (act 405). For example, ranking module 306 can select a subset of the phrases from within document 312 based on rankings. Method 400 includes an act of populating a key phrase data structure with the selected subset of the plurality of textual phrases (act 406). For example, ranking module 306 can populate key phrase data structure 307 with the selected phrases.
- When a plurality of documents are processed, each document can be appended with a document ID. The document ID indicates where a textual phrase occurs among the processed documents.
- The present invention extends to methods, systems, and computer program products for identifying key phrases within documents. Embodiments of the invention include using a tag index to determine what a document primarily relates to. For example, an integrated data-flow and extract-transform-load pipeline crawls, parses, and word-breaks large corpora of documents in database tables. Documents can be broken into tuples. The tuples can be sent to a heuristically based algorithm that uses statistical language models and weight+cross-entropy threshold functions to summarize the document into its "top N" most statistically significant phrases. Accordingly, embodiments of the invention scale efficiently (e.g., linearly), and (potentially large numbers of) documents can be characterized by salient and relevant key phrases (tags).
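The tag index described above can be pictured as an inverted mapping from key phrases to document IDs; the structure below is an assumed illustration consistent with the description, not the claimed data structure itself:

```python
def build_tag_index(keyphrases_by_doc):
    """Invert per-document key phrases into a tag index: each tag maps
    to the IDs of the documents it characterizes, so the index can be
    consulted to determine what a document primarily relates to."""
    tag_index = {}
    for doc_id, phrases in keyphrases_by_doc.items():
        for phrase in phrases:
            tag_index.setdefault(phrase, set()).add(doc_id)
    return tag_index
```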
- The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (20)
1. At a computing system including one or more processors and system memory, a method implemented by the computing system for identifying key phrases within a document, the method comprising:
an act of accessing a document containing a plurality of textual phrases;
for each textual phrase in the plurality of textual phrases contained in the document:
an act of generating a location list for the textual phrase, the location list indicating one or more locations of the textual phrase within the document;
an act of assigning a score to the textual phrase based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data;
an act of ranking the plurality of textual phrases according to the assigned scores;
an act of selecting a subset of the plurality of textual phrases from within the document based on rankings; and
an act of populating a key phrase data structure with the selected subset of the plurality of textual phrases.
2. The method as recited in claim 1 , further comprising an act of generating the training set of data through a plurality of queries to a search engine.
3. The method as recited in claim 1 , wherein the training set of data is a language model.
4. The method as recited in claim 1 , wherein the ranking includes sorting a plurality of textual phrases associated with the document based on assigned scores such that textual phrases that are determined to have a similar relevancy to the document are grouped together.
5. The method as recited in claim 1 , wherein the method further includes appending the document with a document identifier.
6. The method as recited in claim 5 , wherein the document identifier indicates where the textual phrase occurs in the document.
7. The method as recited in claim 1 , wherein the method includes identifying a set of one or more most statistically significant textual phrases in the document.
8. A computer program product for use at a computing system, the computer program product comprising one or more computer storage devices having stored thereon computer-executable instructions that, when executed at a processor, cause the computing system to perform a method for identifying key phrases within a document, wherein the method includes the computing system performing the following:
an act of accessing a document containing a plurality of textual phrases;
for each textual phrase in the plurality of textual phrases contained in the document:
an act of generating a location list for the textual phrase, the location list indicating one or more locations of the textual phrase within the document;
an act of assigning a score to the textual phrase based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data;
an act of ranking the plurality of textual phrases according to the assigned scores;
an act of selecting a subset of the plurality of textual phrases from within the document based on rankings; and
an act of populating a key phrase data structure with the selected subset of the plurality of textual phrases.
9. The computer program product as recited in claim 8 , further comprising an act of generating the training set of data through a plurality of queries to a search engine.
10. The computer program product as recited in claim 8 , wherein the training set of data is a language model.
11. The computer program product as recited in claim 8 , wherein the ranking includes sorting a plurality of textual phrases associated with the document based on assigned scores such that textual phrases that are determined to have a similar relevancy to the document are grouped together.
12. The computer program product as recited in claim 8 , wherein the method further includes appending the document with a document identifier and wherein the document identifier indicates where the textual phrase occurs in the document.
13. The computer program product as recited in claim 8 , wherein the method includes identifying a set of one or more statistically significant textual phrases in the document and using a tag index to identify what the document primarily relates to based on the one or more most statistically significant textual phrases in the document.
14. A computing system comprising:
at least one processor; and
one or more computer-readable media having stored computer-executable instructions that, when executed by the at least one processor, cause the computing system to perform a method for identifying key phrases within a document, wherein the method includes the computing system performing the following:
an act of accessing a document containing a plurality of textual phrases;
for each textual phrase in the plurality of textual phrases contained in the document:
an act of generating a location list for the textual phrase, the location list indicating one or more locations of the textual phrase within the document;
an act of assigning a score to the textual phrase based on the contents of the location list for the textual phrase relative to the occurrence of the textual phrase in a training set of data;
an act of ranking the plurality of textual phrases according to the assigned scores;
an act of selecting a subset of the plurality of textual phrases from within the document based on rankings; and
an act of populating a key phrase data structure with the selected subset of the plurality of textual phrases.
15. The computing system as recited in claim 14 , further comprising an act of generating the training set of data through a plurality of queries to a search engine.
16. The computing system as recited in claim 14 , wherein the training set of data is a language model.
17. The computing system as recited in claim 14 , wherein the ranking includes sorting a plurality of textual phrases associated with the document based on assigned scores such that textual phrases that are determined to have a similar relevancy to the document are grouped together.
18. The computing system as recited in claim 14 , wherein the method further includes appending the document with a document identifier and wherein the document identifier indicates where the textual phrase occurs in the document.
19. The computing system as recited in claim 14 , wherein the method includes identifying a set of one or more statistically significant textual phrases in the document and using a tag index to identify what the document primarily relates to based on the one or more most statistically significant textual phrases in the document.
20. The computing system as recited in claim 14 , wherein the one or more computer-readable media comprises system memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/794,093 US20130246386A1 (en) | 2010-12-03 | 2013-03-11 | Identifying key phrases within documents |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/959,840 US8423546B2 (en) | 2010-12-03 | 2010-12-03 | Identifying key phrases within documents |
US13/794,093 US20130246386A1 (en) | 2010-12-03 | 2013-03-11 | Identifying key phrases within documents |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/959,840 Continuation US8423546B2 (en) | 2010-12-03 | 2010-12-03 | Identifying key phrases within documents |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130246386A1 true US20130246386A1 (en) | 2013-09-19 |
Family
ID=46163212
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/959,840 Active 2031-02-20 US8423546B2 (en) | 2010-12-03 | 2010-12-03 | Identifying key phrases within documents |
US13/794,093 Abandoned US20130246386A1 (en) | 2010-12-03 | 2013-03-11 | Identifying key phrases within documents |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/959,840 Active 2031-02-20 US8423546B2 (en) | 2010-12-03 | 2010-12-03 | Identifying key phrases within documents |
Country Status (3)
Country | Link |
---|---|
US (2) | US8423546B2 (en) |
CN (1) | CN102591914B (en) |
HK (1) | HK1172421A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120076414A1 (en) * | 2010-09-27 | 2012-03-29 | Microsoft Corporation | External Image Based Summarization Techniques |
US8838433B2 (en) * | 2011-02-08 | 2014-09-16 | Microsoft Corporation | Selection of domain-adapted translation subcorpora |
US8719692B2 (en) * | 2011-03-11 | 2014-05-06 | Microsoft Corporation | Validation, rejection, and modification of automatically generated document annotations |
US9679050B2 (en) * | 2014-04-30 | 2017-06-13 | Adobe Systems Incorporated | Method and apparatus for generating thumbnails |
US10325221B2 (en) * | 2015-06-02 | 2019-06-18 | Microsoft Technology Licensing, Llc | Metadata tag description generation |
CN106021234A (en) * | 2016-05-31 | 2016-10-12 | 徐子涵 | Label extraction method and system |
US20180349354A1 (en) * | 2016-06-29 | 2018-12-06 | Intel Corporation | Natural language indexer for virtual assistants |
US10325021B2 (en) * | 2017-06-19 | 2019-06-18 | GM Global Technology Operations LLC | Phrase extraction text analysis method and system |
US10417268B2 (en) * | 2017-09-22 | 2019-09-17 | Druva Technologies Pte. Ltd. | Keyphrase extraction system and method |
CN110019162B (en) * | 2017-12-04 | 2021-07-06 | 北京京东尚科信息技术有限公司 | Method and device for realizing attribute normalization |
CN109582791B (en) * | 2018-11-13 | 2023-01-24 | 创新先进技术有限公司 | Text risk identification method and device |
US11269942B2 (en) * | 2019-10-10 | 2022-03-08 | International Business Machines Corporation | Automatic keyphrase extraction from text using the cross-entropy method |
CN114118026B (en) * | 2020-08-28 | 2022-07-19 | 北京仝睿科技有限公司 | Automatic document generation method and device, computer storage medium and electronic equipment |
CN112733527A (en) * | 2020-12-15 | 2021-04-30 | 上海建工四建集团有限公司 | Construction method and system of building engineering document knowledge network |
US11868413B2 (en) | 2020-12-22 | 2024-01-09 | Direct Cursus Technology L.L.C | Methods and servers for ranking digital documents in response to a query |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6070158A (en) * | 1996-08-14 | 2000-05-30 | Infoseek Corporation | Real-time document collection search engine with phrase indexing |
US6353822B1 (en) * | 1996-08-22 | 2002-03-05 | Massachusetts Institute Of Technology | Program-listing appendix |
US6654739B1 (en) * | 2000-01-31 | 2003-11-25 | International Business Machines Corporation | Lightweight document clustering |
US20040128616A1 (en) * | 2002-12-28 | 2004-07-01 | International Business Machines Corporation | System and method for providing a runtime environment for active web based document resources |
US7003719B1 (en) * | 1999-01-25 | 2006-02-21 | West Publishing Company, Dba West Group | System, method, and software for inserting hyperlinks into documents |
US20110004462A1 (en) * | 2009-07-01 | 2011-01-06 | Comcast Interactive Media, Llc | Generating Topic-Specific Language Models |
US20110004588A1 (en) * | 2009-05-11 | 2011-01-06 | iMedix Inc. | Method for enhancing the performance of a medical search engine based on semantic analysis and user feedback |
US20110264649A1 (en) * | 2008-04-28 | 2011-10-27 | Ruey-Lung Hsiao | Adaptive Knowledge Platform |
US8315849B1 (en) * | 2010-04-09 | 2012-11-20 | Wal-Mart Stores, Inc. | Selecting terms in a document |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6473753B1 (en) * | 1998-10-09 | 2002-10-29 | Microsoft Corporation | Method and system for calculating term-document importance |
US7269546B2 (en) | 2001-05-09 | 2007-09-11 | International Business Machines Corporation | System and method of finding documents related to other documents and of finding related words in response to a query to refine a search |
US20060218115A1 (en) | 2005-03-24 | 2006-09-28 | Microsoft Corporation | Implicit queries for electronic documents |
US8135728B2 (en) | 2005-03-24 | 2012-03-13 | Microsoft Corporation | Web document keyword and phrase extraction |
US20070185857A1 (en) | 2006-01-23 | 2007-08-09 | International Business Machines Corporation | System and method for extracting salient keywords for videos |
US7925678B2 (en) | 2007-01-12 | 2011-04-12 | Loglogic, Inc. | Customized reporting and mining of event data |
US8463779B2 (en) * | 2007-10-30 | 2013-06-11 | Yahoo! Inc. | Representative keyword selection |
US20090240498A1 (en) | 2008-03-19 | 2009-09-24 | Microsoft Corporation | Similiarity measures for short segments of text |
US20090287676A1 (en) * | 2008-05-16 | 2009-11-19 | Yahoo! Inc. | Search results with word or phrase index |
US8290946B2 (en) | 2008-06-24 | 2012-10-16 | Microsoft Corporation | Consistent phrase relevance measures |
US8606795B2 (en) | 2008-07-01 | 2013-12-10 | Xerox Corporation | Frequency based keyword extraction method and system using a statistical measure |
2010
- 2010-12-03 US US12/959,840 patent/US8423546B2/en active Active
2011
- 2011-12-02 CN CN201110415245.7A patent/CN102591914B/en active Active
2012
- 2012-12-20 HK HK12113203.2A patent/HK1172421A1/en not_active IP Right Cessation
2013
- 2013-03-11 US US13/794,093 patent/US20130246386A1/en not_active Abandoned
Non-Patent Citations (3)
Title |
---|
Larsen, Bjornar, and Chinatsu Aone. "Fast and effective text mining using linear-time document clustering." Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 1999. * |
Salton, Gerard, and Christopher Buckley. "Term-weighting approaches in automatic text retrieval." Information processing & management 24.5 (1988): 513-523. * |
Wikipedia, Inverted Index, 12 November 2010, accessed 28 March 2015 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104573008A (en) * | 2015-01-08 | 2015-04-29 | 广东小天才科技有限公司 | Monitoring method and device for network information |
US11567914B2 (en) | 2018-09-14 | 2023-01-31 | Verint Americas Inc. | Framework and method for the automated determination of classes and anomaly detection methods for time series |
US11928634B2 (en) | 2018-10-03 | 2024-03-12 | Verint Americas Inc. | Multivariate risk assessment via poisson shelves |
US11842312B2 (en) | 2018-10-03 | 2023-12-12 | Verint Americas Inc. | Multivariate risk assessment via Poisson shelves |
US11842311B2 (en) | 2018-10-03 | 2023-12-12 | Verint Americas Inc. | Multivariate risk assessment via Poisson Shelves |
US11334832B2 (en) | 2018-10-03 | 2022-05-17 | Verint Americas Inc. | Risk assessment using Poisson Shelves |
US20220027419A1 (en) * | 2018-12-28 | 2022-01-27 | Shenzhen Sekorm Component Network Co., Ltd | Smart search and recommendation method for content, storage medium, and terminal |
EP3706017A1 (en) * | 2019-03-07 | 2020-09-09 | Verint Americas Inc. | System and method for determining reasons for anomalies using cross entropy ranking of textual items |
US11610580B2 (en) * | 2019-03-07 | 2023-03-21 | Verint Americas Inc. | System and method for determining reasons for anomalies using cross entropy ranking of textual items |
US11314789B2 (en) | 2019-04-04 | 2022-04-26 | Cognyte Technologies Israel Ltd. | System and method for improved anomaly detection using relationship graphs |
CN110134767A (en) * | 2019-05-10 | 2019-08-16 | 云知声(上海)智能科技有限公司 | A kind of screening technique of vocabulary |
US11514251B2 (en) * | 2019-06-18 | 2022-11-29 | Verint Americas Inc. | Detecting anomalies in textual items using cross-entropies |
US20200401768A1 (en) * | 2019-06-18 | 2020-12-24 | Verint Americas Inc. | Detecting anomolies in textual items using cross-entropies |
US11080317B2 (en) | 2019-07-09 | 2021-08-03 | International Business Machines Corporation | Context-aware sentence compression |
Also Published As
Publication number | Publication date |
---|---|
CN102591914B (en) | 2015-02-25 |
CN102591914A (en) | 2012-07-18 |
US8423546B2 (en) | 2013-04-16 |
US20120143860A1 (en) | 2012-06-07 |
HK1172421A1 (en) | 2013-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8423546B2 (en) | Identifying key phrases within documents | |
Zhang | Effective and efficient semantic table interpretation using tableminer+ | |
US11550835B2 (en) | Systems and methods for automatically generating content summaries for topics | |
Wang et al. | A machine learning based approach for table detection on the web | |
US8073838B2 (en) | Pseudo-anchor text extraction | |
EP1622054B1 (en) | Phrase-based searching in an information retrieval system | |
Bhagavatula et al. | Methods for exploring and mining tables on wikipedia | |
US20170235841A1 (en) | Enterprise search method and system | |
US8005858B1 (en) | Method and apparatus to link to a related document | |
EP1622052B1 (en) | Phrase-based generation of document description | |
EP1622055B1 (en) | Phrase-based indexing in an information retrieval system | |
EP1622053B1 (en) | Phrase identification in an information retrieval system | |
US7580929B2 (en) | Phrase-based personalization of searches in an information retrieval system | |
US20070185868A1 (en) | Method and apparatus for semantic search of schema repositories | |
US7822752B2 (en) | Efficient retrieval algorithm by query term discrimination | |
Liu et al. | Configurable indexing and ranking for XML information retrieval | |
US20100198802A1 (en) | System and method for optimizing search objects submitted to a data resource | |
US8745062B2 (en) | Systems, methods, and computer program products for fast and scalable proximal search for search queries | |
Minkov et al. | Improving graph-walk-based similarity with reranking: Case studies for personal information management | |
CN101650729A (en) | Dynamic construction method for Web service component library and service search method thereof | |
US8682913B1 (en) | Corroborating facts extracted from multiple sources | |
Hassler et al. | Searching XML Documents–Preliminary Work | |
Khashfeh et al. | A Text Mining Algorithm Optimising the Determination of Relevant Studies | |
Ionescu et al. | Syntactic indexes for text retrieval | |
Hold et al. | ECIR-A Lightweight Approach for Entity-Centric Information Retrieval. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GHERMAN, SORIN;MUKERJEE, KUNAL;REEL/FRAME:029966/0018 Effective date: 20101202 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0541 Effective date: 20141014 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |