US20050055340A1 - Neural-based internet search engine with fuzzy and learning processes implemented by backward propogation - Google Patents
- Publication number
- US20050055340A1 (application US10/664,787; also published as US66478703A)
- Authority
- US
- United States
- Prior art keywords
- search
- data
- recited
- rules
- search engine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
- 238000000034 method Methods 0.000 title claims abstract description 75
- 230000001537 neural effect Effects 0.000 title description 29
- 230000008569 process Effects 0.000 title description 16
- 238000012545 processing Methods 0.000 claims abstract description 68
- 238000013528 artificial neural network Methods 0.000 claims abstract description 51
- 230000006870 function Effects 0.000 claims description 29
- 230000007246 mechanism Effects 0.000 claims description 23
- 238000010801 machine learning Methods 0.000 claims description 11
- 230000006978 adaptation Effects 0.000 claims description 8
- 230000008713 feedback mechanism Effects 0.000 claims description 8
- 230000004044 response Effects 0.000 claims description 8
- 238000012549 training Methods 0.000 claims description 8
- 230000006399 behavior Effects 0.000 claims description 7
- 238000012216 screening Methods 0.000 claims description 5
- 230000004913 activation Effects 0.000 claims description 4
- 238000013500 data storage Methods 0.000 claims description 4
- 230000003213 activating effect Effects 0.000 claims 2
- 238000011156 evaluation Methods 0.000 claims 1
- 230000006872 improvement Effects 0.000 claims 1
- 238000004422 calculation algorithm Methods 0.000 description 27
- 230000002068 genetic effect Effects 0.000 description 21
- 238000012552 review Methods 0.000 description 15
- 230000008901 benefit Effects 0.000 description 14
- 238000003909 pattern recognition Methods 0.000 description 9
- 241000239290 Araneae Species 0.000 description 8
- 210000005036 nerve Anatomy 0.000 description 8
- 238000004891 communication Methods 0.000 description 6
- 230000014509 gene expression Effects 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 5
- 230000009193 crawling Effects 0.000 description 4
- 238000013480 data collection Methods 0.000 description 4
- 230000003044 adaptive effect Effects 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 3
- 210000000481 breast Anatomy 0.000 description 3
- 238000007418 data mining Methods 0.000 description 3
- 238000013461 design Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000010304 firing Methods 0.000 description 3
- 230000036541 health Effects 0.000 description 3
- 208000001613 Gambling Diseases 0.000 description 2
- 230000009471 action Effects 0.000 description 2
- 230000003466 anti-cipated effect Effects 0.000 description 2
- 210000003050 axon Anatomy 0.000 description 2
- 210000004556 brain Anatomy 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 230000002452 interceptive effect Effects 0.000 description 2
- 238000012886 linear function Methods 0.000 description 2
- 230000008520 organization Effects 0.000 description 2
- 102000016550 Complement Factor H Human genes 0.000 description 1
- 108010053085 Complement Factor H Proteins 0.000 description 1
- 241000282412 Homo Species 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 210000003484 anatomy Anatomy 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000005352 clarification Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 235000014510 cooky Nutrition 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000004870 electrical engineering Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000007429 general method Methods 0.000 description 1
- 230000008570 general process Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000003058 natural language processing Methods 0.000 description 1
- 230000008904 neural response Effects 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 230000000737 periodic effect Effects 0.000 description 1
- 230000006798 recombination Effects 0.000 description 1
- 238000005215 recombination Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000010845 search algorithm Methods 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
- 238000005303 weighing Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B6/00—Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
- G02B6/24—Coupling light guides
- G02B6/26—Optical coupling means
- G02B6/28—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals
- G02B6/293—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
- G02B6/29304—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means operating by diffraction, e.g. grating
- G02B6/29316—Light guides comprising a diffractive element, e.g. grating in or on the light guide such that diffracted light is confined in the light guide
- G02B6/29317—Light guides of the optical fibre type
- G02B6/29319—With a cascade of diffractive elements or of diffraction operations
- G02B6/2932—With a cascade of diffractive elements or of diffraction operations comprising a directional router, e.g. directional coupler, circulator
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B6/00—Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
- G02B6/02—Optical fibres with cladding with or without a coating
- G02B6/02057—Optical fibres with cladding with or without a coating comprising gratings
- G02B6/02076—Refractive index modulation gratings, e.g. Bragg gratings
- G02B6/02195—Refractive index modulation gratings, e.g. Bragg gratings characterised by means for tuning the grating
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B6/00—Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
- G02B6/24—Coupling light guides
- G02B6/26—Optical coupling means
- G02B6/28—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals
- G02B6/293—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
- G02B6/29379—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means characterised by the function or use of the complete device
- G02B6/29392—Controlling dispersion
- G02B6/29394—Compensating wavelength dispersion
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B6/00—Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
- G02B6/24—Coupling light guides
- G02B6/26—Optical coupling means
- G02B6/28—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals
- G02B6/293—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
- G02B6/29379—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means characterised by the function or use of the complete device
- G02B6/29395—Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means characterised by the function or use of the complete device configurable, e.g. tunable or reconfigurable
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B10/00—Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
- H04B10/25—Arrangements specific to fibre transmission
- H04B10/2507—Arrangements specific to fibre transmission for the reduction or elimination of distortion or dispersion
- H04B10/2513—Arrangements specific to fibre transmission for the reduction or elimination of distortion or dispersion due to chromatic dispersion
- H04B10/2519—Arrangements specific to fibre transmission for the reduction or elimination of distortion or dispersion due to chromatic dispersion using Bragg gratings
Definitions
- the instructions for searching for specific information over a large network with a limited data set, such as a single institutional site, may have different structural and architectural characteristics than instructions for searching over a nearly indefinite number of Internet pages. Attempts to organize this information may be the product of many interdisciplinary technologies, ranging from library science to electrical engineering to archival taxonomy.
- Google® owns other technology related to data searching techniques.
- these results are subject to “statistical” problems, although it may require an immense “effort” on the part of any single unsavory entity to intentionally skew such data in its favor.
- FIGS. 1 A-C illustrate some of the various searching techniques used by this entity.
- scoring techniques for finding relevant documents can learn only by statistical inference and connectivity, and require manual detection of manipulations or irregularities. For example, many URLs can point to a single site or page, which can skew the "popularity" statistic. Furthermore, the notion of "relevance" in looking for a document begs the question: "relevant to whom?" The above-described methods may be useful for persons looking for the result "relevant" to a majority of people, or even to a well-defined subset of persons. However, users with unusual profiles or searching techniques may be excluded from effectively using these methods to look for relevant documents over the Internet. The importance of relative criteria in searching the Internet for relevant information is not just a philosophical question, but lends itself to very practical concerns about the heuristics of the search.
- Neural networks are both a conceptual framework and a practical computing application developed in the attempt to teach computers how to model brain functioning (or other biological models) in the areas of pattern recognition of speech and vision processing.
- the concept of neural network computing originally applied to pattern recognition studies.
- the concept of neural computing requires that “rules” generated by a high level structure (such as a brain) are implemented at the “nerve” level (or the data input) to process the incoming data properly.
- Training mechanisms for neural networks used over the Internet to analyze financial market data are described in U.S. Pat. No. 6,247,001, entitled "Method of Training a Neural Network," by Tresp et al., currently assigned to Siemens of Munich, Germany, and hereby incorporated by reference.
- Genetic algorithms are components of larger computing solutions (i.e., a larger algorithm) that are usually able to adapt and combine with other algorithms. Genetic algorithms are known to those skilled in the art for various purposes, and their description may be referenced in any number of textbooks on the subject, including An Introduction to Genetic Algorithms, by Melanie Mitchell (MIT Press 1996), which is hereby incorporated by reference for purposes of teaching the implementation of genetic algorithms or components. Such algorithms are also taught in U.S. Pat. No. 6,182,057, which is hereby incorporated by reference.
- Bayesian logic is also referred to as fuzzy logic, which has been the focus of many types of intelligence-based computing for a couple of decades.
- fuzzy logic is a technique for defining members of sets based on contingent and relative variables. Fuzzy logic therefore plays crucial roles in machine learning techniques where adaptation is required.
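The patent does not give a concrete membership function, but graded, contingent set membership of this kind is commonly encoded with a trapezoidal membership function. The sketch below is illustrative only; the thresholds and the "long article" example are invented, not taken from the patent:

```python
def trapezoid_membership(x, a, b, c, d):
    """Degree of membership in a fuzzy set bounded by a trapezoid.

    Returns 0.0 outside [a, d], 1.0 on the plateau [b, c], and a linear
    ramp in between -- one common way to encode set membership that is
    contingent on relative variables rather than a hard boundary.
    """
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# e.g., how strongly a document's word count makes it a "long article"
print(trapezoid_membership(1500, 500, 2000, 5000, 8000))  # partial membership
```

A crisp set would return only 0 or 1; the ramps are what let a learning mechanism adjust boundaries gradually.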
- the use of multiple intelligence computing techniques simultaneously has been discussed in the recent literature.
- the concept of neuro-fuzzy and/or fuzzy-neuro systems is discussed at length in Fuzzy Engineering Expert Systems with Neural Network Applications by A. B. Badiru and J. Y. Cheung (John Wiley & Sons, 2002) and in Soft Computing: Integrating Evolutionary, Neural and Fuzzy Systems, by A. Tettamanzi and M. Tomassini (Springer 2001).
- the present invention provides solutions to the above-listed shortcomings by providing adaptive structures, such as fuzzy logic and genetic algorithms or modules to a neural network architecture in order to improve the capacity and trainability of the neural network for computing a relevant search result based on a large set of search criteria.
- adaptive structures such as fuzzy logic and genetic algorithms or modules
- the system of the present invention can process information that would normally be too computationally complex to resolve.
- the present invention is particularly effective at minimizing the organization and processing of massive amounts of data in order to find appropriate resources (i.e., documents or pages) in response to a search inquiry.
- One of the advantages of the present invention is that particular rules and applications may be applied at several different levels to reduce search and computing time.
- the fuzzy neurode implements two complementary technologies at the lowest level and may prevent the processing of massive amounts of irrelevant information at the computational level.
- the adaptive genetic components may detect particular successful or unsuccessful searching configurations of the neural network and combine with other searching configurations where similar patterns have been detected.
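This detect-and-recombine behavior can be roughly illustrated by modeling a search configuration as a list of weights and recombining the best-performing ones by single-point crossover. The `crossover` and `evolve` functions and the fitness interface below are hypothetical stand-ins for the patent's mechanism, not its actual implementation:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover of two search configurations (weight lists)."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def evolve(population, fitness, keep=2):
    """Keep the configurations with the best observed search fitness and
    refill the population by recombining the survivors, so successful
    searching patterns propagate into new configurations."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:keep]
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(len(population) - keep)]
    return survivors + children
```

Here `fitness` would be supplied by whatever feedback signal judges a search configuration successful (e.g., user acceptance of returned results).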
- fuzzy logic and computation rules based on prior search results, user and situational data and manual or automated feedback mechanisms serve to teach the intelligence components of the present invention more efficient and accurate searching mechanisms.
- FIGS. 1 A-C represent prior art examples of search engine techniques based on document scoring systems, document accesses, or links.
- FIG. 2 is a diagram of a prior art web crawling and data collection system that may be implemented by the present invention.
- FIG. 3A depicts an overview of the present invention.
- FIG. 3B represents the present invention, with a virtually duplicated data resource system.
- FIG. 4 shows the representative connections between the data resource system and the search processing system.
- FIG. 5 depicts the components of the search processing system.
- FIG. 6 depicts the search processing system with user inputs and outputs.
- FIG. 7 illustrates a conceptual model of the input system for an embodiment of the present invention.
- FIG. 8 illustrates the components of the input system for an embodiment of the present invention.
- FIG. 9 outlines a general method for operation of the present invention in a first embodiment.
- FIG. 10 is a more detailed method of the implementation of the invention for generating a search processing result.
- FIG. 11A is a simplified model of three inputs.
- FIG. 11B shows a neurode input as a summation device.
- FIG. 11C shows a neurode input as a logic gate and scoring device.
- FIG. 11D shows a neurode acting as a threshold input device.
- FIG. 12A illustrates details of a simplified input system as shown in FIG. 7
- FIG. 12B illustrates the input system in FIG. 8 with the addition of fuzzy logic and rules application connections at the input level.
- FIG. 13 illustrates a relationship between input and function levels in one embodiment of the invention in which the neurode is configured by an expert rule or fuzzy logic such that it is a filter.
- FIG. 14 illustrates a weighting of a neurode at the non-linear function or output level.
- FIG. 15 illustrates a fuzzy connection at both the data input and function input levels.
- FIG. 16 represents the method of applying a fuzzy logic at one or more levels in the neurally processed search.
- FIG. 17 represents a function node with 6 binary inputs, giving 64 possible states.
- FIG. 18 represents 4 input neurodes with 4 different types of inputs.
- FIG. 19 represents a function node for processing 4 inputs of different types into standardized information inputs.
- FIG. 20 represents the activation of the search processing system at high level by parametric or user data inputs by the expert rule module after processing input.
- FIG. 21A is a sample search query activation of expert rules.
- FIG. 21B is a highly simplified portion of a lookup table used to define and implement rule systems.
- FIG. 21C is a lookup table used to activate a set of expert rules based on a search query in combination with user or parametric data.
- FIG. 22 is an example of a method for training the search processing system, by recording and adjusting the fuzzy logic determination of the weights on the neural input.
- FIG. 23 is an example in one embodiment of delivering a search result and a learning mechanism with the present invention in five sample stages.
- FIG. 24A is a sample screen of a set of returned relevant results.
- FIG. 24B is an example of training the invention through a feedback mechanism of recording users' actions after returning a result.
- FIG. 24C is a sample user survey to adjust expert rules.
- FIG. 24D is an example of training the invention through an automated feedback review mechanism.
- FIG. 25A shows a genetic algorithm system as implemented in the present invention.
- FIG. 25B depicts a modified algorithm being implemented by the expert rule module in response to an inadequate search return.
- FIG. 25C shows an example of genetic algorithm recombination in the present invention in response to an inadequate search return.
- FIG. 26 is a method for adapting and recombining a genetic algorithm.
- FIG. 27 is a simplified example of the present invention adapting to change search techniques based on updated user and parametric data.
- FIG. 28 is an example of multiple learning adjustments leading to an equilibrium for a document character detector in a neural network.
- FIG. 29 is an example of returning a search result by a pattern recognition computation technique.
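FIGS. 11B-D above describe the neurode variously as a summation device, a logic gate, and a threshold input device. A minimal sketch of those three roles (function names and weights are illustrative, not from the patent):

```python
def neurode_sum(inputs, weights):
    """FIG. 11B style: the neurode as a weighted summation device."""
    return sum(x * w for x, w in zip(inputs, weights))

def neurode_gate(inputs):
    """FIG. 11C style: the neurode as a logic gate -- here an AND
    over binary inputs, passing a signal only when all inputs fire."""
    return int(all(inputs))

def neurode_threshold(inputs, weights, threshold):
    """FIG. 11D style: fire (1) only when the weighted sum of the
    inputs crosses a threshold, otherwise stay silent (0)."""
    return 1 if neurode_sum(inputs, weights) >= threshold else 0
```

In the patent's architecture the weights and thresholds would be the quantities adjusted by the fuzzy logic and feedback mechanisms during training.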
- the present invention takes advantage of a virtual or actual neural network data searching system, combined with the additional artificial intelligence techniques of using expert rules and fuzzy logic, in search operations conducted over a large body of data collected from the Internet or other WAN.
- the present invention takes advantage of the power of the neural network in order to process higher level searching constructs instead of simple inputs.
- the system can provide many advantages in providing accuracy and customization.
- the present invention must be able to access a large pool of data collected from the Internet. Because such large pools of data are commercially available, it is expected that, in a preferred embodiment of the invention, this data is purchased from a third party.
- In FIG. 2, a sample metadata collection system is shown.
- the shown system and method involved in "crawling" through Internet servers for data is covered by several types of technologies, which, for example, are described in U.S. Pat. No. 6,434,548, entitled "Distributed Metadata Searching System and Method," by Emens et al., currently assigned to International Business Machines of Armonk, N.Y. This document is hereby incorporated by reference for all purposes.
- the present invention allows for the generation of pools of data by the search system.
- the advantage of generating the data within the system is that the data may be categorized in the most efficient manner to the user.
- the search processing system may include or use a set of one or more Internet or Web servers 25 ( i ) . . . 25 ( n ) connected to the Internet or other WAN 20 via various communication channels 30 ( i ) . . . 30 ( n ), which include T1, Ethernet, cable, phone and modem, and other telecommunications protocols.
- the search processing system 10 may include or use a set of one or more “web crawler” and/or data resource module(s) 50 .
- These data resource module(s) 50 include a crawling and/or processing unit 60 and data storage unit(s) 70 for massive amounts of data, and generally enough computing resources to sort data from the crawler systems. These data resource module(s) 50 may be of the type described above and referred to in FIG. 2 .
- the data is purchased from one or more vendors of amalgamated (and optionally categorized) web crawling data. These may include Inktomi, Google and other such vendors.
- the data resource module(s) 50 are accessed by a search processing system 100 through a series of actual and virtual connections 55 which may be through any number of communications links such as T1, Ethernet, DSL, etc. However, in alternate embodiments of the invention, this access may be virtual where the data is simply duplicated in a more accessible location, such as where the search processing system 100 is located.
- the virtual duplication 50 ′ of the data resource module 50 in another location is shown in FIG. 3B .
- the advantage of the virtual duplication of the data resource module is that the connection for movement of massive amounts of data may be through an internal computer bus or other fast connection 90 instead of an external communications system 55 , such as T1 or other virtual connection.
- In FIG. 4, a detailed view is shown of the multimodal AI search processor 100 (herein, the search processing system) connected to the data resource module(s) or collection system(s) 50 .
- the data resource module(s) 50 have large amounts of document data stored on one or more large computer storage units 70 .
- Connections 210 ( i ) . . . 210 ( n ) may be virtual or physical in nature, but are represented as separate “nerves” in order to illustrate the computational architecture of the invention.
- the group of nerve connections 210 ( i ) . . . 210 ( n ) is depicted as included in the "nerve sheath" 200 .
- Virtual parts of the search processing system 100 include a neural network processor 120 , an expert rules module 140 , a fuzzy logic module 160 and an interface 180 .
- the search processing system 100 generally is responsible for the computation of search results based on the input data.
- the components of the search processing system are stored and implemented on at least one computation device 102 , which will most likely have storage or access to storage of a variety of different types.
- the details of the one or more computation device 102 on which the search processing system 100 is implemented are not particularly important to the present invention unless there are details which would affect the performance of many of the inventive steps and structure which are described below.
- It can be assumed that all the components of the search processing system 100 are executable on the one or more computational devices 102 and that data and instructions between components and modules of the system 100 are shared through communication mechanisms included in the computational devices 102 . These can be internal busses, external communication structures such as T1, Ethernet, or wireless LAN, virtual data sharing, internal or external parameter passing in programming languages, and access to common internal or external databases, among other communication and/or data sharing mechanisms.
- the search processing system 100 accesses parametric control data 510 ( i ). . . 510 ( n ) that is entered into or accessed by the search processing system 100 through an interface 180 .
- the parametric control data 510 may be placed into the system by an administrator, or by a user of the system. Parametric control data may be stored and accessed by a control center in the interface 180 , in another embodiment of the invention.
- An input search query 300 allows a user or another computer to enter a set of one or more search terms or criteria.
- A set of virtual connections 210 ( i ) . . . 210 ( n ) to the neural network processor 120 is shown.
- the search is “processed” by the three computation modules, the neural network processor 120 , the expert rules module 140 and the fuzzy logic module 160 , to give a search result through an output 400 connected to the interface 180 .
- FIG. 7 depicts the general operational and structural concepts of the present invention.
- the search processing system 100 receives data from a set of low-level input nodes 105 via the function (nonlinear in most embodiments) nodes 115 .
- the search processing system provides feedback via a feedback mechanism 102 to both the data input level 105 and function processing level 115 in order to effectively regulate the data searching system.
- These two levels, 105 and 115 , are shown because of the potential benefit of using multiple levels of neural input for organizational purposes. However, they are shown for clarification purposes only: the levels may be collapsed, and may be one and the same, in an embodiment of the invention where there is no need for multi-level processing.
- FIG. 8 shows a more detailed aspect of a particular embodiment of the invention.
- the “nerve sheath” 200 includes one or more neurode inputs 101 ( i ) . . . 101 ( n ) connected to an neural network node or function gate outputs 110 ( i ) . . . 110 ( n ) through a connection or axon 210 ( i ) . . . 210 ( n ).
- These structures are shown to be virtual as they may be implemented either virtually through software or implemented in various other software and hardware embodiments.
- Each neurode input 101 ( i ) . . . 101 ( n ) may be an executable program used by the search processing system 100 to gather information from the data collection system 50 , yet be connected through a single telecommunication connection 200 , such as Ethernet.
- the information may be passed to the search system 100 through one data packet or a stream of packets as may be appreciated by those skilled in the art, while the individual data used by each neurode input 101 ( i ) . . . 101 ( n ) is processed by the appropriate neurode.
- the neural network nodes 110 ( i ) . . . 110 ( n ) receive the appropriate information from the weighted neurode or set of neurodes via an "axon," even though the nodes 110 ( n ) may be part of the same executable instructions as the neurodes 101 ( n ) which gather the data.
- the discrete nature of these structures is useful in implementing the multiple AI processes involved in the present invention as may be appreciated by those skilled in the art.
- FIG. 8 details the invention with the implementation of the AI search modules, the expert rules module 140 and the fuzzy logic module 160 .
- the fuzzy logic module 160 may be connected to the neurode inputs 101 ( n ) and/or the network function gates 110 ( n ) through a virtual or real connection 165 and a fuzzy logic implementation module.
- the expert rules module 140 is connected to the multiple levels of input processing through virtual connection 145 and controlled by virtual rule application device 142 when appropriate rules have been activated.
- Step 1005 results in the generation of a data set relevant to search queries.
- FIGS. 2 and 3 describe the generation of the data set through the collection of data from the Internet 20 .
- the data set may be stored on the data resources devices 50 , 50 ′.
- the data may also be used to train various levels of the artificial intelligence modules in the search processing system 100 in step 1010 . However, other resources are used to train the search processing system beyond collected data. The specifics of the training will be described below.
- the search processing system 100 receives an input search query 300 through an interface 180 and generates a result via the AI in the search processor 100 in step 1100 .
- the generated result is returned to the user via output 400 in step 1190 and rules and heuristics in the AI data set and processes are then updated on a periodic (regular or special event) or real time basis in step 1200 .
- the updated processes will also be described below.
- FIG. 10 shows the basic steps in the generation of a search result 1100 through the search processing system 100 .
- Step 1105 requires the loading of the search terms into the search processor 100 .
- the relevant parameters (discussed above) are loaded into the search processor 100 if they have not been already loaded.
- In step 1115 , it is determined whether either the discernable search terms (S(i)-S(n)) or parameters (P(i)-P(n)) require the application of special expert rules included in the expert rule module 140 . If so, the appropriate expert rules are loaded and applied at the correct level in step 1120 .
- In step 1125 , it is determined whether fuzzy logic applies at any level to the search criteria or the parametric data. It is anticipated, however, that the fuzzy logic rules will have already been set to the relevant parametric data if that data has been previously accessed. If fuzzy logic rules apply to the search or parametric data, then the rules are loaded into the appropriate level where they are to be applied.
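The rule-activation check of steps 1115-1120 (and the lookup tables of FIGS. 21B-C) can be sketched as a dictionary keyed by (search-term category, parametric datum) pairs. The categories and rule names below are invented for illustration; the patent's actual tables are not reproduced here:

```python
# Hypothetical lookup table: (term category, parameter) -> expert rules.
# The gambling/screening examples echo topics mentioned in the patent,
# but the specific keys and rule names are assumptions.
EXPERT_RULES = {
    ("health", "minor_user"): ["screen_adult_content", "prefer_reviewed_sources"],
    ("health", "adult_user"): ["prefer_reviewed_sources"],
    ("gambling", "minor_user"): ["block_results"],
}

def activate_rules(search_terms, parameters):
    """Steps 1115-1120: collect every expert rule triggered by any
    (term, parameter) pairing found in the lookup table; terms or
    parameters with no entry simply activate nothing."""
    active = []
    for term in search_terms:
        for param in parameters:
            active.extend(EXPERT_RULES.get((term, param), []))
    return active
```

The loaded rules would then be applied "at the correct level," i.e., pushed down to the neurode inputs or function gates rather than run as a post-filter.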
- the relevant preliminary search result is then generated in step 1175 from the neural network 120 , which receives data from (or has already “learned” from) the low-level neural input in step 1150 .
- the preliminary search result is subject to a high-level modification from the expert rule 140 and fuzzy logic 160 modules in step 1178 .
- the search results are delivered to the user through the interface 180 .
- any data for learning instructions is generated in step 1190 .
- Learning by the search processor 100 is described in detail below. It is anticipated that step 1178 will become less necessary each time the learning instructions are generated in step 1190 .
- the application of expert rules and fuzzy logic at the low level in step 1150 saves considerable computational resources over applying them at higher levels.
- the process of generating a search result is described below but, as can be appreciated by those skilled in the art, may be executed in many different ways without departing from the spirit and scope of the invention.
- Parametric control (including user) data 510 ( i ) . . . 510 ( n ) will generally be macro level data that defines the behavior of the entire search engine.
- the parametric data may be based on an individual user's preferences or conditions that may be easily determined by the interface 180 . This data may include items entered by the user such as financial situation, content preferences, geographic location, etc.
- Automatic parametric data may include weather, stock market results, the particular user of the interface, detected inquiries to the user's credit card and any number of variables which may influence the manner in which the search may be conducted. The table below helps define one aspect of the present invention.
- Human review input such as design quality and content issues that may be computationally difficult to calculate from the neural network may be stored as data in each of the modules.
- the computer will be able to apply the human rules to its own learning generated from the data and will also learn other rules on its own.
- a pattern recognition algorithm may apply to a URL with a large amount of pop-up advertising even where that advertising is undetectable by the search system 100 .
- the common characteristics or “neural patterns” from the spider review will alert the system that such patterns correspond to those of the human-reviewed URL with a large amount of advertising.
- FIGS. 11 A-D show the functions of the neurodes at the data input level or neurode level 105 .
- FIG. 11A is a simple representation of three input neurodes responding to three different data characteristics.
- FIGS. 11B and 11C are more detailed representations of two simple neurode data input devices as they would be used in the present invention.
- FIG. 11B shows a neurode 101 ( 1 ) that inputs a “top level domain” (TLD) stimulus according to the level of the domain name. For example, for each level down the domain hierarchy, the neurode classifies one higher.
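The TLD-depth neurode just described can be sketched as follows; this is a minimal illustration with assumed names, not the patent's actual implementation.

```python
def tld_depth_signal(hostname: str) -> int:
    """Hypothetical neurode input per FIG. 11B: classify a hostname
    by its depth below the top-level domain, emitting a value one
    higher for each level down the domain hierarchy."""
    labels = hostname.strip(".").split(".")
    # "example.com" is one level below the TLD, "sub.example.com" two.
    return max(len(labels) - 1, 0)
```

A bare TLD scores 0, a name such as example.com scores 1, and deeper names score progressively higher.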
- the output signal (described below) can be in different formats and still be processed at the function 115 or computational 120 levels. However, the more uniform the inputs, the less the computational resources will be taxed.
- FIG. 11C shows another simple input at the neurode low level 105 , computed as a simple function of logical characteristics of the data.
- Neurode 101 ( 2 ) measures a “match” aspect of the search inquiry, such that the more “words” that match those with the data the stronger the input signal.
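A “match” neurode of this kind might be sketched as below; the normalization of the signal to [0, 1] is an assumption for illustration.

```python
def match_signal(query_terms: list, document_terms: list) -> float:
    """Hypothetical 'match' neurode per FIG. 11C: the more query
    words that match words in the data, the stronger the emitted
    input signal (here normalized to [0, 1])."""
    if not query_terms:
        return 0.0
    query = {t.lower() for t in query_terms}
    document = {t.lower() for t in document_terms}
    return len(query & document) / len(query)
```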
- FIG. 11D is another example in which a threshold or screening function occurs at input or output to the neural network processor 120 .
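The threshold or screening function of FIG. 11D can be sketched as a simple gate; the cutoff value is an assumed parameter.

```python
def threshold_gate(signal: float, cutoff: float = 0.5) -> float:
    """Hypothetical screening function per FIG. 11D: signals below
    the cutoff are suppressed at the input/output boundary so the
    neural network processor 120 never has to process them."""
    return signal if signal >= cutoff else 0.0
```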
- FIG. 12A illustrates how neurode input may be standardized for processing by the neural network processor 120 .
- the data is collected in the individual neurodes 101 ( 1 ), 101 ( 2 ) and 101 ( 3 ).
- the neurodes 101 ( n ) may conduct a low-level filtering, screening or processing as shown in FIGS. 11 B-D, but also may have a standardized or normalized output for processing purposes.
- the standardized or normalized output may occur at the low level 105 , or the function processing level 115 or at the neural network processor 120 .
- the function processing level may serve as a “boundary” or non-linear function through individual processors 110 ( n ).
- The general process of providing feedback through the expert rules module 140 or the fuzzy logic module 160 is shown in FIG. 12B .
- This embodiment depicts how the present invention can learn at the various processing levels to conduct a search more effectively, thereby improving how it processes future searches.
- the optional fuzzy logic translator 162 acts as a translator between the fuzzy logic module and the individual neurode 101 ( n ) or function 110 ( n ) inputs.
- the effect of the fuzzy logic translation on individual neurodes 101 ( n ) is depicted by weight 167 ( n ).
- the application of expert rules from the expert rule module 140 is applied in the same manner by input rule 147 ( n ) or output rule 143 ( n ) applicators.
- each of the inputs is standardized to one value of the set {0, 1, 2, 3} and as such will be easy for the neural network 120 to process.
- the function processing level thus receives inputs in a form it can readily process.
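Standardization onto the set {0, 1, 2, 3} might be sketched as below; the mapping from a raw range is an assumption for illustration.

```python
def standardize(raw: float, lo: float, hi: float) -> int:
    """Hypothetical normalizer: map a raw neurode output on the
    range [lo, hi] onto the standardized set {0, 1, 2, 3}, giving
    the neural network a uniform input alphabet."""
    if hi <= lo:
        raise ValueError("range must be non-empty")
    # Clamp out-of-range signals, then bucket into four levels.
    fraction = min(max((raw - lo) / (hi - lo), 0.0), 1.0)
    return min(int(fraction * 4), 3)
```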
- FIG. 13 represents an individual example of weighting/influencing at the individual neurode level through the weighting connections 167 ( n ).
- FIG. 14 represents weighting at the function gate level 163 ( n ).
- FIG. 15 depicts weighting at both the neurode 101 ( n ) and function gate 110 ( n ).
- the main thrust of providing low-level computing is both to save resources in compiling analysis on a large pool of data and to continually improve the low-level “intelligence” capabilities.
- FIGS. 13-15 represent various types of fuzzy neurodes as they may be implemented into the system of the present invention.
- the implementation of such specially adapted neurodes may save significant computational resources by implementing simple rules (for set inclusion mainly) at the low level inputs.
- simple rules for set inclusion mainly
- the computation needed to implement fuzzy logic rules for simple inputs may be executed by any number of computational devices or by a single computational device.
- FIG. 16 represents a simplified method 1160 for conducting a search in which the fuzzy logic is implemented at the input level 105 at the neurodes 101 ( n ). If the fuzzy logic applies to particular neurodes for parametric inputs, then the fuzzy logic module sends a signal to the input level 105 of the neurodes or the function gate level 115 to adjust the weights accordingly. For example, if the parametric input is good market conditions, the neurodes for higher risk investment opportunities (less reputation+investment, etc.) may be weighted more heavily and result in a match.
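The parametric weighting just described can be sketched as a lookup of per-neurode multipliers; the table contents and neurode names below are assumptions chosen to mirror the market-conditions example.

```python
# Hypothetical table: parametric condition -> per-neurode weight
# multipliers. Under good market conditions the higher-risk
# investment neurodes are weighted more heavily, as in the example.
FUZZY_WEIGHTS = {
    "market:good": {"high_risk": 1.5, "low_risk": 0.8},
    "market:poor": {"high_risk": 0.5, "low_risk": 1.2},
}

def adjust_weights(base: dict, parameter: str) -> dict:
    """Apply the fuzzy adjustment for one parametric input to the
    neurode input weights; unknown parameters change nothing."""
    rules = FUZZY_WEIGHTS.get(parameter, {})
    return {name: w * rules.get(name, 1.0) for name, w in base.items()}
```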
- The search processor can learn from more than parametric inputs: it also learns many other rules from human input matching, feedback provided by humans, human actions and machine learning. Furthermore, expert rules may always override any fuzzy logic inputs when the appropriate conditions are met.
- FIGS. 17-19 represent sample processing of states of neurode in various embodiments of the present invention.
- FIG. 17 shows the number of “states” which are computed in a neural network processor 120 . These states may be used in any computation for returning a search result. Thus, six on/off inputs give the neural processor 64 computable states (2 to the 6th power).
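The state arithmetic is simple combinatorics: n on/off inputs yield 2 to the n-th power distinct states, as a trivial sketch shows.

```python
def state_count(n_binary_inputs: int) -> int:
    """Number of distinct states the neural network processor can
    distinguish given n on/off inputs."""
    return 2 ** n_binary_inputs
```

Six binary inputs therefore give the processor 64 states to compute over.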
- FIG. 18 depicts a collection function neurode 110 ( n ) that collects multiple input types which may or may not be compatible. The multiple types of inputs can be standardized and/or normalized for neural processing.
- FIG. 19 depicts the standardization of multiple neurode 101 ( 1 ′) . . . 101 ( n ′) input types which are then processed as common binary inputs by non-linear processors at the function inputs 110 ( 1 ′) . . . 110 ( n ′).
- the neural network processor can handle many data types such as sets, numerics, strings, Booleans, etc.
- the standardization of input to the neural processor 120 is one manner in which the relevance determination may be advantageously computed in a particular embodiment.
- FIG. 20 shows how the parametric input 510 ( 1 ), 510 ( 2 ) may function at the “cortex” level to train the neural network processor 120 .
- The advantage of the fuzzy neural input is that computation is reduced by applying it at a low level while retaining the ability to apply fuzzy logic at a high level. For example, after the search query 300 is entered into the interface 180 , parametric data 510 ( 1 ), 510 ( 2 ) applies a rule in the expert rule module 140 such that if the preliminary answer provided by the neural network processor 120 satisfies R(1)>R(2), a set of fuzzy logic rules W(x, y) will apply a high-level fuzzy logic adjustment to the preliminary search result. If the preliminary answer satisfies R(1)<R(2), a second set of rules W(x′, y′) may optionally be applied in the fuzzy logic module 160 .
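The high-level rule dispatch above reduces to a comparison of the preliminary answers; a sketch with the rule-set labels taken from the example (the function name is an assumption):

```python
def pick_fuzzy_rules(r1: float, r2: float) -> str:
    """Hypothetical high-level expert rule: if the preliminary
    neural result ranks R(1) above R(2), apply fuzzy rule set
    W(x, y); otherwise apply the optional alternate set W(x', y')."""
    return "W(x, y)" if r1 > r2 else "W(x', y')"
```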
- Parameters may or may not be relevant in all cases; the network accommodates this and determines predictability in a massively parallel distribution of search knowledge.
- the search processing ignores one or more pieces of parametric data 510 ( n ) based on search criteria and applied rules in the expert rule module 140 .
- This acts as a pre-search fuzzy set, i.e. the set of parameters used in the search is limited by the “category” of the search.
- FIG. 21A results in three expert rules applying, via a simple rule lookup on the two (or greater) dimensional table 990 in FIG. 21B .
- FIG. 21C depicts how user or parametric data 510 ( n ) affects the rule lookup.
- FIG. 22 is a flow chart that describes the machine learning process 1200 by which the search processing system 100 arrives at a learned rule for improving the searching technique. After the search query is run in step 1100 , the fuzzy logic weight assignments for the inputs are recorded.
- the neural network processor 120 allows for the inclusion of personal data, such as previous consumer behavior, in the decision process to add predictive ability as to what a relevant search return would be.
- The use of personal data in determining relevancy is a key enhancement over the current art, which for the most part returns globally relevant results, and it also enhances the machine intelligence self-training. For example, a person in China searches for soy sauce and a restaurateur in Manhattan searches for soy sauce. The returns will be quite different because the search processing system 100 , via the neural network 120 , recognizes the important determinants of relevancy for each individual searcher from the application of the expert rules based on these parameters.
- the effect of the personal and parametric data 510 ( n ) may be processed at multiple levels: directly at the neural network “neurodes” (low level), at the function gates (mid level), or post neural processing (high level).
- In a low-level implementation, for example, the neurodes of the neural network that respond to “geographic” input would be “reweighted” based on the user's personal geography.
- the search processing system 100 of the present invention is melded with human review, spiders, genetic (sub)algorithms, fuzzy inference engines and expert systems which are comprised of sets of adaptable expert rules.
- The sets of rules applied by the expert rule module 140 may be preliminary global rules, which are rules still being adapted. There may also be global rules, which are the product of many adaptations and have been tested. Expert rules or subsystems may be implemented at multiple levels. An example of this is an expert (sub)system that determines the presence of documents which result in spam. An anti-spam parameter 510 ( n ) will result in the expert system being loaded into the fuzzy logic module and applied at a low-level input so that data on documents which result in spam is not processed by the search engine at the neural network level.
- FIG. 23 depicts one system for determining an appropriate match or document based on the scoring.
- a user's search inquiry 300 , “risk free bond funds,” is placed into the search engine interface 180 .
- the preloaded parameters 510 ( 1 ) and 510 ( 2 ) include market conditions (“average”) and current income (“$75,000”).
- the fuzzy logic module sends weighting instructions to neurode inputs N( 1 )-N( 4 ) based either on instructions from the search engine or on the relevant parameters 510 ( 1 ) and 510 ( 2 ).
- the search system 100 could associate the two parameters 510 ( 1 ), 510 ( 2 ) (market conditions and income) with search inquiry terms “risk-free” and “bond” or “funds.”
- association of information may lead to reduced processing time by eliminating nodal information that may not be particularly relevant. For example, in the automated spider review in Table 1.1, holiday input may not be particularly weighted with importance while searching for financial services (on the other hand, the expression Easter may be related to tax season).
- the machine learns that H( 4 ) is generally present when N( 3 ) and N( 4 ) signals are present, but N( 1 ) appears to be less relevant and no positive N( 2 ) data was returned which met the threshold condition.
- the search results may be presented to the user by high score and reputability in Part 4 .
- the search processing system “learns” that neurode input N( 3 ) and N( 4 ) are likely indicators of this human input attribute H( 4 ) and adapts such learning for the next appropriate search task and a preliminary global rule may be put into the expert rule module 140 .
- In step 5, for the next search of this type, the presence of N( 3 ) and N( 4 ) will be given larger weights, and N( 1 ) may be reduced in weight. Repeated learning of this type will thus nearly eliminate N( 1 ) as relevant. The system also eliminates N( 2 ) from the neural connection for the next search of this type.
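The weight adaptation in these steps can be sketched as a simple reinforcement update; the learning rate, dictionary representation, and function name are assumptions rather than the patent's algorithm.

```python
def update_weights(weights: dict, fired: set, confirmed: bool,
                   lr: float = 0.1) -> dict:
    """Hypothetical learning step: when a human-review attribute
    such as H(4) is confirmed, neurodes that fired with it (e.g.
    N(3), N(4)) are strengthened, while silent or unconfirmed
    neurodes (e.g. N(1), N(2)) are weakened toward elimination."""
    updated = {}
    for name, w in weights.items():
        if confirmed and name in fired:
            w = min(w + lr, 1.0)   # reinforce co-firing neurodes
        else:
            w = max(w - lr, 0.0)   # decay the rest toward removal
        updated[name] = round(w, 3)
    return updated
```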
- The above table illustrates the human input and expert rules as implemented in a particular embodiment of the present invention; it is only illustrative of the example.
- the expert rules are inflexible within a single search (intrasearch) as applied to the computation of the neural input, although these expert rules may be applied at the input node 105 or function gate level 115 as well. However, as shown above, the expert rules are clearly adaptable in the machine learning system of the invention.
- TABLE 1.3
Sample high-level fuzzy inferences
Fuzzy inference sets (each with a comment column): mild / moderate / severe; cool / warm; good / fair / bad; small / big; close / far; short / long.
- FIGS. 24 A-D show sample feedback mechanisms for learning.
- FIG. 24A is a sample screen of five returned search results with different levels of domain accessibility.
- FIG. 24B is a sample tracking method 1250 ( 1 ) for learning from the behavior of a user after the search result in FIG. 24A is provided.
- the search processor 100 temporarily stores the results and compares them against the user's behavior. Thus, if a user always chose the selection with a top-level domain, the system 100 would learn that the TLD score must be increased in weight for each subsequent search.
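The click-tracking feedback of FIG. 24B might be sketched as follows; the reinforcement factor of 1.1 and the function name are assumed for illustration.

```python
def learn_from_clicks(result_levels: dict, clicked: list,
                      tld_weight: float) -> float:
    """Hypothetical feedback step per FIG. 24B: if every result the
    user chose was a top-level domain, increase the weight of the
    TLD score for future searches."""
    top_level = [url for url in clicked
                 if result_levels.get(url) == "top-level"]
    if clicked and len(top_level) == len(clicked):
        tld_weight *= 1.1  # assumed reinforcement factor
    return round(tld_weight, 4)
```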
- In FIG. 24C , the user provides simple feedback after the search, and the results are processed by the learning system for application to the expert rule module 140 .
- In FIG. 24D , an automated machine learning mechanism is shown.
- the search expression “teenage” is used by a 13 year old to find relevant documents on “heartthrobs,” “hobbies” and “hangouts.”
- the filter for adult content is always on when this user is present.
- the search processing system did not return an adult flag for “hobbies.”
- the computer has learned that adult sites which key to the word “teenage” bury their documents 3 pages deep.
- In part 3, albeit too late for part 2, the machine “learns” that a score of “3” on the domain level necessitates a flag for adult content and blocks those pages. Thus, if part 2 happens after part 3, no adult content is returned.
- FIGS. 25 A-C are illustrative of simple examples of genetic algorithms as they may be adapted or combined as a result of machine learning. These algorithms may be global to the whole search system 100 or used by only one component. They are stored in virtual storage 198 on one or more computing machines 102 , such that they may be accessed by any component of the search system 100 .
- FIG. 25A illustrates an inadequate search for “dance clubs in rio,” which resulted in the application of algorithms A and C in the neural processor 120 and weighting rule W( 1 , 1 ) in the fuzzy logic module 160 .
- FIG. 25B shows the expert module 140 adapting A to A′ and storing it for use with C for the retry search.
- FIG. 25C shows the expert rule module combining B with C in the neural network processor 120 and applying the adapted weight rule W( 1 , 1 ′).
- FIG. 26 simply illustrates a method 1300 for applying the principles shown in FIGS. 25 A-C.
- FIG. 27 illustrates an example of a genetic component as it may be implemented in the present invention.
- genetic components have been described in the invention.
- the human review factor H( 4 ) or the authoritative component was matched to other neural scores N( 3 ) and N( 4 ).
- the genetic algorithm is a content blocking technique for family suitability based on the firing of a particular neurode.
- the parametric data includes that the user does a lot of health-related research or that a family member is sick 510 ( 1 ), but also is interested in keeping the search to family content 510 ( 2 ).
- the search includes the word “breast” or other search term, which could be used for both adult and non-adult content searching.
- the fuzzy logic module 160 has learned that the word “breast” should suppress the score of the adult content neural input. The weight given to such inputs is therefore inverted, and other information is made contingent upon the firing of the inverted neuron, so no information is returned for which the adult content tag fires.
- the adult content blocker is able to learn manually from human input or from machine learning that the adult content neurode is not accurate for this user's purposes.
- the genetic algorithm determines that a health related neural input or a human input of reputable site will negate the effects of the adult content blocker for the search term “breast.” However, the algorithm may apply to other search terms that will draw both adult and non adult content. Thus, the genetic algorithm will adapt the neural input in addition to combining with other search algorithms which may apply to a common category of expression for anatomical parts or the genetic component may adapt through a neural pattern recognition.
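The contingent negation described above can be sketched as a small gate over fired neurodes; the neurode names and function name are hypothetical.

```python
def adult_block(fired: set) -> bool:
    """Hypothetical genetic component: block a document when the
    adult-content neurode fires, unless a health-related neural
    input or a human 'reputable site' input negates the blocker
    (e.g. for an ambiguous search term such as 'breast')."""
    if "adult_content" not in fired:
        return False
    # A health-related signal or reputable-site review negates the block.
    return not ({"health_related", "reputable_site"} & fired)
```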
- FIG. 28 shows a table which depicts the equilibrium of a learning mechanism by which a preliminary global rule becomes a global rule.
- the expression “String” is evaluated such that each letter corresponds to a neurode with a weight of one.
- the initial search results indicated that the vowel position was less indicative of the relevant results (R 1 , R 2 , R 3 ), and the system thus reduces the 4th-letter neural weight by 0.25.
- In the second search, for “strung,” 75% of the results are similar to those of search 1 (R 2 , R 3 , R 4 ); thus the 4th-letter weight is reduced further by a factor of 0.75. The same happened on search 3, for “strang.”
- the 4 th position neurode is only a bit less than 1 (0.96), which indicates that the search “STR” X “NG” will result in the 4 th position being slightly less of a search input factor.
- the learning mechanism may have enough data on this search type that it makes the preliminary global rule of the weights, a global rule.
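The convergence in FIG. 28 can be sketched as a per-search decay on the vowel-position weight; since the patent's arithmetic is only roughly indicated, the decay rate below is an assumption chosen so that a few confirming searches leave the weight just under 1 (around 0.96).

```python
def decay_position_weight(weight: float, similarity: float,
                          rate: float = 0.02) -> float:
    """Hypothetical update for the 4th-letter neurode of 'STR?NG':
    each search whose results largely repeat the earlier ones (high
    similarity) shaves a little more weight off the vowel position,
    until the learning mechanism promotes the preliminary global
    rule to a global rule."""
    return round(weight * (1 - rate * similarity), 4)
```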
- Table 1.1 shows a series of example “spider level” searches which may be implemented as neural input from the data collection system 50 .
- Data collection services exist which may be purchased for use with the present invention, or the data may be generated and categorized differently than the examples in Table 1.2 provide.
- the 50 or so criteria described provide an example of the types of search criteria that would be processed as neural input adapted by the learning mechanisms.
- a “score” based on weights of the neural inputs would be one embodiment of the invention.
- the advantage of the present invention over the prior art is the fuzzy logic or expert rules that may be implemented at different levels.
- a weighted score based on the neural input may adapt by a number of fuzzy mechanisms at a number of computational points.
- a “score” generated from the result of a neural network is not, strictly speaking, an accurate description; moreover, pattern recognition (discussed below), which is the predominant computational solution in many neural networks, may not be appropriate either.
- FIG. 29 illustrates a method, in an embodiment of the present invention, of using a pattern recognition technique to find a search result from a search inquiry.
- the neural network processor 120 receives a pattern of inputs 125 .
- the processor 120 attempts to match it to previously recognized and stored 127 processes 124 , and when it finds a match it loads those search results and returns the expressions to the output 400 . If the search inquiry 300 ( 2 ) was different from the one which provided the previous pattern 125 , the search processing system then learns that the two search inquiries 300 ( 1 ), 300 ( 2 ) produce the same pattern.
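The pattern-matching shortcut of FIG. 29 can be sketched as a cache keyed by the input pattern; the class and method names are assumptions for illustration.

```python
class PatternMemory:
    """Hypothetical pattern store per FIG. 29: the processor matches
    an incoming input pattern against previously recognized and
    stored patterns; on a hit it reuses the stored search results,
    and it records that different inquiries producing the same
    pattern are equivalent."""

    def __init__(self):
        self.results = {}      # pattern -> stored search results
        self.inquiries = {}    # pattern -> inquiries that produced it

    def lookup(self, inquiry: str, pattern: tuple):
        # Record the inquiry so equivalent inquiries are learned.
        self.inquiries.setdefault(pattern, set()).add(inquiry)
        return self.results.get(pattern)

    def store(self, pattern: tuple, search_results: list):
        self.results[pattern] = search_results
```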
Abstract
The present invention provides an Internet search engine system and method that improves searching for documents or pages by processing the characteristics of a pool of data through a neural network governed by a set of rules and fuzzy logic applications. The rules and applications may be implemented at the input (or low) level or the computational/output (or high) level. Search terms and personal and situational data may activate various rule sets, and learning from human and machine feedback adjusts and recombines the rule sets to improve accuracy for future searches as well as reduce computation time.
Description
- This application claims priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application ______ filed Mar. 3, 2003, entitled NEURALLY-PROCESSED SEARCH ENGINE WITH FUZZY AND LEARNING PROCESSES IMPLEMENTED AT MULTIPLE LEVELS by Scott Dresden, which is hereby incorporated by reference in its entirety for all purposes.
- The increasing need for finding relevant data over the Internet has produced a number of categories of data searching techniques and technology over wide area networks and in particular the Internet. Many of these techniques are included in patents and publications provided by well known industry leaders in the Internet searching business including Google™, Northern Light®, and Inktomi® (used by Yahoo!®). Various aspects of these techniques will be discussed below.
- With more than 4 billion Internet sites in existence, the problem of developing an effective search engine is paramount. Even though some searching techniques may provide for effective cursory searching based on input terms, the information returned to the user may still be inadequate for guidance, because of the layers of information under an entrance page. For example, a large institution such as a government, corporation, or non profit organization may easily have more than 100,000 pages or documents on one single top-level domain uniform resource locator (URL) and at least a few thousand under a single sublevel.
- As can be appreciated by those skilled in the art, the instructions for searching for specific information over a large network with a limited data set, such as on a single institutional site may have different structural and architectural characteristics than instructions for searching over a nearly indefinite number of Internet pages. Attempts to organize this information may be the product of many interdisciplinary technologies ranging from library science to electrical engineering to archival taxonomy.
- One very popular method for data mining is the “scoring” method. Google, Inc. of Mountain View, Calif. has several published U.S. Patent Applications, including 2001/0123988 entitled “Methods and Apparatus for Employing Usage Statistics in Document Retrieval” by Dean et al. and 2001/0133481 entitled “Methods and Apparatus for Providing Search Results in Response to an Ambiguous Search Query.” Both of these patent applications are hereby incorporated by reference in order to illustrate the background of the present invention.
- As can be appreciated, one of the drawbacks of the “scoring” method is that, like any statistical method, it can be artificially “skewed” by either a disproportionate group of users or another manipulable technique. Mechanisms can be put into place to account for these factors, technological advances and otherwise “skewable” techniques. For example, U.S. Pat. No. 6,269,361 issued to Davis, et al. and assigned to GoTo.com of Pasadena, Calif. describes such a technique for influencing a place in the list of a search engine. As needed to detail the problem of influencing search results, this document is hereby incorporated by reference.
- Google® owns other technology related to data searching techniques. For example, a recently issued U.S. Pat. No. 6,526,440 entitled “Ranking Search Results by Reranking the Results Based on Local Interconnectivity” by Krishna Bharat teaches the use of connectivity to determine “relevance.” However, these results are subject to “statistical” problems, although it may require an immense “effort” on the part of any single unsavory entity to intentionally skew such data in its favor. For example, a single URL, used by an entity and of particular usefulness (i.e., relevance) to the majority of people, may be overtaken by an entity's URL that uses many different URLs to connect to that link, allowing manipulation by entities who may benefit from the use of “click-throughs,” mainly the sale of advertising space or pop-up screens. FIGS. 1A-C illustrate some of the various searching techniques used by such an entity.
- As such, scoring techniques for finding relevant documents can learn only by statistical inferences and connectivity and require a manual detection of manipulations or irregularities. For example, many URLs can point to a single site or page, which can skew the “popular” use of the statistic. Furthermore, it is assumed that “relevance” for looking for a document begs the question as to “whom is it relevant to?” The above-described methods may be useful for persons looking for the result “relevant” to a majority of people or even a well defined subset of persons. However, users with unusual profiles or searching techniques may be excluded from effectively using these methods in looking for relevant documents over the Internet. The importance of relative criteria in searching the Internet for relevant information is not just a philosophical question, but lends itself to very practical concerns about the heuristics of the search.
- There are other types of intelligent searching techniques that attempt to use principles of artificial intelligence as they apply to natural language processing. U.S. Pat. No. 6,430,551 by Thelen et al., assigned to Philips Electronics of the Netherlands, uses such pattern recognition techniques.
- Neural networks are both a conceptual framework and a practical computing application developed in the attempt to teach computers how to model brain functioning (or other biological models) in the areas of pattern recognition of speech and vision processing. The concept of neural network computing originally applied to pattern recognition studies. The concept of neural computing requires that “rules” generated by a high level structure (such as a brain) are implemented at the “nerve” level (or the data input) to process the incoming data properly. Training mechanisms for the use of neural networks over the Internet for use in analyzing financial market data include U.S. Pat. No. 6,247,001 entitled “Method of Training a Neural Network” by Tresp et al. currently assigned to Siemens of Munich Germany, and hereby incorporated by reference.
- Another adaptive intelligence mechanism applied to complex computing problems is the genetic algorithm. Genetic algorithms are components of larger computing solutions (i.e. a larger algorithm) that are usually able to adapt and combine in other algorithms. Genetic algorithms are known to those skilled in the art for various purposes, and their description may be referenced by any number of textbooks on the subject, including Introduction to Genetic Algorithms, by Melanie Mitchell (MIT Press 1996), which is hereby incorporated by reference for purposes of teaching the implementation of genetic algorithms or components. Such algorithms are also taught in U.S. Pat. No. 6,182,057, which is hereby incorporated by reference.
- Fuzzy logic, which is sometimes discussed alongside Bayesian logic, has been the focus of many types of intelligence-based computing for a couple of decades. In its most simplified form, fuzzy logic is a technique for defining members of sets based on contingent and relative variables. Fuzzy logic therefore plays a crucial role in machine learning techniques where adaptation is required. The use of multiple intelligence computing techniques simultaneously has been discussed in the recent literature. The concept of neuro-fuzzy and/or fuzzy-neuro systems is discussed at length in Fuzzy Engineering Expert Systems with Neural Network Applications by A. B. Badiru and J. Y. Cheung (John Wiley & Sons, 2002) and Soft Computing: Integrating Evolutionary, Neural and Fuzzy Systems, by A. Tettamanzi and M. Tomassini (Springer 2001). These two references are incorporated by reference in order to teach the various techniques of developing and configuring neural networks, fuzzy logic, genetic algorithms and expert systems in general. Some textual references have noted that neural networks may not be well suited to searching algorithm applications, mainly because neural network rules are implemented at low levels, which may be impractical with data input as complex as the natural language expressions typically used in an Internet search. Such a concept is discussed in Evolutionary Algorithms for Data Mining, by Alex Freitas, Springer, 1998, p. 4, which is hereby incorporated by reference.
- An example of the use of multiple artificial intelligence techniques over networks is described in U.S. Pat. No. 6,327,550 entitled “Method and Apparatus for System State Monitoring Using Pattern Recognition and Neural Networks” by Vinberg et al. and currently assigned to Computer Associates Think, Inc., of Islandia, N.Y. The Vinberg reference teaches the use of state vectors as they would be applied to networks. Other interactive multiple intelligence mechanisms are described in U.S. Pat. No. 5,249,259 (“Genetic Algorithms for Designing Neural Networks”) and U.S. Pat. No. 5,727,130 (“Genetic Algorithm for Constructing and Tuning Fuzzy Logic System”), neither of which teaches multiple interactive intelligence mechanisms for data mining over networks per se. Both of these documents are incorporated by reference. However, none of these multiple intelligence node systems is particularly well suited for use in a search processing system over the Internet to find relevant documents or pages.
- The present invention provides solutions to the above-listed shortcomings by providing adaptive structures, such as fuzzy logic and genetic algorithms or modules to a neural network architecture in order to improve the capacity and trainability of the neural network for computing a relevant search result based on a large set of search criteria. By allowing the search criteria to be processed in a neural network, the system of the present invention can process information that would normally be too computationally complex to resolve.
- The present invention is particularly effective at minimizing the organization and processing of massive amounts of data needed to find appropriate resources (i.e., documents or pages) in response to a search inquiry. One of the advantages of the present invention is that particular rules and applications may be applied at several different levels to reduce the search and computing time. For example, the fuzzy neurode implements two complementary technologies at the lowest level and may prevent the processing of massive amounts of irrelevant information at the computational level. The adaptive genetic components may detect particularly successful or unsuccessful searching configurations of the neural network and combine with other searching configurations where similar patterns have been detected. Finally, fuzzy logic and computation rules based on prior search results, user and situational data, and manual or automated feedback mechanisms serve to teach the intelligence components of the present invention more efficient and accurate searching mechanisms.
- The present invention can be better understood by the following diagrams and illustrations. However, as can be appreciated by those skilled in the art, the components of the present invention may be implemented in a variety of forms including virtual and physical as well as implementing what appear in the drawings as single units on multiple computing devices. Thus, the drawings are not meant to be limiting, but are provided for better understanding of the components and the interactions between the components.
- FIGS. 1A-C represent prior art examples of search engine techniques based on document scoring systems, document accesses or links.
-
FIG. 2 is a diagram of a prior art web crawling and data collection system that may be implemented by the present invention. -
FIG. 3A depicts an overview of the present invention. -
FIG. 3B represents the present invention, with a virtually duplicated data resource system. -
FIG. 4 shows the representative connections between the data resource system and the search processing system. -
FIG. 5 depicts the components of the search processing system. -
FIG. 6 depicts the search processing system with user inputs and outputs. -
FIG. 7 illustrates a conceptual model of the input system for an embodiment of the present invention. -
FIG. 8 illustrates the components of the input system for an embodiment of the present invention. -
FIG. 9 outlines a general method for operation of the present invention in a first embodiment. -
FIG. 10 is a more detailed method of the implementation of the invention for generating a search processing result. -
FIG. 11A is a simplified model of three inputs. -
FIG. 11B shows a neurode input as a summation device. -
FIG. 11C shows a neurode input as a logic gate and scoring device. -
FIG. 11D shows a neurode acting as a threshold input device. -
FIG. 12A illustrates details of a simplified input system as shown in FIG. 7. -
FIG. 12B illustrates the input system in FIG. 8 with the addition of fuzzy logic and rules application connections at the input level. -
FIG. 13 illustrates a relationship between input and function levels in one embodiment of the invention in which the neurode is configured by an expert rule or fuzzy logic such that it is a filter. -
FIG. 14 illustrates a weighting of a neurode at the non-linear function or output level. -
FIG. 15 illustrates a fuzzy connection at both the data input and function input levels. -
FIG. 16 represents the method of applying fuzzy logic at one or more levels in the neurally processed search. -
FIG. 17 represents a function node with 6 binary inputs and 64 states. -
FIG. 18 represents 4 input neurodes with 4 different types of inputs. -
FIG. 19 represents a function node for processing 4 inputs of different types into standardized information inputs. -
FIG. 20 represents the activation of the search processing system at a high level by parametric or user data inputs via the expert rule module after processing input. -
FIG. 21A is a sample search query activation of expert rules. -
FIG. 21B is a highly simplified portion of a lookup table used to define and implement rule systems. -
FIG. 21C is a lookup table used to activate a set of expert rules based on a search query in combination with user or parametric data. -
FIG. 22 is an example of a method for training the search processing system, by recording and adjusting the fuzzy logic determination of the weights on the neural input. -
FIG. 23 is an example in one embodiment of delivering a search result and a learning mechanism with the present invention in five sample stages. -
FIG. 24A is a sample screen of a set of returned relevant results. -
FIG. 24B is an example of training the invention through a feedback mechanism of recording users' actions after returning a result. -
FIG. 24C is a sample user survey to adjust expert rules. -
FIG. 24D is an example of training the invention through an automated feedback review mechanism. -
FIG. 25A shows a genetic algorithm system as implemented in the present invention. -
FIG. 25B depicts a modified algorithm being implemented by the expert rule module in response to an inadequate search return. -
FIG. 25C shows an example of genetic algorithm recombination in the present invention in response to an inadequate search return. -
FIG. 26 is a method for adapting and recombining a genetic algorithm. -
FIG. 27 is a simplified example of the present invention adapting to change search techniques based on updated user and parametric data. -
FIG. 28 is an example of multiple learning adjustments leading to an equilibrium for a document character detector in a neural network. -
FIG. 29 is an example of returning a search result by a pattern recognition computation technique. - The present invention takes advantage of a virtual or actual neural network data searching system combined with the additional artificial intelligence techniques of using expert rules and fuzzy logic in search operations conducted over a large body of data collected from the Internet or other WAN. The present invention takes advantage of the power of the neural network in order to process higher level searching constructs instead of simple inputs. By processing data through a neural network on complex searching constructs, the system can provide many advantages in accuracy and customization.
- The present invention must be able to access a large pool of data collected from the Internet. Because these large pools of data are commercially available, it is expected that in a preferred embodiment of the invention this data is purchased from a third party. Referring now to
FIG. 2 , a sample metadata collection system is shown. The system and method involved in “crawling” through Internet servers for data are covered by several types of technologies, which, for example, are included in U.S. Pat. No. 6,434,548 entitled “Distributed Metadata Searching System and Method” by Emens et al., and currently assigned to International Business Machines of Armonk, N.Y. This document is hereby incorporated by reference for all purposes. In an alternate embodiment, the present invention allows for the generation of pools of data by the search system. The advantage of generating the data within the system is that the data may be categorized in the most efficient manner for the user. - Referring now to
FIG. 3A , a diagram of the invention is shown as it may be implemented in one embodiment for a system for intelligent searching of documents and URLs on the Internet. The search processing system may include or use a set of one or more Internet or Web servers 25(i) . . . 25(n) connected to the Internet or other WAN 20 via various communication channels 30(i) . . . 30(n), which include T1, Ethernet, cable, phone/modem and other telecommunications connections. The search processing system 10 may include or use a set of one or more “web crawler” and/or data resource module(s) 50. These data resource module(s) 50 include a crawling and/or processing unit 60 and data storage unit(s) 70 for massive amounts of data and generally enough computing resources to sort data from the crawler systems. These data resource module(s) 50 may be of the type described above and referred to in FIG. 2. Once again, in a preferred embodiment of the invention the data is purchased from one or more vendors of amalgamated (and optionally categorized) web crawling data. These may include Inktomi, Google and other such vendors. - The data resource module(s) 50 are accessed by a
search processing system 100 through a series of actual and virtual connections 55, which may be through any number of communications links such as T1, Ethernet, DSL, etc. However, in alternate embodiments of the invention, this access may be virtual, where the data is simply duplicated in a more accessible location, such as where the search processing system 100 is located. The virtual duplication 50′ of the data resource module 50 in another location is shown in FIG. 3B. The advantage of the virtual duplication of the data resource module is that the connection for movement of massive amounts of data may be through an internal computer bus or other fast connection 90 instead of an external communications system 55, such as T1 or other virtual connection. - Referring now to
FIG. 4 , a detailed view is shown of the multi-modal AI search processor 100 (herein search processing system) connected to the data resource module(s) or collection system(s) 50. The data resource module(s) 50 have large amounts of document data stored on one or more large computer storage units 70. Connections 210(i) . . . 210(n) may be virtual or physical in nature, but are represented as separate “nerves” in order to illustrate the computational architecture of the invention. The depicted group of nerve connections 210(i) . . . 210(n) is included in the “nerve sheath” 200. - Referring now to
FIG. 5 , an intelligent search processing system 100 is shown. Virtual parts of the search processing system 100 include a neural network processor 120, an expert rules module 140, a fuzzy logic module 160 and an interface 180. The search processing system 100 generally is responsible for the computation of search results based on the input data. - The above structures are described as virtual structures even though they may be physically embodied in a specific device or in separate computer readable media. As can be appreciated by those skilled in the art, the modular descriptions of the various structures or components allow for an understanding of the computational architecture of one of the embodiments of the invention. Furthermore, there is no requirement that any one module be executed by a single computer or that all the modules be on the same computer. Neural network processing often benefits from parallel processing, which can include parallel processing on one device or multiple devices. In fact, throughout the specification the structures may be implemented in a virtual fashion. Those skilled in the art will readily recognize that there will be advantages to various implementations of the present invention. For the sake of simplicity, in a first embodiment and the examples illustrated, all the modules will be located and executed on a single computational device.
- Although it will not be discussed further, the components of the search processing system are stored and implemented on at least one
computation device 102, which will most likely have storage or access to storage of a variety of different types. The details of the one ormore computation device 102 on which thesearch processing system 100 is implemented are not particularly important to the present invention unless there are details which would affect the performance of many of the inventive steps and structure which are described below. It can be assumed that all the components of thesearch processing system 100 are executable on the one or morecomputational devices 102 and that data and instructions between components and modeules of thesystem 100 are shared through communication mechanisms included in thecomputational devices 102. These can be internal busses, external communication structures such as T1, Ethernet, wireless LAN, virtual data sharing, internal or external parameter passing in programming languages, access to a common internal or external databases among other communication and/or data sharing mechanisms. - Referring now to
FIG. 6 , a more detailed illustration of the intelligent search processing system 100 is shown in which internal and external data interacts with the search processing system 100. The search processing system 100 accesses parametric control data 510(i) . . . 510(n) that is entered into or accessed by the search processing system 100 through an interface 180. The parametric control data 510 may be placed into the system by an administrator, or by a user of the system. Parametric control data may be stored and accessed by a control center in the interface 180, in another embodiment of the invention. An input search query 300 allows a user or another computer to enter a set of one or more search terms or criteria. The nerve sheath 200 including the individual “nerve connections” 210(i) . . . 210(n) to the neural network processor 120 is shown as a set of virtual connections. The search is “processed” by the three computation modules, the neural network processor 120, the expert rules module 140 and the fuzzy logic module 160, to give a search result through an output 400 connected to the interface 180. - As can be appreciated, not all levels are necessary for the operation of the present invention, although the collection of a large array of data allows the neural network to function optimally over the course of a large number of searches, in addition to developing learning rules which may apply at both low and high levels.
-
FIG. 7 depicts the general operational and structural concepts of the present invention. The search processing system 100 receives data from a set of low-level input nodes 105 via the function (nonlinear in most embodiments) nodes 115. The search processing system provides feedback via a feedback mechanism 102 to both the data input level 105 and the function processing level 115 in order to effectively regulate the data searching system. These two levels, 105 and 115, are shown because of the potential benefit of using multiple levels of neural input for organizational purposes; however, these levels may be collapsed in a particular embodiment of the invention where there is no need for multi-level processing. -
FIG. 8 shows a more detailed aspect of a particular embodiment of the invention. The “nerve sheath” 200 includes one or more neurode inputs 101(i) . . . 101(n) connected to a neural network node or function gate output 110(i) . . . 110(n) through a connection or axon 210(i) . . . 210(n). These structures are shown as virtual because they may be implemented either virtually through software or in various other software and hardware embodiments. For example, the neurode inputs 101(i) . . . 101(n) may be an executable program used by the search processing system 100 to gather information from the data collection system 50, but be connected through a single telecommunication connection 200, such as Ethernet. The information may be passed to the search system 100 through one data packet or a stream of packets, as may be appreciated by those skilled in the art, while the individual data used by each neurode input 101(i) . . . 101(n) is processed by the appropriate neurode. - Similarly, the neural network nodes 110(i) . . . 110(n) receive the appropriate information from the weighted neurode or set of neurodes via an “axon” even though the nodes 110(n) may be part of the same executable instructions as the neurodes 101(n) which gather the data. The discrete nature of these structures is useful in implementing the multiple AI processes involved in the present invention as may be appreciated by those skilled in the art.
-
FIG. 8 details the invention with the implementation of the AI search modules, the expert rules module 140 and the fuzzy logic module 160. The fuzzy logic module 160 may be connected to the neurode inputs 101(n) and/or the network function gates 110(n) through a virtual or real connection 165 and a fuzzy logic implementation module. Similarly, the expert rules module 140 is connected to the multiple levels of input processing through virtual connection 145 and controlled by virtual rule application device 142 when appropriate rules have been activated. - Referring now to
FIG. 9 , a general basic operation 1000 of the invention in a particular embodiment is described. Step 1005 results in the generation of a data set relevant to search queries. FIGS. 2 and 3 describe the generation of the data set through the collection of data from the Internet 20. The data set may be stored on the data resource devices or the search processing system 100 in step 1010. However, other resources are used to train the search processing system beyond collected data. The specifics of the training will be described below. In step 1050 the search processing system 100 receives an input search query 300 through an interface 180 and generates a result via the AI in the search processor 100 in step 1100. The generated result is returned to the user via output 400 in step 1190, and rules and heuristics in the AI data set and processes are then updated on a periodic (regular or special event) or real time basis in step 1200. The updated processes will also be described below. -
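As a non-authoritative sketch, the overall flow of operation 1000 (receive a query and parameters, activate any applicable expert rules, let fuzzy rules adjust the input weights, score the data set neurally, and return ranked results) might look like the following. All function names, rule formats, weight values and data structures below are invented for illustration and are not taken from the patent text:

```python
# Hypothetical sketch of operation 1000; every name and format here is
# an illustrative assumption, not the patent's own implementation.

def generate_search_result(terms, params, expert_rules, fuzzy_rules, network):
    """Receive a query, activate rules, adjust weights, score, rank."""
    # Activate any expert rules triggered by a search term or parameter.
    active = [r for r in expert_rules if r["trigger"] in terms + params]
    # Let fuzzy rules adjust the low-level input weights.
    weights = dict(network["weights"])
    for rule in fuzzy_rules:
        if rule["param"] in params:
            weights[rule["input"]] *= rule["factor"]
    # Score each document as a weighted sum of its neural inputs,
    # then apply any high-level expert-rule boost.
    ranked = []
    for doc in network["documents"]:
        score = sum(weights.get(k, 1.0) * v for k, v in doc["inputs"].items())
        for rule in active:
            score += rule.get("boost", 0.0)
        ranked.append((doc["url"], score))
    # Return results ordered by descending relevance score.
    return sorted(ranked, key=lambda r: r[1], reverse=True)
```

In this sketch, a query run with a "local" parameter could, for example, double the weight of a geographic input before any document is scored, which mirrors the low-level weighting described below.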
FIG. 10 shows the basic steps in the generation of a search result 1100 through the search processing system 100. Step 1105 requires the loading of the search terms into the search processor 100. At step 1110 the relevant parameters (discussed above) are loaded into the search processor 100 if they have not already been loaded. At step 1115, it is determined whether either the discernable search terms (S(i)-S(n)) or parameters (P(i)-P(n)) require the application of special expert rules included in the expert rule module 140. If so, the appropriate expert rules are loaded and applied at the correct level in step 1120. In step 1125 it is determined whether fuzzy logic applies at any level to the search criteria or the parametric data. However, it is anticipated that the fuzzy logic rules will have already been set for the relevant parametric data if they have been previously accessed. If fuzzy logic rules apply to the search or parametric data, then the rules are loaded into the appropriate level where they are to be applied. - The relevant preliminary search result is then generated in
step 1175 from the neural network 120, which receives data from (or has already “learned” from) the low-level neural input in step 1150. The preliminary search result is subject to a high-level modification from the expert rule 140 and fuzzy logic 160 modules in step 1178. In step 1180 the search results are delivered to the user through the interface 180. Simultaneously, any data for learning instructions is generated in step 1190. Learning by the search processor 100 is described in detail below. It is anticipated that step 1178 will become decreasingly necessary each time the learning instructions are generated in step 1190. The application of expert rules and fuzzy logic at the low level in step 1150 saves considerable computational resources over applying them at higher levels. The process of generating a search result is described below but, as can be appreciated by those skilled in the art, may be executed in many different ways without departing from the spirit and scope of the invention. - Parametric control (including user) data 510(i) . . . 510(n) will generally be macro-level data that defines the behavior of the entire search engine. The parametric data may be based on an individual user's preferences or conditions that may be easily determined by the
interface 180. This data may include items entered by the user such as financial situation, content preferences, geographic location, etc. Automatic parametric data may include weather, stock market results, the particular user of the interface, detected inquiries to the user's credit card and any number of variables which may influence the manner in which the search may be conducted. The table below helps define one aspect of the present invention.

TABLE 1.1
Sample characteristics for generating neural responses

Neural Input # | Human Review | Automated Spider Review
1 | TLD |
2 | Commerce | IP Geography matches user's Geography
3 | Geographically relevant | User previously selected
4 | Authoritative Site | Average time between clicks
5 | Design Quality | Domain Name contains Keywords
6 | Extraspecial status | Meta Description contains keywords
7 | Ads present | Meta Keywords contains keywords
8 | Porno | Title Tag contains keywords
9 | Gambling | Alt Tag contains keywords
10 | Profanity used | Static/Dynamic IP
11 | Family safe | Keyword Density
12 | Overall weight add | Absolute Keyword Number
13 | | Feedback Score
14 | | Privacy Link
15 | | Paid Inclusion
16 | | Link Popularity
17 | | DNS is correct
18 | | 404s exist
19 | | Pop up windows exist
20 | | Flash present
21 | | Php
22 | | Asp
23 | | Cfm
24 | | last refresh of page
25 | | Top ten at Dmoz
26 | | Top ten at Zeal
27 | | meta refresh exists
28 | | https
29 | | Header text keywords
30 | | Average number user selects commerce sites
31 | |
32 | | Fortune 500
33 | | Fortune 1000
34 | | Average page load time
35 | | Christmas
36 | | Valentines day
37 | | Easter
38 | | Hanukah
39 | | New Years
40 | | Winter
41 | | Summer
42 | | Autumn
43 | | Spring
44 | | tax day
45 | | # clicks from unique IPs
46 | | Hour of day
47 | | Originating IP is home or business
48 | | Porno
49 | | Gambling
50 | | Profanity used
51 | | Family safe
52 | | Multiple clicks from same user by cookie
53 | | Multiple clicks from same user by IP

- The “spider review” shown above is then an effective way to describe a summary of samples of the “neural input” for the present invention.
However, the list in the above table is by no means exhaustive, but is meant to be illustrative only. As can be appreciated by those skilled in the art, the advantage of using a neural network to get data at such a low level is in representing fairly complicated search constructs as a large number of standardized, normalized or standardizable data inputs for processing. In the table above there are at least 53 spider nerve inputs and 12 human review inputs.
- Human review input, such as design quality and content issues that may be computationally difficult to calculate from the neural network, may be stored as data in each of the modules. As more data is generated by the search processing system 100, the computer will be able to apply the human rules to its own learning generated from the data and will also learn other rules on its own. For example, a pattern recognition algorithm may apply to a URL with a large amount of pop-up advertising even though the advertising is undetectable by the search system 100. The common characteristics or “neural patterns” from the spider review will alert the system that such patterns correspond to the same ones as the human-reviewed URL with a large amount of advertising. - FIGS. 11A-D show the functions of the neurodes at the data input level or
neurode level 105. FIG. 11A is a simple representation of three input neurodes responding to three different data characteristics. FIGS. 11B and 11C represent more detailed representations of two simple neurode data input devices as would be used in the present invention. FIG. 11B shows a neurode 101(1) that inputs a “top level domain” (TLD) stimulus according to the level from the TLD name. For each level down the domain, the neurode classifies one level higher. Thus, .com is 1, .com/bookmark is 2, .com/bookmark/subdir is 3, etc. The output signal (described below) can be in different formats and still be processed at the function 115 or computational 120 levels. However, the more uniform the inputs, the less computational resources will be taxed. -
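The TLD-level neurode of FIG. 11B might be sketched as follows; the function name and the handling of a scheme prefix are assumptions made here for illustration:

```python
# Illustrative neurode in the spirit of FIG. 11B: the signal grows with
# the depth of the URL below the top-level domain. The name and the
# scheme handling are assumptions, not from the patent.

def tld_depth_signal(url: str) -> int:
    """.com -> 1, .com/bookmark -> 2, .com/bookmark/subdir -> 3, ..."""
    path = url.split("://", 1)[-1]           # drop "http://" if present
    segments = [s for s in path.split("/") if s]
    return len(segments)                      # the host itself is level 1
```

As the text notes, the output could be emitted in other formats as well, at the cost of extra computation at the function or computational levels.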
FIG. 11C shows another simple input at the low neurode level as a simple function of logic characteristics of the data. Neurode 101(2) measures a “match” aspect of the search inquiry, such that the more “words” of the inquiry that match the data, the stronger the input signal. FIG. 11D is another example, in which a threshold or screening function occurs at the input or output to the neural network processor 120. -
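A minimal sketch of the match-counting neurode of FIG. 11C and the threshold/screening function of FIG. 11D follows; the function names and the default threshold value are illustrative assumptions:

```python
# Sketch of the neurodes of FIGS. 11C-D. Names and the threshold value
# are invented for illustration.

def match_signal(query_words, document_text):
    """FIG. 11C: the more query "words" that match the data, the
    stronger the input signal."""
    doc_words = set(document_text.lower().split())
    return sum(1 for w in query_words if w.lower() in doc_words)

def screen(signal, threshold=2):
    """FIG. 11D: a threshold/screening function at the input or output
    to the neural network processor; weak signals are suppressed."""
    return signal if signal >= threshold else 0
```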
FIG. 12A illustrates how neurode input may be standardized for processing by the neural network processor 120. The data is collected in the individual neurodes 101(1), 101(2) and 101(3). The neurodes 101(n) may conduct low-level filtering, screening or processing as shown in FIGS. 11B-D, but also may have a standardized or normalized output for processing purposes. The standardization or normalization of output may occur at the low level 105, at the function processing level 115 or at the neural network processor 120. The function processing level may serve as a “boundary” or non-linear function through individual processors 110(n). - The general process of providing feedback through the
expert rules module 140 or the fuzzy logic module 160 is shown in FIG. 12B. This embodiment depicts how the present invention can learn at the various processing levels to more effectively conduct and process a search. The optional fuzzy logic translator 162 acts as a translator between the fuzzy logic module and the individual neurode 101(n) or function 110(n) inputs. The effect of the fuzzy logic translation on individual neurodes 101(n) is depicted by weight 167(n). The application of expert rules from the expert rule module 140 is applied in the same manner by input rule 147(n) or output rule 143(n) applicators. - In
FIGS. 12B and 13-15, three different types of neurodes process individual components of the “spider review,” which results in standardized input for the network processor 120. In the illustrated example, each of the inputs is standardized to one value of the set {0, 1, 2, 3} and as such will be easy for the neural network 120 to process. The function processing inputs thus will have an input which can be processed. -
FIG. 13 represents an individual example of weighting/influencing at the individual neurode level through the weighting connections 167(n). Similarly, FIG. 14 represents weighting at the function gate level 163(n). FIG. 15 depicts weighting at both the neurode 101(n) and function gate 110(n) levels. As can be appreciated by those skilled in the art, there are many variations on the types of input and output screening, weighting and application of functions which would be appropriate for providing different types of input. However, the main thrust of providing low-level computing is both to save resources in compiling analysis on a large pool of data and to continually improve the low level “intelligence” capabilities. - As can be appreciated by those skilled in the art,
FIGS. 13-15 represent various types of fuzzy neurodes as they may be implemented into the system of the present invention. The implementation of such specially adapted neurodes may save significant computational resources by implementing simple rules (for set inclusion mainly) at the low level inputs. However, since these structures are virtual, the computation needed to implement fuzzy logic rules for simple inputs may be executed by any number of computational devices or by a single computational device. -
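The weighting arrangements of FIGS. 13-15 might be sketched as below; the specific weight values, the threshold and all function names are illustrative assumptions rather than the patent's own implementation:

```python
# Illustrative sketch of FIGS. 13-15: a weight 167(n) applied at the
# neurode (data input) level and a weight 163(n) applied at the
# function gate (output) level. All values are invented.

def neurode_output(raw_signal, input_weight=1.0):
    """FIG. 13: weighting at the individual neurode level."""
    return raw_signal * input_weight

def function_gate(signal, output_weight=1.0, threshold=0.5):
    """FIG. 14: a gate weighted at the output level that passes the
    weighted signal only when it clears a non-linear threshold."""
    weighted = signal * output_weight
    return weighted if weighted >= threshold else 0.0

def two_level(raw_signal, input_weight, output_weight):
    """FIG. 15: weighting at both the neurode and function gate levels."""
    return function_gate(neurode_output(raw_signal, input_weight),
                         output_weight)
```

Because these structures are virtual, any or all of these stages could run on a single computational device, as the text notes.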
FIG. 16 represents a simplified method 1160 for conducting a search in which the fuzzy logic is implemented at the input level 105 at the neurodes 101(n). If the fuzzy logic applies to particular neurodes for parametric inputs, then the fuzzy logic module sends a signal to the input level 105 of the neurodes or the function gate level 115 to adjust the weights accordingly. For example, if the parametric input is good market conditions, the neurodes for higher risk investment opportunities (less reputation+investment, etc.) may be weighted more heavily and result in a match. Of course the search processor can learn from more than parametric inputs, as it learns many other rules from the human input matching, feedback provided by humans, human actions and machine learning. Furthermore, expert rules may always override any fuzzy logic inputs when any appropriate conditions are met. -
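The good-market example of method 1160 can be sketched as a weight adjustment sent to the input level; the specific weight factors and neurode names here are invented for illustration:

```python
# A minimal sketch of method 1160's parametric weighting. The factors
# 1.5 and 0.75 and the neurode names are assumptions, not the patent's.

def apply_fuzzy_weights(neurode_weights, market_conditions):
    """Send weight adjustments to the input level 105 based on a
    parametric input (here, market conditions)."""
    adjusted = dict(neurode_weights)
    if market_conditions == "good":
        # weight higher-risk opportunity detectors more heavily
        adjusted["high_risk_investment"] = adjusted.get(
            "high_risk_investment", 1.0) * 1.5
        adjusted["reputation"] = adjusted.get("reputation", 1.0) * 0.75
    return adjusted
```

In a fuller system, an expert rule could still override these adjustments whenever its conditions are met, as the text states.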
FIGS. 17-19 represent sample processing of the states of neurodes in various embodiments of the present invention. FIG. 17 shows the number of “states” which are computed in a neural network processor 120. These states may be used in any computation in returning a search result. Thus, 6 on/off inputs will give the neural processor a 64-state computation. FIG. 18 depicts a collection function neurode 110(n) that collects multiple input types which may or may not be compatible. The multiple types of inputs can be standardized and/or normalized for neural processing. FIG. 19 depicts the standardization of multiple neurode 101(1′) . . . 101(n′) input types which are then processed as common binary inputs by non-linear processors at the function inputs 110(1′) . . . 110(4′) such that they are standardized into a value of “+” or “−”. The neural network processor can handle many data types such as sets, numerics, strings, Booleans, etc. However, the standardization of input to the neural processor 120 is one manner in which the relevance determination may be advantageously computed in a particular embodiment. -
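A small sketch of the state count of FIG. 17 and the "+"/"-" standardization of FIG. 19 follows; the thresholds used to collapse each data type are illustrative assumptions:

```python
# Sketch of FIGS. 17-19: six binary inputs span 2**6 = 64 states, and
# heterogeneous input types (sets, numerics, strings, Booleans) are
# collapsed to a common "+"/"-" value before neural processing.
# The collapse thresholds below are invented for illustration.

def state_count(n_binary_inputs: int) -> int:
    """FIG. 17: 6 on/off inputs give the processor a 64-state space."""
    return 2 ** n_binary_inputs

def standardize(value) -> str:
    """FIG. 19: standardize a raw input of any supported type."""
    if isinstance(value, bool):               # check bool before int
        return "+" if value else "-"
    if isinstance(value, (int, float)):
        return "+" if value > 0 else "-"
    if isinstance(value, (set, frozenset, list, str)):
        return "+" if len(value) > 0 else "-"
    return "-"
```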
FIG. 20 shows how the parametric input 510(1), 510(2) may function at the “cortex” level to train the neural network processor 120. The advantage of the fuzzy neural input is that computation is reduced by applying the computation at a low level, while retaining the ability to apply fuzzy logic at a high level. For example, after the search query 300 is entered into the interface 180, parametric data 510(1), 510(2) applies a rule in the expert rule module 140 such that if a preliminary answer provided by the neural network processor 120 is such that R(1)>R(2), a set of fuzzy logic rules W(x, y) will apply a high level fuzzy logic adjustment to the preliminary search result. If a preliminary answer is R(1)<R(2), a second set of rules W(x′, y′) may optionally be applied in the fuzzy logic module 160. - However, as can be appreciated by those skilled in the art, parameters may or may not be relevant in all cases; the network accommodates this and determines predictability in a massively parallel distribution of search knowledge. In a case where parametric or user data is not initially deemed to be relevant, the search processing ignores one or more pieces of parametric data 510(n) based on search criteria and applied rules in the
expert rule module 140. This acts as a pre-search fuzzy set, i.e. the set of parameters used in the search is limited by the “category” of the search. - However, as can be appreciated by those skilled in the art, even 20 or 30 neural input parameters with 3 states each may quickly become unmanageable computationally complex and inaccessible. The advantage of the fuzzy logic being located at the neural input 101(n) or 110(n) is processed is that control over the computational aspects of potentially massive amounts of input data.
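The high-level ("cortex") adjustment described for FIG. 20, where the relation between preliminary results R(1) and R(2) selects between rule sets W(x, y) and W′(x′, y′), can be illustrated with a minimal sketch; the rule functions here are invented stand-ins:

```python
# Minimal sketch of the FIG. 20 cortex-level adjustment. The rule-set
# representation (plain callables) is an assumption for illustration.

def high_level_adjust(r1, r2, rules_w, rules_w_prime):
    """Apply W(x, y) when R(1) > R(2); otherwise apply W'(x', y')."""
    selected = rules_w if r1 > r2 else rules_w_prime
    return [rule(r1, r2) for rule in selected]
```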
Search inputs 300 which is particularly sensitive to certain parameters 510(n) can be adapted to become a fuzzy neurode, instead of the neuro-fuzzy processors. The process of applying a set of expert rules in theexpert rule module 140 is shown byFIG. 21A -C. The search “jazz clubs in boston” is entered inFIG. 21A rules in three expert rules applying from the results in a simple rule lookup on a two (or greater) dimensional table 990 inFIG. 21B . There may be hundreds of multi-dimensional tables 990(i) . . . 990(n) stored physically or virtually.FIG. 21C depicts how user or parametric data 510(n) affects the rule lookup. - Referring now to
FIG. 22 , a flow chart is shown that describes the machine learning process 1200 by which the search processing system 100 arrives at a learned rule for improving the searching technique. After the search query is run in step 1100, the fuzzy logic weight assignments for inputs are recorded. - The
neural network processor 120 allows for the inclusion of personal data in the decision process, like previous consumer behavior, to add predictive ability as to what a relevant search return would be. The personalization of the relevancy determination through personal data is a key enhancement compared to the current art, which for the most part returns globally relevant results, and it acts to enhance the machine intelligence self-training. For example, a person in China searches for Soy Sauce and a Restaurateur in Manhattan searches for Soy Sauce. The returns will be quite different because the search processing system 100, via the neural network 120, recognizes important determinants of relevancy for each individual searcher from the application of the expert rules based on these parameters. As stated above, the effect of the personal and parametric data 510(n) may be processed at multiple levels: directly at the neural network “neurodes” (low-level), at the function gates (mid-level), or post neural processing (high-level). Thus, in the above example, items in the neural network which respond to the “geographic” neurode would be “reweighted” based on personal geography, for a low-level implementation.

TABLE 1.2
Sample Expert rules applied at low or high levels

Expert Rules
match existing domain name?
commerce keyword appears
geographical keyword appears
exact match to human review keyword

- The
search processing system 100 of the present invention is melded with human review, spiders, genetic (sub)algorithms, fuzzy inference engines, and expert systems which are comprised of sets of adaptable expert rules. The sets of rules applied by the expert rule module 140 may be preliminary global rules, which are rules that are still being adapted. There may also be global rules, which are the product of many adaptations and have been tested. Expert rules or subsystems may be implemented at multiple levels. An example of this is where an expert (sub)system determines the presence of documents which result in spam. An anti-spam parameter 510(n) will result in the expert system being loaded into the fuzzy logic module and applied at a low-level input, so that data on documents which result in spam is not processed by the search engine at the neural network level. -
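As a concrete (and purely hypothetical) illustration of the rule lookup described above, the expert rule module's table 990 can be pictured as a mapping keyed on query terms and active parameters 510(n); the table entries, parameter names, and rule strings below are invented for this sketch and are not specified by the patent:

```python
# Hypothetical sketch of an expert-rule lookup table (cf. FIGS. 21A-C and
# table 990). Trigger terms, parameter names, and rules are invented here;
# the patent does not specify the table contents.

RULE_TABLE = {
    # (query term, active parameter) -> expert rule to apply
    ("boston", "geography"): "geographical keyword appears",
    ("clubs", "commerce"): "commerce keyword appears",
    ("jazz", "domain"): "match existing domain name?",
}

def lookup_rules(query, params):
    """Return the expert rules activated by a query plus parametric data."""
    terms = set(query.lower().split())
    activated = {rule for (term, param), rule in RULE_TABLE.items()
                 if term in terms and param in params}
    return sorted(activated)

# Three rules fire for the FIG. 21A example query:
rules = lookup_rules("jazz clubs in boston", {"geography", "commerce", "domain"})
print(len(rules))  # → 3
```

User or parametric data 510(n) then simply changes which `params` are active, altering the lookup as FIG. 21C depicts.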
FIG. 23 depicts one system for determining an appropriate match or document based on the scoring. In part 1, a user inputs the search inquiry 300, “risk free bond funds,” into the search engine interface 180. The preloaded parameters 510(1) and 510(2) include market conditions (“average”) and current income (“$75,000”). - In
part 2, the fuzzy logic module sends weighting instructions to neurode inputs N(1)-N(4), based either on instructions from the search engine or on the relevant parameters 510(1) and 510(2). Of course, the search system 100 could associate the two parameters 510(1), 510(2) (market conditions and income) with the search inquiry terms “risk-free” and “bond” or “funds.” As can be appreciated by those skilled in the art, association of information may lead to reduced processing time by eliminating nodal information that may not be particularly relevant. For example, in the automated spider review in Table 1.1, a holiday input may not be weighted with much importance while searching for financial services (on the other hand, the expression Easter may be related to tax season). - In
part 3, documents are compared to an itemized truth table with scoring 299 from a previous search, which is further combined with relevant human input data for category H(4), that is, a “reputable or authoritative site” (see Table 1.1 below) in this example. Thus, the summation or weighted scoring may be one mechanism to determine appropriate search results, but the H(4) criterion in this case overrides the scoring and will not allow high-scoring matches which are not authoritative. Thus, a “target score” match 399 is made by the machine learning mechanism and noted for future use as the “225” or “175” score but not the “200” score. Furthermore, from part 3, the machine learns that H(4) is generally present when N(3) and N(4) signals are present, but N(1) appears to be less relevant, and no positive N(2) data was returned which met the threshold condition. The search results may be presented to the user by high score and reputability in part 4. In part 4, the search processing system also “learns” that neurode inputs N(3) and N(4) are likely indicators of this human input attribute H(4), adapts such learning for the next appropriate search task, and a preliminary global rule may be put into the expert rule module 140. In step 5, for the next search of the type, the presence of N(3) and N(4) will be given larger weights, or N(1) may be reduced in weight. Repeated learning of this type will nearly eliminate N(1) as relevant. The system, however, eliminates N(2) from the neural connection for the next search of this type.

TABLE 1.2 Human Review Input

| # | Human Review | Expert Rules |
|---|---|---|
| 1 | | match existing domain name? |
| 2 | Commerce | commerce keyword appears |
| 3 | Geographically relevant | geographical keyword appears |
| 4 | Authoritative Site | exact match to human review keyword |
| 5 | Design Quality | |
| 6 | Extraspecial status | |
| 7 | Ads present | |
| 8 | Porno | |
| 9 | Gambling | |
| 10 | Profanity used | |
| 11 | Family safe | |
| 12 | Overall weight add | |

- The above table is illustrative of the human input and expert rules as implemented in a particular embodiment of the present invention; it is only illustrative of the example. The expert rules are nonflexible during an individual search as criteria applicable to the computation of the neural input, although these expert rules may be detected at the
input node 105 or function gate level 115 as well. However, as shown above, the expert rules are clearly adaptable in the machine learning system of the invention.

TABLE 1.3 Sample high-level fuzzy inferences

| Fuzzy Inference | mild | moderate | severe | Comment |
|---|---|---|---|---|
| | cool | warm | | |
| | good | fair | bad | |
| | small | | Big | |
| | close | | Far | |
| | short | | Long | |

- FIGS. 24A-D show sample feedback mechanisms for learning.
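Sketched in code (an illustration only; the dictionary field names are invented, and the 225/200/175 scores mirror the FIG. 23 example rather than any specified data structure), the part 3-4 scoring-with-override logic works like this:

```python
# Sketch of the FIG. 23 example: weighted neurode scores are summed per
# document, but the human-review criterion H(4), "reputable or authoritative
# site", overrides the numeric ranking. Field names and scores are invented
# to mirror the example ("225" and "175" accepted, "200" rejected).

def rank_documents(docs):
    """Drop non-authoritative documents, then sort the rest by score."""
    kept = [d for d in docs if d["H4_authoritative"]]  # hard H(4) override
    return sorted(kept, key=lambda d: d["score"], reverse=True)

docs = [
    {"url": "a", "score": 225, "H4_authoritative": True},
    {"url": "b", "score": 200, "H4_authoritative": False},  # rejected despite score
    {"url": "c", "score": 175, "H4_authoritative": True},
]
ranked = rank_documents(docs)
print([d["score"] for d in ranked])  # → [225, 175]
```

A learning step could then record that H(4) co-occurred with the N(3) and N(4) signals, seeding a preliminary global rule as described in the text.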
FIG. 24A is a sample screen of five returned search results with different levels of domain accessibility. FIG. 24B is a sample tracking method 1250(1) for learning from the behavior of a user after the search result in FIG. 24A is provided. In essence, the search processor 100 temporarily stores the results and compares them with the user's behavior. Thus, if a user always chose the selection with a top-level domain, the system 100 would learn that the TLD score must be increased in weight for each search. - Similarly, in
FIG. 24C, the user provides simple feedback after the search, and the results are processed by the learning system for an application to the expert rule module 140. In FIG. 24D, an automated machine learning mechanism is shown. In this scenario, the search expression “teenage” is used by a 13-year-old to find relevant documents on “heartthrobs,” “hobbies,” and “hangouts.” The filter for adult content is always on when this user is present. However, in part 2, the search processing system did not return an adult flag for “hobbies.” By part 3, the computer has learned that adult sites which key to the word “teenage” bury their documents 3 pages deep. Thus, in part 3, albeit too late for part 2, the machine “learns” that a score of “3” on the domain level necessitates a flag for adult content and blocks those pages. Thus, if part 2 happens after part 3, then no adult content is returned. - FIGS. 25A-C are illustrative of simple examples of genetic algorithms as they may be adapted or combined as the result of machine learning. These algorithms may be global to the
whole search system 100 or used by only one component. They are virtually stored in virtual storage 198 on one or more computing machines 102, such that they may be accessed by any component of the search system 100. FIG. 25A illustrates an inadequate search for “dance clubs in rio,” which resulted in the application of algorithms A and C in the neural processor 120 and weighting rule W(1, 1) in the fuzzy logic module 160. FIG. 25B shows the expert module 140 adapting A to A′ and storing it for use with C for the retry search. FIG. 25C shows the expert rule module combining B with C in the neural network processor 120 and applying adapted weighting rule W(1, 1′). FIG. 26 simply illustrates a method 1300 for applying the principles shown in FIGS. 25A-C. -
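A toy sketch of this adapt-or-combine retry loop, with component "algorithms" reduced to weight vectors; every name and number here is an invented placeholder, not the patent's implementation:

```python
import random

# Toy sketch of the FIGS. 25A-C retry sequence: when a search is judged
# inadequate, the expert module adapts one component algorithm (A -> A')
# or combines two components (B with C). Real components would be search
# subroutines; here each "algorithm" is just a weight vector.

def adapt(weights, rate=0.1, rng=random.Random(0)):
    """Mutate a weight vector slightly (cf. adapting A to A')."""
    return [w + rng.uniform(-rate, rate) for w in weights]

def combine(w1, w2):
    """Blend two components (cf. combining B with C) by averaging."""
    return [(a + b) / 2 for a, b in zip(w1, w2)]

A = [1.0, 0.5, 0.0]
B = [0.0, 1.0, 1.0]
C = [0.5, 0.5, 0.5]

A_prime = adapt(A)   # FIG. 25B: A' is stored for the retry with C
BC = combine(B, C)   # FIG. 25C: the expert rule module pairs B with C
print(BC)  # → [0.25, 0.75, 0.75]
```

In a full system, the choice between adapting and combining would itself be driven by the learned rules in the expert rule module 140.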
FIG. 27 illustrates an example of a genetic component as it may be implemented in the present invention. As can be appreciated by those skilled in the art, genetic components have already been described in the invention; for example, above, the human review factor H(4), the authoritative component, was matched to the other neural scores N(3) and N(4). In this illustration, the genetic algorithm is a content-blocking technique for family suitability based on the firing of a particular neurode. The parametric data includes that the user does a lot of health-related research or that a family member is sick 510(1), but also that the user is interested in keeping the search to family content 510(2). The search includes the word “breast” or another search term which could be used for both adult and non-adult content searching. The fuzzy logic module 160 has learned that the word “breast” is supposed to block off the score of the adult content neural input. Thus, the weight given to such inputs is inverted, and other information is made contingent upon the firing of such an inverted neuron. Thus, no information is returned that has the adult content tag firing. - However, many health-related sites will tag certain pages as adult content in order to allow for sensitivity to the marketplace. Thus, the adult content blocker is able to learn, manually from human input or from machine learning, that the adult content neurode is not accurate for this user's purposes. The genetic algorithm determines that a health-related neural input or a human input of a reputable site will negate the effects of the adult content blocker for the search term “breast.” The algorithm may also apply to other search terms that will draw both adult and non-adult content. Thus, the genetic algorithm will adapt the neural input in addition to combining with other search algorithms which may apply to a common category of expression for anatomical parts, or the genetic component may adapt through neural pattern recognition.
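This contingent negation can be sketched as a small predicate; the flag names below are assumptions for illustration rather than the patent's identifiers:

```python
# Sketch of the learned family-content rule for ambiguous terms like
# "breast": block documents whose adult-content neurode fires, unless a
# health-related neural input or a "reputable site" human-review input
# negates the blocker. Flag names are illustrative assumptions.

AMBIGUOUS_TERMS = {"breast"}  # terms drawing both adult and non-adult content

def blocked(doc, query_terms):
    if not doc["adult_flag"]:
        return False
    if query_terms & AMBIGUOUS_TERMS and (doc["health_input"] or doc["reputable_site"]):
        return False  # the genetic rule negates the adult-content blocker
    return True

# A health page tagged "adult" out of marketplace caution is not blocked:
doc = {"adult_flag": True, "health_input": True, "reputable_site": False}
print(blocked(doc, {"breast", "cancer"}))  # → False
```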
-
FIG. 28 shows a table which depicts an equilibrium of a learning mechanism by which a preliminary global rule may become a global rule. In a highly simplified model, the expression “string” is evaluated such that each letter corresponds to a neurode with a weight of one. The initial search results indicated that the vowel position was less indicative of the relevant results (R1, R2, R3), and the system thus reduces the 4th-letter neural weight by 0.25. In the second search, for “strung,” 75% of the results are similar to search 1 (R2, R3, R4), so the 4th-letter weight is reduced further by a factor of 0.75. This also happened on search 3, for “strang.” - The machine cannot innately understand that “string,” “strang,” and “strung” are all related terms. Thus it has reduced the importance of the 4th position (the vowel) by more than half after the third search. However, the search term “strong” is not related and returns only a 25% overlap in search results (R4). Thus, the machine learns that the 4th-position neurode is now more relevant and adjusts the weight by a factor of 1.3. The search “streng” returns no results, boosting the weight by a factor of 1.6, and “stryng” returns only R6, which allows for a weight factor of 1.1. So at the end of the six searches, the 4th-position neurode weight is only a bit less than 1 (0.96), which indicates that for the search “STR” X “NG” the 4th position will be slightly less of a search input factor. The learning mechanism may have enough data on this search type that it makes the preliminary global rule of the weights a global rule.
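The six searches amount to a chain of multiplicative updates on the 4th-position neurode's weight, which can be reproduced directly from the factors given in the text:

```python
# Reproduce the FIG. 28 weight trajectory for the 4th-position (vowel)
# neurode. Each factor comes from the corresponding search in the text;
# the first "reduced by 0.25" step, applied to a weight of 1, is the
# same as multiplying by 0.75.
weight = 1.0
updates = [
    ("string", 0.75),  # vowel judged less indicative: weight reduced by 0.25
    ("strung", 0.75),  # 75% result overlap with search 1
    ("strang", 0.75),  # same outcome as search 2
    ("strong", 1.30),  # only 25% overlap: position more relevant again
    ("streng", 1.60),  # no results at all: strong boost
    ("stryng", 1.10),  # only R6 returned: small boost
]
for _term, factor in updates:
    weight *= factor
print(round(weight, 3))  # → 0.965, just under 1, the ~0.96 figure in the text
```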
- Table 1.1 shows a series of example “spider level” search criteria which may be implemented as neural input from the
data collection system 50. As can be appreciated, data collection services exist which may be purchased for use with the present invention, or the data may be generated internally, and these may be categorized differently than the examples in Table 1.2. However, the 50 or so criteria described provide an example of the types of search criteria that would be processed as neural input adapted by the learning mechanisms. - The actual calculation of the search results would be highly dependent on the particular implementation of the invention. Certainly, a “score” based on the weights of the neural inputs would be one embodiment of the invention. Of course, the advantage of the present invention over the prior art is the fuzzy logic or expert rules that may be implemented at different levels. Thus, a weighted score based on the neural input may be adapted by a number of fuzzy mechanisms at a number of computational points.
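One way to picture such a weighted score adapted at several computational points (low-level input weighting, a mid-level function gate, and a high-level post-neural adjustment) is the following sketch; the neurode names, weights, and threshold are invented for illustration:

```python
# Sketch of a score computed from weighted neural inputs and then adapted
# by fuzzy mechanisms at three points: input weighting (low level), a
# function gate (mid level), and post-processing (high level). All values
# are illustrative assumptions.

def fuzzy_score(inputs, weights, low, gate_threshold, high):
    # Low level: each neurode's contribution is input * weight * fuzzy factor.
    s = sum(inputs[n] * weights[n] * low.get(n, 1.0) for n in inputs)
    # Mid level: a function gate suppresses aggregate signals below a threshold.
    if s < gate_threshold:
        s = 0.0
    # High level: a post-neural fuzzy adjustment scales the final score.
    return s * high

inputs  = {"N1": 1.0, "N2": 0.0, "N3": 1.0, "N4": 1.0}
weights = {"N1": 50.0, "N2": 25.0, "N3": 75.0, "N4": 100.0}
low     = {"N1": 0.5}  # e.g., a fuzzy rule de-emphasizing N1
print(fuzzy_score(inputs, weights, low, gate_threshold=50.0, high=1.0))  # → 200.0
```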
- A “score” is not an accurate description of the result generated by a neural network; moreover, pattern recognition (discussed below), which is the predominant computational solution in many neural networks, may not be appropriate either.
-
FIG. 29 illustrates a method of an embodiment of the present invention, from search inquiry through a pattern recognition technique, for finding a search result. The neural network processor 120 receives a pattern of inputs 125. The processor 120 attempts to match it to previously recognized and stored 127 processes 124, and when it finds a match it loads those search results and returns the expressions to the output 400. If the search inquiry 300(2) was different from the one which provided the previous pattern 125, the search processing system then learns that the two search inquiries 300(1), 300(2) produce the same pattern.
- There is redundancy in the predictive models provided in the present invention. This means that the search processor tolerates absent and poor data very well compared with the prior art. As can be appreciated, the minimally tolerated input for returning an accurate search answer will be reduced by the machine learning over time.
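The FIG. 29 shortcut can be sketched as a cache keyed on the input pattern, with the learning step recording that two different inquiries map to the same pattern; the data structures here are illustrative assumptions:

```python
# Sketch of the FIG. 29 technique: match an input pattern against stored
# patterns, reuse the stored results on a hit, and learn that different
# inquiries producing the same pattern are equivalent. Names are assumptions.

pattern_store = {}  # pattern -> (results, inquiries seen producing it)

def search_by_pattern(inquiry, pattern, run_full_search):
    key = tuple(pattern)
    if key in pattern_store:
        results, inquiries = pattern_store[key]
        inquiries.add(inquiry)  # learn: this inquiry yields a known pattern
        return results
    results = run_full_search(inquiry)
    pattern_store[key] = (results, {inquiry})
    return results

full = lambda q: ["doc1", "doc2"]  # stand-in for the full neural search
r1 = search_by_pattern("risk free bond funds", [1, 0, 1], full)
r2 = search_by_pattern("safe bond investments", [1, 0, 1], full)  # cache hit
# pattern_store now records that both inquiries produce the same pattern.
```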
- The above examples and embodiments are meant to be illustrative only and are not exhaustive. As can be appreciated by those skilled in the art, many of the structures described can be virtual or physical and combined in one machine or several. Furthermore, the modularity of any given component must be appreciated. For example, three (or any number of) neurodes may be combined into a single one if the AI search processor determines that it is appropriate under the circumstances. Thus, the scope of the invention should not be limited to the examples provided above, but rather should be measured by the spirit of the invention.
Claims (35)
1. A method for processing a search request including the steps of:
determining if a search request activates at least one of a set of search rules;
if said search request activates said at least one search rule, then applying said search rule;
setting a set of input weight adjustments based on said at least one search rule;
processing a set of inputs responsive to a collection of data, said set of inputs adjusted by said set of weight adjustments, said processing resulting in a set of filtered data; and
adapting a search engine based on learning, said learning including at least comparing said set of filtered data to either a set of previously filtered data or a feedback mechanism.
2. The method as recited in claim 1 , wherein said set of search rules is adapted for a future search request.
3. The method as recited in claim 1 , wherein said search request is adapted to activate an alternate search rule in said set of search rules.
4. The method as recited in claim 1 , wherein said search request is adapted to not activate said search rule.
5. The method as recited in claim 1 , wherein said search request is adapted to activate a portion of said search rule.
6. The method as recited in claim 1 , further including the step of loading user data wherein said search rule may be activated by said user data.
7. The method as recited in claim 1 , further including the step of accessing external data, wherein said search rule may also be activated or altered by said external data.
8. A search engine apparatus comprising:
a computing device with at least one processor operatively coupled to an interface having an input and output, said computing device connected to at least one data storage device and an internal temporary storage device, said data storage including a set of data characteristics;
a set of one or more input nodes capable of accessing said data, each responding to at least one of said set of data characteristics;
a first module executable on said computing device for processing output responses from said set of one or more input nodes;
a second module executable on said computing device for generating and applying a set of rules, said set of rules including control of said set of one or more input nodes, said second module including at least one contingent set inclusion rule;
an adaptation module executable on said computing device responsive to processed responses from said first module and a set of one or more learning mechanisms, said adaptation module providing said second module with at least one of a confirmed, new or updated rule; and
wherein a search result is generated by one or more rules from said set of rules and being applied to said processed response and provided to a user via said output.
9. The search engine apparatus as recited in claim 8 , wherein said set of one or more input nodes is a virtual program executed by said first module.
10. The search engine as recited in claim 8 , wherein said data storage device is connected to the Internet and includes a connection to a computing device with capability of gathering data from individual Internet sites.
11. The search engine as recited in claim 8 , wherein said control includes at least an activation function, said activation function selecting a subset of said set of one or more input nodes.
12. The search engine as recited in claim 8 wherein said control includes at least a weighting function.
13. The search engine as recited in claim 8 , wherein said first module includes at least one routine executable on said computing device independently of any other routines in said first module, said at least one routine responsive to said adaptation and for processing at least a portion of said responses.
14. The search engine as recited in claim 13 , wherein said at least one routine may be combined with at least one second routine executable on said computing device.
15. The search engine as recited in claim 8 , wherein said data characteristics include characteristics related to content.
16. The search engine as recited in claim 8 , wherein said data characteristics include characteristics related to site performance.
17. The search engine as recited in claim 8 , wherein said data characteristics include characteristics related to evaluations provided by users.
18. The search engine as recited in claim 8 , wherein said data characteristics include characteristics related to keywords.
19. The search engine as recited in claim 8 , wherein said second module is capable of accessing data regarding a user, said data regarding a user activating at least one of said set of rules.
20. The search engine as recited in claim 8 , wherein said second module is capable of accessing data regarding a scenario, said data regarding a scenario being for activating at least one of said set of rules.
21. The search engine as recited in claim 8 , wherein said learning mechanism includes a user feedback mechanism operatively coupled to said adaptation module, said feedback mechanism providing user input related to a search result.
22. The search engine as recited in claim 8 , wherein said learning mechanism includes a user behavior tracking mechanism operatively coupled to said adaptation module, said behavior tracking mechanism for tracking behavior related to a search result.
23. The search engine as recited in claim 8 , wherein said learning mechanism includes a machine learning unit operatively coupled to said adaptation module, said machine learning unit for comparing said temporarily stored result to previously stored results.
24. The search engine as recited in claim 8 , further including a parameter generation unit coupled to said computing device, said parameter generation unit for accessing and storing data, wherein said second module applies rules when a signal data criteria is detected by said parameter generation unit.
25. The search engine as recited in claim 8 , wherein said second module includes a fuzzy logic generation and execution unit.
26. The search engine as recited in claim 25 , wherein said fuzzy logic generation and execution unit is operatively coupled to said set of input nodes.
27. The search engine apparatus as recited in claim 8 , wherein said connection is a wide area network connection, a local area network connection, an Ethernet connection, a DSL connection, a T1 line connection, a T3 connection, a cable modem connection, or a modem connected through a phone line.
28. The method recited in claim 1 further comprising the act of setting screening rules based on said at least one search rule and
adjusting said set of filtered data according to said screening rules to produce a subset of filtered data.
29. The method as recited in claim 28 wherein said adapting step includes comparison of said subset of filtered data to any previously returned subset of filtered data.
30. A method for finding a document or page located on a network through a uniform resource locator in which a search engine including executable instructions running on one or more computing devices evaluates data regarding the characteristics of a set of said pages or documents and returns a set of one or more relevant documents in response to a search inquiry consisting of search terms wherein the improvement includes using a neural network to evaluate said data and return said set of one or more relevant documents, said neural network being virtual and trainable.
31. The method as recited in claim 30 , wherein fuzzy logic is applied to said neural network at either a low level or high level or both.
32. The method as recited in claim 30 , wherein said neural network is controlled by a set of one or more expert rules either directly or indirectly through fuzzy logic or both.
33. The method as recited in claim 32 , wherein said set of one or more expert rules is activated by user data.
34. The method as recited in claim 32 , wherein said set of one or more expert rules is activated by at least a portion of said search inquiry.
35. The method as recited in claim 30 , wherein said act of training said neural network includes evaluating said set of one or more relevant documents by either comparing said set of one or more relevant documents to a previously returned search result or through a feedback mechanism.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/664,787 US20050055340A1 (en) | 2002-07-26 | 2003-09-16 | Neural-based internet search engine with fuzzy and learning processes implemented by backward propogation |
US10/902,320 US20050004905A1 (en) | 2003-03-03 | 2004-07-29 | Search engine with neural network weighting based on parametric user data |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002395905A CA2395905A1 (en) | 2002-07-26 | 2002-07-26 | Multi-grating tunable chromatic dispersion compensator |
CA2,395,905 | 2002-07-26 | ||
US45123703P | 2003-03-03 | 2003-03-03 | |
US10/390,590 US6937793B2 (en) | 2002-07-26 | 2003-03-17 | Tunable chromatic dispersion compensator |
US10/664,787 US20050055340A1 (en) | 2002-07-26 | 2003-09-16 | Neural-based internet search engine with fuzzy and learning processes implemented by backward propogation |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/390,590 Continuation US6937793B2 (en) | 2002-07-26 | 2003-03-17 | Tunable chromatic dispersion compensator |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/902,320 Continuation US20050004905A1 (en) | 2003-03-03 | 2004-07-29 | Search engine with neural network weighting based on parametric user data |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050055340A1 true US20050055340A1 (en) | 2005-03-10 |
Family
ID=30449996
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/390,590 Expired - Lifetime US6937793B2 (en) | 2002-07-26 | 2003-03-17 | Tunable chromatic dispersion compensator |
US10/664,787 Abandoned US20050055340A1 (en) | 2002-07-26 | 2003-09-16 | Neural-based internet search engine with fuzzy and learning processes implemented by backward propogation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/390,590 Expired - Lifetime US6937793B2 (en) | 2002-07-26 | 2003-03-17 | Tunable chromatic dispersion compensator |
Country Status (4)
Country | Link |
---|---|
US (2) | US6937793B2 (en) |
AU (1) | AU2003212148A1 (en) |
CA (1) | CA2395905A1 (en) |
WO (1) | WO2004012365A1 (en) |
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040193684A1 (en) * | 2003-03-26 | 2004-09-30 | Roy Ben-Yoseph | Identifying and using identities deemed to be known to a user |
US20060069982A1 (en) * | 2004-09-30 | 2006-03-30 | Microsoft Corporation | Click distance determination |
US20060074871A1 (en) * | 2004-09-30 | 2006-04-06 | Microsoft Corporation | System and method for incorporating anchor text into ranking search results |
US20060136411A1 (en) * | 2004-12-21 | 2006-06-22 | Microsoft Corporation | Ranking search results using feature extraction |
US20060200460A1 (en) * | 2005-03-03 | 2006-09-07 | Microsoft Corporation | System and method for ranking search results using file types |
US20060265270A1 (en) * | 2005-05-23 | 2006-11-23 | Adam Hyder | Intelligent job matching system and method |
US20060294100A1 (en) * | 2005-03-03 | 2006-12-28 | Microsoft Corporation | Ranking search results using language types |
US20070038622A1 (en) * | 2005-08-15 | 2007-02-15 | Microsoft Corporation | Method ranking search results using biased click distance |
US20070112758A1 (en) * | 2005-11-14 | 2007-05-17 | Aol Llc | Displaying User Feedback for Search Results From People Related to a User |
US7299222B1 (en) * | 2003-12-30 | 2007-11-20 | Aol Llc | Enhanced search results |
US20080104047A1 (en) * | 2005-02-16 | 2008-05-01 | Transaxtions Llc | Intelligent search with guiding info |
US20090030896A1 (en) * | 2007-07-23 | 2009-01-29 | Autiq As | Inference search engine |
US20090106235A1 (en) * | 2007-10-18 | 2009-04-23 | Microsoft Corporation | Document Length as a Static Relevance Feature for Ranking Search Results |
US20090106221A1 (en) * | 2007-10-18 | 2009-04-23 | Microsoft Corporation | Ranking and Providing Search Results Based In Part On A Number Of Click-Through Features |
US20090259651A1 (en) * | 2008-04-11 | 2009-10-15 | Microsoft Corporation | Search results ranking using editing distance and document information |
US20090281975A1 (en) * | 2008-05-06 | 2009-11-12 | Microsoft Corporation | Recommending similar content identified with a neural network |
US20100017403A1 (en) * | 2004-09-27 | 2010-01-21 | Microsoft Corporation | System and method for scoping searches using index keys |
US7761448B2 (en) | 2004-09-30 | 2010-07-20 | Microsoft Corporation | System and method for ranking search results using click distance |
US7840569B2 (en) | 2007-10-18 | 2010-11-23 | Microsoft Corporation | Enterprise relevancy ranking using a neural network |
US8452849B2 (en) | 2002-11-18 | 2013-05-28 | Facebook, Inc. | Host-based intelligent results related to a character stream |
US8577972B1 (en) | 2003-09-05 | 2013-11-05 | Facebook, Inc. | Methods and systems for capturing and managing instant messages |
US8635216B1 (en) * | 2004-09-30 | 2014-01-21 | Avaya Inc. | Enhancing network information retrieval according to a user search profile |
US8688711B1 (en) * | 2009-03-31 | 2014-04-01 | Emc Corporation | Customizable relevancy criteria |
US8701014B1 (en) | 2002-11-18 | 2014-04-15 | Facebook, Inc. | Account linking |
US8719275B1 (en) | 2009-03-31 | 2014-05-06 | Emc Corporation | Color coded radars |
US8738635B2 (en) | 2010-06-01 | 2014-05-27 | Microsoft Corporation | Detection of junk in search result ranking |
US20140222666A1 (en) * | 2012-10-15 | 2014-08-07 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for processing electronic transaction information |
US8965964B1 (en) | 2002-11-18 | 2015-02-24 | Facebook, Inc. | Managing forwarded electronic messages |
US9185067B1 (en) | 1999-12-01 | 2015-11-10 | Facebook, Inc. | System and method for analyzing communications |
US9195755B1 (en) * | 2009-03-31 | 2015-11-24 | Emc Corporation | Relevancy radar |
US9203879B2 (en) | 2000-03-17 | 2015-12-01 | Facebook, Inc. | Offline alerts mechanism |
US9203647B2 (en) | 2002-11-18 | 2015-12-01 | Facebook, Inc. | Dynamic online and geographic location of a user |
US9203794B2 (en) | 2002-11-18 | 2015-12-01 | Facebook, Inc. | Systems and methods for reconfiguring electronic messages |
US9246975B2 (en) | 2000-03-17 | 2016-01-26 | Facebook, Inc. | State change alerts mechanism |
US9319356B2 (en) | 2002-11-18 | 2016-04-19 | Facebook, Inc. | Message delivery control settings |
US9462046B2 (en) | 2003-04-02 | 2016-10-04 | Facebook, Inc. | Degrees of separation for handling communications |
US9495462B2 (en) | 2012-01-27 | 2016-11-15 | Microsoft Technology Licensing, Llc | Re-ranking search results |
US9647872B2 (en) | 2002-11-18 | 2017-05-09 | Facebook, Inc. | Dynamic identification of other users to an online user |
US9667585B2 (en) | 2002-11-18 | 2017-05-30 | Facebook, Inc. | Central people lists accessible by multiple applications |
US9727631B2 (en) | 2004-12-20 | 2017-08-08 | Facebook, Inc. | Automatic categorization of entries in a contact list |
US9779390B1 (en) | 2008-04-21 | 2017-10-03 | Monster Worldwide, Inc. | Apparatuses, methods and systems for advancement path benchmarking |
RU176922U1 (en) * | 2017-07-19 | 2018-02-01 | Федеральное государственное бюджетное образовательное учреждение высшего образования "Московский авиационный институт (национальный исследовательский университет)" | ANALOGUE FUZZY PROCESSOR |
US9959525B2 (en) | 2005-05-23 | 2018-05-01 | Monster Worldwide, Inc. | Intelligent job matching system and method |
US10181116B1 (en) | 2006-01-09 | 2019-01-15 | Monster Worldwide, Inc. | Apparatuses, systems and methods for data entry correlation |
US10187334B2 (en) | 2003-11-26 | 2019-01-22 | Facebook, Inc. | User-defined electronic message preferences |
US10341289B2 (en) | 2004-03-05 | 2019-07-02 | Facebook, Inc. | Systems and methods of calculating communications strengths |
US10387839B2 (en) | 2006-03-31 | 2019-08-20 | Monster Worldwide, Inc. | Apparatuses, methods and systems for automated online data submission |
USRE48102E1 (en) | 2002-12-31 | 2020-07-14 | Facebook, Inc. | Implicit population of access control lists |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7403714B2 (en) * | 2002-03-28 | 2008-07-22 | Main Street Ventures Llc | All optical chromatic and polarization mode dispersion correctors |
US7123795B2 (en) * | 2004-10-25 | 2006-10-17 | Teraxion Inc. | Methods for the alignment and calibration of the spectral responses of optical filters |
JP4975276B2 (en) * | 2005-06-14 | 2012-07-11 | Mitsubishi Electric Corporation | Variable dispersion compensator and manufacturing method thereof |
CN100383584C (en) * | 2005-10-21 | 2008-04-23 | Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences | Fiber grating millimeter-wave converter and preparation method thereof |
CN100451701C (en) * | 2006-06-28 | 2009-01-14 | Institute of Semiconductors, Chinese Academy of Sciences | Integrable multi-channel chromatic dispersion compensator |
US7711224B1 (en) | 2007-09-14 | 2010-05-04 | Teraxion Inc. | Colorless multichannel tunable dispersion compensator |
US7689077B1 (en) | 2008-11-07 | 2010-03-30 | International Business Machines Corporation | Low differential delay chromatic dispersion compensator |
US9331784B2 (en) * | 2008-11-07 | 2016-05-03 | International Business Machines Corporation | Dynamic tunable low latency chromatic dispersion compensator |
US8331745B2 (en) * | 2009-03-18 | 2012-12-11 | Teraxion Inc. | Assembly for applying a temperature gradient to a refractive index grating and chromatic dispersion compensator |
JP4952744B2 (en) * | 2009-06-15 | 2012-06-13 | Fujitsu Ltd. | Variable wavelength dispersion compensator and optical receiver module |
US8326153B2 (en) | 2010-04-09 | 2012-12-04 | Oclaro (North America), Inc. | Tunable dispersion compensator configured for continuous setpoint control |
CN102853913B (en) * | 2012-08-30 | 2014-07-16 | University of Science and Technology of China | Real-time spectrum analysis device and method for fiber Bragg gratings |
US9941973B2 (en) | 2016-08-29 | 2018-04-10 | Ciena Corporation | Phase modulator with reduced residual amplitude modulation |
CN106788865B (en) * | 2016-12-12 | 2018-06-12 | Nanjing University of Science and Technology | Wavelength-division-multiplexed true-time-delay experimental apparatus and method based on a fiber reflector and LCFBG |
US11226504B2 (en) | 2019-07-19 | 2022-01-18 | Ciena Corporation | Free-carrier absorption variable optical attenuators and thermal phase shifters formed by an optical waveguide having multiple passes in an intrinsic region |
CN113541799B (en) * | 2021-06-15 | 2022-09-06 | Zhejiang University | Digital-analog combined cascaded tunable silicon-based dispersion compensation device |
CN113904726B (en) * | 2021-11-15 | 2022-09-16 | Southeast University | Dispersion waveguide structure with large time delay difference |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5007705A (en) | 1989-12-26 | 1991-04-16 | United Technologies Corporation | Variable optical fiber Bragg filter arrangement |
GB9024326D0 (en) | 1990-11-08 | 1990-12-19 | British Telecomm | Method of forming optical fibre gratings |
CA2134958C (en) | 1994-11-02 | 2004-10-12 | Raymond Massey Measures | Apparatus and method of bragg intra-grating strain control |
US5671307A (en) | 1995-04-10 | 1997-09-23 | Universite Laval | Use of a temperature gradient to impose a chirp on a fibre bragg grating |
CA2229958A1 (en) | 1997-03-26 | 1998-09-26 | Jds Fitel Inc. | A method and device for wavelength and bandwidth tuning of an optical grating |
US5982963A (en) | 1997-12-15 | 1999-11-09 | University Of Southern California | Tunable nonlinearly chirped grating |
US6275629B1 (en) | 1998-09-11 | 2001-08-14 | Lucent Technologies Inc. | Optical grating devices with adjustable chirp |
US6148127A (en) * | 1998-09-23 | 2000-11-14 | Lucent Technologies Inc. | Tunable dispersion compensator and optical system comprising same |
US6122421A (en) | 1998-09-24 | 2000-09-19 | Lucent Technologies Inc. | Magnetostrictive wavelength-shifting devices and optical communication systems comprising same |
US6363187B1 (en) | 1999-08-30 | 2002-03-26 | Northern Telecom Limited | Chromatic dispersion compensation |
CA2324709A1 (en) * | 1999-11-05 | 2001-05-05 | Jds Uniphase Inc. | Tunable dispersion compensator |
WO2001051972A1 (en) | 2000-01-07 | 2001-07-19 | University Of Southern California | Tunable optical dispersion-slope compensation based on a nonlinearly-chirped bragg grating |
US6356684B1 (en) | 2000-04-14 | 2002-03-12 | General Dynamics Advanced Technology Systems, Inc. | Adjustable optical fiber grating dispersion compensators |
US6594410B2 (en) | 2000-08-26 | 2003-07-15 | Cidra Corporation | Wide range tunable optical filter |
JP2002072034A (en) * | 2000-08-29 | 2002-03-12 | Hitachi Cable Ltd | Tunable dispersion compensator |
US6381388B1 (en) | 2000-09-01 | 2002-04-30 | Nortel Networks Limited | Chromatic dispersion compensation |
US20020048430A1 (en) | 2000-10-20 | 2002-04-25 | Minoru Hashimoto | Light dispersion equalizer |
US6360042B1 (en) | 2001-01-31 | 2002-03-19 | Pin Long | Tunable optical fiber gratings device |
US7062123B2 (en) * | 2001-12-31 | 2006-06-13 | 3M Innovative Properties Company | System for higher-order dispersion compensation |
- 2002
  - 2002-07-26 CA CA002395905A patent/CA2395905A1/en not_active Abandoned
- 2003
  - 2003-03-17 AU AU2003212148A patent/AU2003212148A1/en not_active Abandoned
  - 2003-03-17 WO PCT/CA2003/000359 patent/WO2004012365A1/en not_active Application Discontinuation
  - 2003-03-17 US US10/390,590 patent/US6937793B2/en not_active Expired - Lifetime
  - 2003-09-16 US US10/664,787 patent/US20050055340A1/en not_active Abandoned
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030130998A1 (en) * | 1998-11-18 | 2003-07-10 | Harris Corporation | Multiple engine information retrieval and visualization system |
US6792412B1 (en) * | 1999-02-02 | 2004-09-14 | Alan Sullivan | Neural network system and method for controlling information output based on user feedback |
US20050086186A1 (en) * | 1999-02-02 | 2005-04-21 | Alan Sullivan | Neural network system and method for controlling information output based on user feedback |
US20020157095A1 (en) * | 2001-03-02 | 2002-10-24 | International Business Machines Corporation | Content digest system, video digest system, user terminal, video digest generation method, video digest reception method and program therefor |
US6714929B1 (en) * | 2001-04-13 | 2004-03-30 | Auguri Corporation | Weighted preference data search system and method |
US20030177450A1 (en) * | 2002-03-12 | 2003-09-18 | Alex Nugent | Physical neural network design incorporating nanotechnology |
US20040162796A1 (en) * | 2002-03-12 | 2004-08-19 | Alex Nugent | Application of Hebbian and anti-Hebbian learning to nanotechnology-based physical neural networks |
US20030212663A1 (en) * | 2002-05-08 | 2003-11-13 | Doug Leno | Neural network feedback for enhancing text search |
Cited By (111)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9185067B1 (en) | 1999-12-01 | 2015-11-10 | Facebook, Inc. | System and method for analyzing communications |
US9819629B2 (en) | 1999-12-01 | 2017-11-14 | Facebook, Inc. | System and method for analyzing communications |
US9813370B2 (en) | 1999-12-01 | 2017-11-07 | Facebook, Inc. | System and method for analyzing communications |
US9749279B2 (en) | 1999-12-01 | 2017-08-29 | Facebook, Inc. | System and method for analyzing communications |
US9749276B2 (en) | 1999-12-01 | 2017-08-29 | Facebook, Inc. | System and method for analyzing communications |
US9705834B2 (en) | 1999-12-01 | 2017-07-11 | Facebook, Inc. | System and method for analyzing communications |
US9619575B2 (en) | 1999-12-01 | 2017-04-11 | Facebook, Inc. | System and method for analyzing communications |
US9514233B2 (en) | 1999-12-01 | 2016-12-06 | Facebook, Inc. | System and method for analyzing communications |
US9405843B2 (en) | 1999-12-01 | 2016-08-02 | Facebook, Inc. | System and method for analyzing communications |
US9736209B2 (en) | 2000-03-17 | 2017-08-15 | Facebook, Inc. | State change alerts mechanism |
US9246975B2 (en) | 2000-03-17 | 2016-01-26 | Facebook, Inc. | State change alerts mechanism |
US9203879B2 (en) | 2000-03-17 | 2015-12-01 | Facebook, Inc. | Offline alerts mechanism |
US9621376B2 (en) | 2002-11-18 | 2017-04-11 | Facebook, Inc. | Dynamic location of a subordinate user |
US8775560B2 (en) | 2002-11-18 | 2014-07-08 | Facebook, Inc. | Host-based intelligent results related to a character stream |
US10778635B2 (en) | 2002-11-18 | 2020-09-15 | Facebook, Inc. | People lists |
US8954530B2 (en) | 2002-11-18 | 2015-02-10 | Facebook, Inc. | Intelligent results related to a character stream |
US10389661B2 (en) | 2002-11-18 | 2019-08-20 | Facebook, Inc. | Managing electronic messages sent to mobile devices associated with electronic messaging accounts |
US9560000B2 (en) | 2002-11-18 | 2017-01-31 | Facebook, Inc. | Reconfiguring an electronic message to effect an enhanced notification |
US9515977B2 (en) | 2002-11-18 | 2016-12-06 | Facebook, Inc. | Time based electronic message delivery |
US9571440B2 (en) | 2002-11-18 | 2017-02-14 | Facebook, Inc. | Notification archive |
US9647872B2 (en) | 2002-11-18 | 2017-05-09 | Facebook, Inc. | Dynamic identification of other users to an online user |
US9356890B2 (en) | 2002-11-18 | 2016-05-31 | Facebook, Inc. | Enhanced buddy list using mobile device identifiers |
US9667585B2 (en) | 2002-11-18 | 2017-05-30 | Facebook, Inc. | Central people lists accessible by multiple applications |
US10033669B2 (en) | 2002-11-18 | 2018-07-24 | Facebook, Inc. | Managing electronic messages sent to reply telephone numbers |
US9319356B2 (en) | 2002-11-18 | 2016-04-19 | Facebook, Inc. | Message delivery control settings |
US9313046B2 (en) | 2002-11-18 | 2016-04-12 | Facebook, Inc. | Presenting dynamic location of a user |
US9894018B2 (en) | 2002-11-18 | 2018-02-13 | Facebook, Inc. | Electronic messaging using reply telephone numbers |
US9852126B2 (en) | 2002-11-18 | 2017-12-26 | Facebook, Inc. | Host-based intelligent results related to a character stream |
US9253136B2 (en) | 2002-11-18 | 2016-02-02 | Facebook, Inc. | Electronic message delivery based on presence information |
US9729489B2 (en) | 2002-11-18 | 2017-08-08 | Facebook, Inc. | Systems and methods for notification management and delivery |
US9203794B2 (en) | 2002-11-18 | 2015-12-01 | Facebook, Inc. | Systems and methods for reconfiguring electronic messages |
US8954531B2 (en) | 2002-11-18 | 2015-02-10 | Facebook, Inc. | Intelligent messaging label results related to a character stream |
US9171064B2 (en) | 2002-11-18 | 2015-10-27 | Facebook, Inc. | Intelligent community based results related to a character stream |
US8452849B2 (en) | 2002-11-18 | 2013-05-28 | Facebook, Inc. | Host-based intelligent results related to a character stream |
US9075868B2 (en) | 2002-11-18 | 2015-07-07 | Facebook, Inc. | Intelligent results based on database queries |
US9075867B2 (en) | 2002-11-18 | 2015-07-07 | Facebook, Inc. | Intelligent results using an assistant |
US9053173B2 (en) | 2002-11-18 | 2015-06-09 | Facebook, Inc. | Intelligent results related to a portion of a search query |
US9774560B2 (en) | 2002-11-18 | 2017-09-26 | Facebook, Inc. | People lists |
US8701014B1 (en) | 2002-11-18 | 2014-04-15 | Facebook, Inc. | Account linking |
US9769104B2 (en) | 2002-11-18 | 2017-09-19 | Facebook, Inc. | Methods and system for delivering multiple notifications |
US9053175B2 (en) | 2002-11-18 | 2015-06-09 | Facebook, Inc. | Intelligent results using a spelling correction agent |
US9203647B2 (en) | 2002-11-18 | 2015-12-01 | Facebook, Inc. | Dynamic online and geographic location of a user |
US9053174B2 (en) | 2002-11-18 | 2015-06-09 | Facebook, Inc. | Intelligent vendor results related to a character stream |
US9047364B2 (en) | 2002-11-18 | 2015-06-02 | Facebook, Inc. | Intelligent client capability-based results related to a character stream |
US8819176B2 (en) | 2002-11-18 | 2014-08-26 | Facebook, Inc. | Intelligent map results related to a character stream |
US8965964B1 (en) | 2002-11-18 | 2015-02-24 | Facebook, Inc. | Managing forwarded electronic messages |
US9571439B2 (en) | 2002-11-18 | 2017-02-14 | Facebook, Inc. | Systems and methods for notification delivery |
US8954534B2 (en) | 2002-11-18 | 2015-02-10 | Facebook, Inc. | Host-based intelligent results related to a character stream |
USRE48102E1 (en) | 2002-12-31 | 2020-07-14 | Facebook, Inc. | Implicit population of access control lists |
US8874672B2 (en) | 2003-03-26 | 2014-10-28 | Facebook, Inc. | Identifying and using identities deemed to be known to a user |
US20040205127A1 (en) * | 2003-03-26 | 2004-10-14 | Roy Ben-Yoseph | Identifying and using identities deemed to be known to a user |
US9736255B2 (en) | 2003-03-26 | 2017-08-15 | Facebook, Inc. | Methods of providing access to messages based on degrees of separation |
US9531826B2 (en) | 2003-03-26 | 2016-12-27 | Facebook, Inc. | Managing electronic messages based on inference scores |
US20040210639A1 (en) * | 2003-03-26 | 2004-10-21 | Roy Ben-Yoseph | Identifying and using identities deemed to be known to a user |
US7603417B2 (en) | 2003-03-26 | 2009-10-13 | Aol Llc | Identifying and using identities deemed to be known to a user |
US9516125B2 (en) | 2003-03-26 | 2016-12-06 | Facebook, Inc. | Identifying and using identities deemed to be known to a user |
US20040193684A1 (en) * | 2003-03-26 | 2004-09-30 | Roy Ben-Yoseph | Identifying and using identities deemed to be known to a user |
US7613776B1 (en) | 2003-03-26 | 2009-11-03 | Aol Llc | Identifying and using identities deemed to be known to a user |
US8117265B2 (en) | 2003-03-26 | 2012-02-14 | Aol Inc. | Identifying and using identities deemed to be known to a user |
US20040205126A1 (en) * | 2003-03-26 | 2004-10-14 | Roy Ben-Yoseph | Identifying and using identities deemed to be known to a user |
US9462046B2 (en) | 2003-04-02 | 2016-10-04 | Facebook, Inc. | Degrees of separation for handling communications |
US10102504B2 (en) | 2003-09-05 | 2018-10-16 | Facebook, Inc. | Methods for controlling display of electronic messages captured based on community rankings |
US9070118B2 (en) | 2003-09-05 | 2015-06-30 | Facebook, Inc. | Methods for capturing electronic messages based on capture rules relating to user actions regarding received electronic messages |
US8577972B1 (en) | 2003-09-05 | 2013-11-05 | Facebook, Inc. | Methods and systems for capturing and managing instant messages |
US10187334B2 (en) | 2003-11-26 | 2019-01-22 | Facebook, Inc. | User-defined electronic message preferences |
US7299222B1 (en) * | 2003-12-30 | 2007-11-20 | Aol Llc | Enhanced search results |
US20080082512A1 (en) * | 2003-12-30 | 2008-04-03 | Aol Llc | Enhanced Search Results |
US8473855B2 (en) | 2003-12-30 | 2013-06-25 | Microsoft Corporation | Enhanced search results |
US10341289B2 (en) | 2004-03-05 | 2019-07-02 | Facebook, Inc. | Systems and methods of calculating communications strengths |
US8843486B2 (en) | 2004-09-27 | 2014-09-23 | Microsoft Corporation | System and method for scoping searches using index keys |
US20100017403A1 (en) * | 2004-09-27 | 2010-01-21 | Microsoft Corporation | System and method for scoping searches using index keys |
US7827181B2 (en) | 2004-09-30 | 2010-11-02 | Microsoft Corporation | Click distance determination |
US20060074871A1 (en) * | 2004-09-30 | 2006-04-06 | Microsoft Corporation | System and method for incorporating anchor text into ranking search results |
US7739277B2 (en) | 2004-09-30 | 2010-06-15 | Microsoft Corporation | System and method for incorporating anchor text into ranking search results |
US8082246B2 (en) | 2004-09-30 | 2011-12-20 | Microsoft Corporation | System and method for ranking search results using click distance |
US8635216B1 (en) * | 2004-09-30 | 2014-01-21 | Avaya Inc. | Enhancing network information retrieval according to a user search profile |
US20060069982A1 (en) * | 2004-09-30 | 2006-03-30 | Microsoft Corporation | Click distance determination |
US7761448B2 (en) | 2004-09-30 | 2010-07-20 | Microsoft Corporation | System and method for ranking search results using click distance |
US9727631B2 (en) | 2004-12-20 | 2017-08-08 | Facebook, Inc. | Automatic categorization of entries in a contact list |
US7716198B2 (en) | 2004-12-21 | 2010-05-11 | Microsoft Corporation | Ranking search results using feature extraction |
US20060136411A1 (en) * | 2004-12-21 | 2006-06-22 | Microsoft Corporation | Ranking search results using feature extraction |
US20080104047A1 (en) * | 2005-02-16 | 2008-05-01 | Transaxtions Llc | Intelligent search with guiding info |
US7792811B2 (en) | 2005-02-16 | 2010-09-07 | Transaxtions Llc | Intelligent search with guiding info |
US20060200460A1 (en) * | 2005-03-03 | 2006-09-07 | Microsoft Corporation | System and method for ranking search results using file types |
US20060294100A1 (en) * | 2005-03-03 | 2006-12-28 | Microsoft Corporation | Ranking search results using language types |
US7792833B2 (en) | 2005-03-03 | 2010-09-07 | Microsoft Corporation | Ranking search results using language types |
US9959525B2 (en) | 2005-05-23 | 2018-05-01 | Monster Worldwide, Inc. | Intelligent job matching system and method |
US20060265270A1 (en) * | 2005-05-23 | 2006-11-23 | Adam Hyder | Intelligent job matching system and method |
US20070038622A1 (en) * | 2005-08-15 | 2007-02-15 | Microsoft Corporation | Method ranking search results using biased click distance |
US20070112758A1 (en) * | 2005-11-14 | 2007-05-17 | Aol Llc | Displaying User Feedback for Search Results From People Related to a User |
US10181116B1 (en) | 2006-01-09 | 2019-01-15 | Monster Worldwide, Inc. | Apparatuses, systems and methods for data entry correlation |
US10387839B2 (en) | 2006-03-31 | 2019-08-20 | Monster Worldwide, Inc. | Apparatuses, methods and systems for automated online data submission |
US20090030896A1 (en) * | 2007-07-23 | 2009-01-29 | Autiq As | Inference search engine |
US7840569B2 (en) | 2007-10-18 | 2010-11-23 | Microsoft Corporation | Enterprise relevancy ranking using a neural network |
US9348912B2 (en) | 2007-10-18 | 2016-05-24 | Microsoft Technology Licensing, Llc | Document length as a static relevance feature for ranking search results |
US20090106235A1 (en) * | 2007-10-18 | 2009-04-23 | Microsoft Corporation | Document Length as a Static Relevance Feature for Ranking Search Results |
US20090106221A1 (en) * | 2007-10-18 | 2009-04-23 | Microsoft Corporation | Ranking and Providing Search Results Based In Part On A Number Of Click-Through Features |
US20090259651A1 (en) * | 2008-04-11 | 2009-10-15 | Microsoft Corporation | Search results ranking using editing distance and document information |
US8812493B2 (en) | 2008-04-11 | 2014-08-19 | Microsoft Corporation | Search results ranking using editing distance and document information |
US9830575B1 (en) | 2008-04-21 | 2017-11-28 | Monster Worldwide, Inc. | Apparatuses, methods and systems for advancement path taxonomy |
US9779390B1 (en) | 2008-04-21 | 2017-10-03 | Monster Worldwide, Inc. | Apparatuses, methods and systems for advancement path benchmarking |
US10387837B1 (en) | 2008-04-21 | 2019-08-20 | Monster Worldwide, Inc. | Apparatuses, methods and systems for career path advancement structuring |
US8032469B2 (en) | 2008-05-06 | 2011-10-04 | Microsoft Corporation | Recommending similar content identified with a neural network |
US20090281975A1 (en) * | 2008-05-06 | 2009-11-12 | Microsoft Corporation | Recommending similar content identified with a neural network |
US8688711B1 (en) * | 2009-03-31 | 2014-04-01 | Emc Corporation | Customizable relevancy criteria |
US8719275B1 (en) | 2009-03-31 | 2014-05-06 | Emc Corporation | Color coded radars |
US9195755B1 (en) * | 2009-03-31 | 2015-11-24 | Emc Corporation | Relevancy radar |
US8738635B2 (en) | 2010-06-01 | 2014-05-27 | Microsoft Corporation | Detection of junk in search result ranking |
US9495462B2 (en) | 2012-01-27 | 2016-11-15 | Microsoft Technology Licensing, Llc | Re-ranking search results |
US20140222666A1 (en) * | 2012-10-15 | 2014-08-07 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for processing electronic transaction information |
RU176922U1 (en) * | 2017-07-19 | 2018-02-01 | Moscow Aviation Institute (National Research University) | Analogue fuzzy processor |
Also Published As
Publication number | Publication date |
---|---|
AU2003212148A1 (en) | 2004-02-16 |
CA2395905A1 (en) | 2004-01-26 |
US6937793B2 (en) | 2005-08-30 |
US20040017972A1 (en) | 2004-01-29 |
WO2004012365A1 (en) | 2004-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20050055340A1 (en) | Neural-based internet search engine with fuzzy and learning processes implemented by backward propogation | |
US20050004905A1 (en) | Search engine with neural network weighting based on parametric user data | |
US8095523B2 (en) | Method and apparatus for context-based content recommendation | |
Menczer et al. | Adaptive retrieval agents: Internalizing local context and scaling up to the Web | |
US6266668B1 (en) | System and method for dynamic data-mining and on-line communication of customized information | |
US8538959B2 (en) | Personalized data search utilizing social activities | |
US20190073420A1 (en) | System for creating a reasoning graph and for ranking of its nodes | |
Velásquez et al. | Adaptive web sites: A knowledge extraction from web data approach | |
Su et al. | How to improve your search engine ranking: Myths and reality | |
Su et al. | How to improve your Google ranking: Myths and reality | |
WO2000063837A1 (en) | System for retrieving multimedia information from the internet using multiple evolving intelligent agents | |
US20040177081A1 (en) | Neural-based internet search engine with fuzzy and learning processes implemented at multiple levels | |
CN111625715B (en) | Information extraction method and device, electronic equipment and storage medium | |
CN112257841A (en) | Data processing method, device and equipment in graph neural network and storage medium | |
Chou et al. | Commercial Internet filters: Perils and opportunities | |
US7962480B1 (en) | Using a weighted tree to determine document relevance | |
Lagopoulos et al. | Content-aware web robot detection | |
Bello et al. | Conversion of website users to customers-The black hat SEO technique | |
Harris | Searching for Diverse Perspectives in News Articles: Using an LSTM Network to Classify Sentiment. | |
Aghabozorgi et al. | Recommender systems: incremental clustering on web log data | |
Zhang et al. | Identification of factors predicting clickthrough in Web searching using neural network analysis | |
Yu et al. | Evolving intelligent text-based agents | |
CN113312479A (en) | Cross-domain false news detection method | |
Meghabghab | Iterative radial basis functions neural networks as metamodels of stochastic simulations of the quality of search engines in the World Wide Web | |
Deng et al. | Relation‐Level User Behavior Modeling for Click‐Through Rate Prediction |
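The titular application and several of the similar documents above describe search-result relevance scoring learned via backward propagation of user feedback. As an illustrative sketch only, not the patent's actual implementation, the following trains a one-hidden-layer scorer by backpropagation; the two input features (term match, link score), the training data, and all identifiers are hypothetical:

```python
# Hypothetical sketch: a tiny neural relevance scorer trained by
# backward propagation. Features, data, and names are illustrative,
# not taken from the patent.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyRelevanceNet:
    def __init__(self, n_in=2, n_hid=3, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [rnd.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        # Hidden activations and a single sigmoid output in (0, 1).
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)) + self.b2)
        return self.y

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        # Output delta for squared error through the sigmoid.
        d_out = (y - target) * y * (1 - y)
        # Propagate the error backward to the hidden layer.
        d_hid = [d_out * w * h * (1 - h) for w, h in zip(self.w2, self.h)]
        for j, h in enumerate(self.h):
            self.w2[j] -= lr * d_out * h
        self.b2 -= lr * d_out
        for j, dj in enumerate(d_hid):
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dj * xi
            self.b1[j] -= lr * dj
        return (y - target) ** 2

# Hypothetical feedback: (features, relevance judged by the user).
data = [([0.9, 0.8], 1.0), ([0.1, 0.2], 0.0),
        ([0.8, 0.1], 0.6), ([0.2, 0.9], 0.4)]
net = TinyRelevanceNet()
for _ in range(2000):
    for x, t in data:
        net.train_step(x, t)
```

After training, results resembling positively judged examples score higher than those resembling negative ones, which is the behavior a feedback-driven learning mechanism of this general kind would reinforce.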
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |