WO2003046769A1 - Methods and apparatus for statistical data analysis - Google Patents

Methods and apparatus for statistical data analysis

Info

Publication number
WO2003046769A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
store
rdf
triples
epoch
Prior art date
Application number
PCT/US2002/037727
Other languages
French (fr)
Inventor
Colin P. Britton
Ashok Kumar
David Bigwood
Howard Greenblatt
Original Assignee
Metatomix, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Metatomix, Inc. filed Critical Metatomix, Inc.
Priority to CA002471468A priority Critical patent/CA2471468A1/en
Priority to EP02791310A priority patent/EP1483688A1/en
Priority to AU2002365577A priority patent/AU2002365577A1/en
Publication of WO2003046769A1 publication Critical patent/WO2003046769A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/80Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
    • G06F16/84Mapping; Conversion
    • G06F16/86Mapping to a database
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2452Query translation

Definitions

  • the invention pertains to digital data processing and, more particularly, to methods and apparatus for enterprise business visibility and insight using real-time reporting tools.
  • a major impediment to enterprise business visibility is the consolidation of data from these disparate legacy databases with one another and with that from newer e-commerce databases.
  • inventory on-hand data gleaned from a legacy ERP system may be difficult to combine with customer order data gleaned from web servers that support e-commerce (and other web-based) transactions. This is not to mention difficulties, for example, in consolidating resource scheduling data from the ERP system with the forecasting data from the marketing database system.
  • An object of this invention is to provide improved methods and apparatus for digital data processing and, more particularly, for enterprise business visibility and insight (hereinafter, "enterprise business visibility").
  • a further object is to provide such methods and apparatus as can rapidly and accurately retrieve information responsive to user inquiries.
  • a further object of the invention is to provide such methods and apparatus as can be readily and inexpensively integrated with legacy, current and future database management systems.
  • a still further object of the invention is to provide such methods and apparatus as can be implemented incrementally or otherwise without interruption of enterprise operation.
  • Yet a still further object of the invention is to provide such methods and apparatus as to facilitate ready access to up-to-date enterprise data, regardless of its underlying source.
  • Yet still a further object of the invention is to provide such methods and apparatus as permit flexible presentation of enterprise data in an easily understood manner.
  • the aforementioned are among the objects attained by the invention, one aspect of which provides a method of time-wise data reduction that includes the steps of inputting data from a source; summarizing that data according to one or more selected epochs in which it belongs; and generating for each such selected epoch one or more RDF triples characterizing the summarized data.
  • the data source may be, for example, a database, a data stream or otherwise.
  • the selected epoch may be a second, minute, hour, week, month, year, or so forth.
  • Related aspects of the invention provide for generating the RDF triples in the form of RDF document objects.
  • RDF document objects can be stored, for example, in a hierarchical data store such as, for example, a WebDAV server.
  • Still further related aspects of the invention provide for parsing triples from the RDF document objects and storing them in a relational data store.
  • a further related aspect of the invention provides for storing the triples in a relational store that is organized according to a hashed with origin approach.
  • Still yet other aspects of the invention provide for retrieving information represented by the triples in the hierarchical and/or relational data stores, e.g., for presentation to a user.
  • Related aspects of the invention provide for retrieving triples containing time-wise reduced data, e.g., for presentation to a user.
  • Related aspects of the invention provide methods as described above including summarizing the input data according to one or more epochs of differing length. Further aspects of the invention provide methods as described above including querying the source, e.g., a legacy database, in order to obtain the input data. Related aspects of the invention provide for generating such queries in SQL format.
  • Still other aspects of the invention provide methods as described above including the step of inputting an XML file that identifies one or more sources of input data, one or more fields thereof to be summarized in the time-wise reduction, and/or one or more epochs for which those fields are to be summarized.
  • Further aspects of the invention provide methods as described above including responding to an input datum by updating summary data for an epoch of the shortest duration, e.g., a store of per day data.
  • Related aspects of the invention provide for updating a store of summary data for epochs of greater duration, e.g., stores of per week or per month data, from summary data maintained in a store for an epoch of lesser duration, e.g., a store of per day data.
  • Figure 1 depicts an improved enterprise business visibility and insight system according to the invention;
  • Figure 1A depicts an architecture for a hologram data store according to the invention, e.g., in the system of Figure 1;
  • Figure 1B depicts the tables in a model store and a triples store of the hologram data store of Figure 1A;
  • Figure 2 depicts a directed graph representing data triples of the type maintained in a data store according to the invention.
  • Figure 3 is a functional block diagram of a time-wise data reduction module in a system according to the invention.
  • FIG. 1 depicts a real-time enterprise business visibility and insight system according to the invention.
  • the illustrated system 100 includes connectors 108 that provide software interfaces to legacy, e-commerce and other databases 140 (hereinafter, collectively, “legacy databases”).
  • a “hologram” database 114 (hereinafter, “data store” or “hologram data store”), which is coupled to the legacy databases 140 via the connectors 108, stores data from those databases 140.
  • a framework server 116 accesses the data store 114, presenting selected data to (and permitting queries from) a user browser 118.
  • the server 116 can also permit updates to data in the data store 114 and, thereby, in the legacy databases 140.
  • Legacy databases 140 represent existing (and future) databases and other sources of information (including data streams) in a company, organization or other entity (hereinafter, "enterprise").
  • databases 140 include a retail e-commerce database (e.g., as indicated by the cloud and server icons adjacent database 140c) maintained with a Sybase® database management system, an inventory database maintained with an Oracle® database management system and an ERP database maintained with a SAP® Enterprise Resource Planning system.
  • Connectors 108 serve as an interface to legacy database systems 140. Each connector applies requests to, and receives information from, a respective legacy database, using that database's API or other interface mechanism. Thus, for example, connector 108a applies requests to legacy database 140a using the corresponding SAP API; connector 108b, to legacy database 140b using Oracle API; and connector 108c, to legacy database 140c using the corresponding Sybase API.
  • these requests are for purposes of accessing data stored in the respective databases 140.
  • the requests can be simple queries, such as SQL queries and the like (e.g., depending on the type of the underlying database and its API) or more complex sets of queries, such as those commonly used in data mining.
  • one or more of the connectors can use decision trees, statistical techniques or other query and analysis mechanisms known in the art of data mining to extract information from the databases.
  • Specific queries and analysis methodologies can be specified by the hologram data store 114 or the framework server 116 for application by the connectors.
  • the connectors themselves can construct specific queries and methodologies from more general queries received from the data store 114 or server 116. For example, request-specific items can be "plugged" into query templates thereby effecting greater speed and efficiency.
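The "plugging" of request-specific items into a query template can be sketched as follows. This is a minimal sketch: the template format, field names and helper function are illustrative assumptions, not part of the disclosure.

```python
# Sketch of a connector constructing a specific query from a general
# template by "plugging in" request-specific items. The template text
# and example field/table names are illustrative assumptions.

SQL_TEMPLATE = "SELECT {fields} FROM {table} WHERE {conditions}"

def build_query(fields, table, conditions):
    """Construct a concrete SQL query from the general template."""
    return SQL_TEMPLATE.format(
        fields=", ".join(fields),
        table=table,
        conditions=" AND ".join(conditions),
    )

query = build_query(["order_id", "customer"], "orders", ["status = 'open'"])
# query == "SELECT order_id, customer FROM orders WHERE status = 'open'"
```

Because the template is prepared once and only the request-specific items vary, the connector avoids constructing each query from scratch, which is the speed/efficiency gain referred to above.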
  • the requests can be stored in the connectors 108 for application and/or reapplication to the respective legacy databases 140 to provide one-time or periodic data store updates.
  • Connectors can use expiration date information to determine which of a plurality of similar data to return to the data store, or if dates are absent, the connectors can mark returned data as being of lower confidence levels.
  • Data and other information generated by the databases 140 in response to the requests are routed by connectors to the hologram data store 114. That other information can include, for example, expiry or other adjectival data for use by the data store in caching, purging, updating and selecting data.
  • the messages can be cached by the connectors 108, though, they are preferably immediately routed to the store 114.
  • the hologram data store 114 stores data from the legacy databases 140 (and from the framework server 116, as discussed below) as RDF triples.
  • the data store 114 can be embodied on any digital data processing system or systems that are in communications coupling (e.g., as defined above) with the connectors 108 and the framework server 116.
  • the data store 114 is embodied in a workstation or other high-end computing device with high capacity storage devices or arrays, though, this may not be required for any given implementation.
  • though the hologram data store 114 may be contained on an optical storage device, this is not the sense in which the term "hologram" is used. Rather, it refers to its storage of data from multiple sources (e.g., the legacy databases 140) in a form which permits that data to be queried and coalesced from a variety of perspectives, depending on the needs of the user and the capabilities of the framework server 116.
  • a preferred data store 114 stores the data from the legacy databases 140 in subject-predicate-object form, e.g., RDF triples, though those of ordinary skill in the art will appreciate that other forms may be used as well, or instead.
  • RDF is a way of expressing the properties of items of data. Those items are referred to as subjects. Their properties are referred to as predicates. And, the values of those properties are referred to as objects.
  • an expression of a property of an item is referred to as a triple, a convenience reflecting that the expression contains three parts: subject, predicate and object.
  • Subjects, also referred to as resources, can be anything that is described by an RDF expression.
  • a subject can be a person, place or thing — though, typically, only an identifier of the subject is used in an actual RDF expression, not the person, place or thing itself. Examples of subjects might be "car," "Joe," "http://www.metatomix.com."
  • a predicate identifies a property of a subject. According to the RDF specification, this may be any "specific aspect, characteristic, attribute, or relation used to describe a resource.” For the three exemplary subjects above, examples of predicates might be "make,” “citizenship,” “owner.”
  • Objects can be literals, i.e., strings that identify or name the corresponding property value, or they can themselves be resources.
  • a given subject may have multiple predicates, each predicate indexing an object.
  • a subject postal zip code might have an index to an object town and an index to an object state, either (or both) index being a predicate URI.
  • Presented below is a sampling of RDF triples, here expressed in extensible markup language (XML) syntax.
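A sampling of this form, using the subject and predicates discussed in the surrounding text, might read as follows. The town value "Warwick" appears elsewhere in the text; the state and country values shown are illustrative assumptions.

```xml
<!-- Illustrative RDF/XML sampling; only the subject, predicate names and
     the town value "Warwick" are taken from the text. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:postalCode="http://www.metatomix.com/postalCode/1.0#">
  <rdf:Description rdf:about="postal://zip#02886">
    <postalCode:town>Warwick</postalCode:town>
    <postalCode:state>RI</postalCode:state>
    <postalCode:country>USA</postalCode:country>
    <postalCode:zip>02886</postalCode:zip>
  </rdf:Description>
</rdf:RDF>
```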
  • the listing shows only a sampling of the triples in a database 114, which typically would contain tens of thousands or more of such triples.
  • Subjects are indicated within the listing using an "rdf:about" statement.
  • the second line of the listing defines a subject as a resource named "postal://zip#02886.” That subject has predicates and objects that follow the subject declaration.
  • the subjects and predicates are expressed as uniform resource identifiers (URIs), e.g., of the type defined in Berners-Lee et al, Uniform Resource Identifiers (URI): Generic Syntax (RFC 2396) (August 1998), and can be said to be expressed in the form <scheme>://<path>#<fragment>.
  • <scheme> is "postal"
  • <path> is "zip"
  • <fragment> is, for example, "02886" and "02901."
  • predicates are likewise expressed in the form <scheme>://<path>#<fragment>, as is evident to those of ordinary skill in the art.
  • predicates that are formally expressed as: "http://www.metatomix.com/postalCode/1.0#town," "http://www.metatomix.com/postalCode/1.0#state," "http://www.metatomix.com/postalCode/1.0#country" and "http://www.metatomix.com/postalCode/1.0#zip."
  • the <scheme> for the predicates is "http" and <path> is "www.metatomix.com/postalCode/1.0."
  • the <fragment> portions are <town>, <state>, <country> and <zip>, respectively.
  • Figure 2 depicts a directed graph composed of RDF triples of the type stored by the illustrated data store 114, here, by way of non-limiting example, triples representing relationships among four companies (id#l, id#2, id#3 and id#4) and between two of those companies (id#l and id#2) and their employees.
  • subjects and resource-type objects are depicted as oval-shaped nodes; literal-type objects are depicted as rectangular nodes; and predicates are depicted as arcs connecting those nodes.
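The construction of such a directed graph from a set of triples can be sketched as follows. The particular company/employee relationships used here are illustrative assumptions, not the exact contents of Figure 2.

```python
# Minimal sketch of a directed graph built from RDF-style triples: each
# (subject, predicate, object) triple becomes an arc, labelled with the
# predicate, from the subject node to the object node. The specific
# relationships below are illustrative assumptions.
from collections import defaultdict

triples = [
    ("id#1", "subsidiary", "id#3"),
    ("id#2", "subsidiary", "id#4"),
    ("id#1", "employs", "John"),
    ("id#2", "employs", "Mary"),
]

graph = defaultdict(list)          # subject -> [(predicate, object), ...]
for subject, predicate, obj in triples:
    graph[subject].append((predicate, obj))

# "Walking" the graph from a node follows its outgoing arcs.
def neighbors(node):
    return [obj for _, obj in graph.get(node, [])]
```

A consumer such as the framework server can then traverse the graph from any node of interest, e.g., `neighbors("id#1")` yields the nodes reachable from company id#1.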
  • Figure 1A depicts an architecture for a preferred hologram data store 114 according to the invention.
  • the illustrated store 114 includes a model document store 114A and a model document manager 114B. It also includes a relational triples store 114C, a relational triples store manager 114D, and a parser 114E interconnected as shown in the drawing.
  • RDF triples maintained by the store 114 are received, from the legacy databases 140 (via connectors 108) and/or from the time-based data reduction module 150 (described below), in the form of document objects, e.g., of the type generated from a Document Object Model (DOM) in a JAVA, C++ or other application.
  • these are stored in the model document store 114A as such (i.e., as document objects), particularly, using the tables and inter-table relationships shown in Figure 1B (see dashed box labelled 114B).
  • the model document manager 114B manages storage/retrieval of the document object to/from the model document store 114A.
  • the manager 114B comprises the Slide content management and integration framework, publicly available through the Apache Software Foundation. It stores (and retrieves) document objects to (and from) the store 114A in accord with the WebDAV protocol.
  • Those skilled in the art will, of course, appreciate that other applications can be used in place of Slide and that document objects can be stored/retrieved from the store 114A in accord with other protocols, industry- standard, proprietary or otherwise.
  • WebDAV protocol allows for adding, updating and deleting RDF document objects using a variety of WebDAV client tools (e.g., Microsoft Windows Explorer, Microsoft Office, XML Spy or other such tools available from a variety of vendors), in addition to adding, updating and deleting document objects via connectors 108 and/or time-based data reduction module 150.
  • This also allows for presenting the user with a view of a traversable file system, with RDF documents that can be opened directly in XML editing tools, from Java programs supporting WebDAV protocols, or from processes on remote machines via HTTP, on which WebDAV is based.
  • RDF triples received by the store 114 are also stored to a relational database, here, store 114C, maintained by a relational database management system (RDBMS) 114D.
  • the triples are divided into their constituent components (subject, predicate, and object), which are indexed and stored to respective tables in the manner of a "hashed with origin" approach.
  • a parser 114E extracts its triples and conveys them to the RDBMS 114D with a corresponding indicator that they are to be added, updated or deleted from the relational database.
  • Such a parser 114E operates in the conventional manner known in the art for extracting triples from RDF documents.
  • the illustrated database store 114C has five tables interrelated as particularly shown in Figure 1B (see dashed box labelled 114C).
  • these tables rely on indexes generated by hashing the triples' respective subjects, predicates and objects using a 64-bit hashing algorithm based on cyclical redundancy codes (CRCs) -- though, it will be appreciated that the indexes can be generated by other techniques as well, industry-standard, proprietary or otherwise.
  • the "triples" table 534 maintains one record for each stored triple.
  • Each record contains an aforementioned hash code for each of the subject, predicate and object that make up the respective triple, along with a resource flag (“resource_flg”) indicating whether that object is of the resource or literal type.
  • Each record also includes an aforementioned hash code ("m_hash") identifying the document object (stored in model document store 114A) from which the triple was parsed, e.g., by parser 114E.
  • the values of the subjects, predicates and objects are not stored in the triples table. Rather, those values are stored in the resources table 530, namespaces table 532 and literals table 536.
  • the resources table 530 in conjunction with the namespaces table 532, stores the subjects, predicates and resource-type objects; whereas, the literals table 536 stores the literal-type objects.
  • the resources table 530 maintains one record for each unique subject, predicate or resource-type object. Each record contains the value of the resource, along with its aforementioned 64-bit hash. It is the latter on which the table is indexed.
  • the value ("r_value") contained in each record of the resources table 530 reflects only the unique portion (e.g., <fragment> identifier) of each resource; the common portion is factored into the namespaces table 532.
  • the namespaces table 532 maintains one record for each unique common portion referred to in the prior paragraph (hereinafter, "namespace"). Each record contains the value of that namespace, along with its aforementioned 64-bit hash. As above, it is the latter on which this table is indexed.
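The factoring of a resource into its common (namespace) and unique portions can be sketched as follows; the parsing logic and function name are assumptions, as the text does not specify how the split is performed.

```python
# Sketch of factoring a resource URI into the common "namespace" portion
# (kept in the namespaces table 532) and the unique portion (the "r_value"
# kept in the resources table 530). Splitting on the final '#' is an
# assumption consistent with the <scheme>://<path>#<fragment> form above.
def split_resource(uri):
    """Split a URI of the form <scheme>://<path>#<fragment>."""
    namespace, sep, fragment = uri.rpartition("#")
    if not sep:                      # no fragment: whole URI is unique
        return "", uri
    return namespace + "#", fragment

ns, value = split_resource("http://www.metatomix.com/postalCode/1.0#town")
# ns == "http://www.metatomix.com/postalCode/1.0#", value == "town"
```

Because many predicates share one namespace, storing that common portion once (and hashing it separately) keeps the resources table compact.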
  • the literals table 536 maintains one record for each unique literal-type object. Each record contains the value of the object, along with its aforementioned 64-bit hash. Each record also includes an indicator of the type of that literal (e.g., integer, string, and so forth). Again, it is the latter on which this table is indexed.
  • the models table 538 maintains one record for each RDF document object contained in the model document store 114A.
  • Each record contains the URI of the corresponding document object ("uri_string"), along with its aforementioned 64-bit hash ("m_hash"). It is the latter on which this table is indexed.
  • each record of the models table 538 also contains the ID of the corresponding document object in the store 114A. That ID can be assigned by the model document manager 114B, or otherwise.
  • relational triples store 114C is a schema-less structure for storing RDF triples.
  • triples maintained in that store can be reconstituted via an SQL query. For example, to reconstitute the RDF triple having a subject equal to "postal://zip#02886", a predicate equal to "http://www.metatomix.com/postalCode/1.0#town", and an object equal to "Warwick", the following SQL statement is applied:
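A statement of that general form can be sketched as follows. This is a minimal, self-contained sketch, not the patent's actual statement: the table and column names follow the schema described above, the namespace factoring is omitted for brevity (full URIs are stored as "r_value"), and the hash function is a stand-in for the CRC-based 64-bit hash, whose exact algorithm is not reproduced here.

```python
# Sketch of "hashed with origin" storage and the SQL join that
# reconstitutes a triple from it. Hashes index the tables; the values
# themselves live in the resources and literals tables.
import sqlite3
import zlib

def h64(s):
    """Stand-in 64-bit-style hash built from two 32-bit CRCs
    (kept within SQLite's signed 64-bit integer range)."""
    data = s.encode("utf-8")
    return (zlib.crc32(data) << 31) | zlib.crc32(data[::-1])

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE triples   (s_hash INT, p_hash INT, o_hash INT,
                            resource_flg INT, m_hash INT);
    CREATE TABLE resources (r_hash INT PRIMARY KEY, r_value TEXT);
    CREATE TABLE literals  (l_hash INT PRIMARY KEY, l_value TEXT, l_type TEXT);
""")

s, p, o = ("postal://zip#02886",
           "http://www.metatomix.com/postalCode/1.0#town",
           "Warwick")
db.execute("INSERT OR IGNORE INTO resources VALUES (?, ?)", (h64(s), s))
db.execute("INSERT OR IGNORE INTO resources VALUES (?, ?)", (h64(p), p))
db.execute("INSERT OR IGNORE INTO literals VALUES (?, ?, ?)", (h64(o), o, "string"))
# resource_flg = 0 marks a literal-type object; m_hash names the source document.
db.execute("INSERT INTO triples VALUES (?, ?, ?, 0, ?)",
           (h64(s), h64(p), h64(o), h64("doc1")))

# Reconstitute the triple by joining the hash-indexed tables.
row = db.execute("""
    SELECT sr.r_value, pr.r_value, lt.l_value
      FROM triples t
      JOIN resources sr ON sr.r_hash = t.s_hash
      JOIN resources pr ON pr.r_hash = t.p_hash
      JOIN literals  lt ON lt.l_hash = t.o_hash
     WHERE sr.r_value = 'postal://zip#02886'
       AND pr.r_value = 'http://www.metatomix.com/postalCode/1.0#town'
""").fetchone()
```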
  • RDF documents and, more generally, objects maintained in the store 114 can be contained in other stores, structured relationally, hierarchically or otherwise, as well, in addition to or instead of stores 114A and 114C.
  • time-wise data reduction component 150 comprises an XML parser 504, a query module 506, an analysis module 507 and an output module 508.
  • the component 150 performs a time-wise reduction on data from the legacy databases 140. In some embodiments, that data is supplied to the component 150 by the connectors 108 in the form of RDF documents. In the illustrated embodiment, the component 150 functions, in part, like a connector itself — obtaining data directly from the legacy databases 140 before time-wise reducing it.
  • illustrated component 150 outputs the reduced data in the form of RDF triples contained in RDF documents.
  • these are stored in the model store 114A (and the underlying triples, in relational store 114C), alongside the RDF documents (and their respective underlying triples) from which the reduced data was generated. This facilitates, for example, reporting of the time-wise reduced data, e.g., by the framework server 116, since that data is readily available for display to the user and does not require ad hoc generation of data summaries in response to user requests.
  • Module 504 parses an XML file 502 which specifies one or more sources of data to be time-wise reduced. That file may be supplied by the framework server 116, or otherwise.
  • the specified sources may be legacy databases 140, data streams, or otherwise. They may also be connectors 108, e.g., identified by symbolic name, virtual port number, or otherwise.
  • the XML specification file 502 specifies the data items which are to be time-wise reduced. These can be field names, identifiers or otherwise.
  • the XML file 502 further specifies the time periods or epochs over which data is to be time-wise reduced. These can be seconds, minutes, hours, days, months, weeks, years, and so forth, depending on the type of data to be reduced. For example, if the data source contains hospital patient data, the specified epochs may be weeks and months; whereas, if the data source contains web site access data, the specified epochs may be hours and days.
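Such a specification file might look as follows. The element and attribute names are illustrative assumptions; the text does not give the file's actual schema.

```xml
<!-- Illustrative time-wise reduction specification: data sources, the
     fields to be summarized, and the epochs over which to summarize them.
     Element/attribute names are assumptions, not the patent's schema. -->
<reduction>
  <source id="webAccessDb">
    <field name="hitCount"/>
    <epoch unit="hour"/>
    <epoch unit="day"/>
  </source>
  <source id="patientDb">
    <field name="admissions"/>
    <epoch unit="week"/>
    <epoch unit="month"/>
  </source>
</reduction>
```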
  • the parser component 504 parses the XML file 502 to discern the aforementioned data source identifiers, field identifiers, and epochs. To this end, the parser 504 may be constructed and operated in the conventional manner known in the art.
  • the query module 506 generates queries in order to obtain the fields specified in the XML specification file 502. It queries the identified data source(s) in the manner appropriate to those sources. For example, it queries SQL-compatible databases using an SQL query. Other data sources are queried via their respective application program interfaces (APIs), or otherwise. In embodiments where source data is supplied to the component 150 by the connectors 108, querying may be performed explicitly or implicitly by those connectors 108. Moreover, querying might not need to be performed on some data sources, e.g., data streams, from which data is broadcast or otherwise available without the need for request. In such instances, filtering may be substituted for querying in order that the specific fields or other items of data specified in the XML file are obtained.
  • the analysis module 507 compiles time-wise statistics or summaries for each epoch specified in the XML file 502. To this end, it maintains for each such epoch one or more running statistics (e.g., sums or averages) for each data field specified by the file 502 and received from the sources. As data for each field are input, the running statistics for that field are updated. Such updating can include incrementing a count maintained for the field, recomputing a numerical total, modifying a concatenated string, and so forth, as appropriate to the type of the underlying field data.
  • the analysis module 507 would maintain a store reflecting the number of hits thus far counted on a given day for that web site (e.g., based on data received from a source identifying each hit as it occurs, or otherwise).
  • When no further data is received from the source for that day, the module generates RDF output (via the output module 508) reflecting that number of counts (or other specified summary information) for output to the hologram store 114.
  • the analysis module 507 would maintain a separate store of counts for the month for which data is currently being received from the source. As above, when no further data is received from the source for that month, the module generates RDF output reflecting the total number of counts (or other specified summary information) for output to the hologram store 114.
  • An analysis module 507 maintains stores for each epoch for which running statistics (i.e., time-wise summaries) are to be maintained.
  • the stores 514 can be allocated from an array, a pointer table or other data structure, with specific allocations made depending on the specific number of running statistics being tracked.
  • if an XML file 502 specifies that access statistics are to be maintained for a web site on daily and monthly bases using data from a first data source, and that running statistics for the numbers of visitors to a retail store are to be maintained on monthly and yearly bases from data from a second data source, the analysis module 507 can maintain four stores: store 514A maintaining a daily count for the web site; store 514B maintaining a monthly count for the web site; store 514C maintaining a monthly count for the retail store; and store 514D maintaining a yearly count for the retail store.
  • Each of the stores 514 is updated as corresponding data is received from the respective data sources.
  • As each web site access datum is received from the first data source, a count maintained in the first store 514A is incremented.
  • the output module 508 can generate one or more RDF triples reflecting a count for the (then-complete) prior day for storage in the hologram store 114.
  • the store 514A can be reset to zero and the process restarted for tracking accesses on that succeeding day.
  • the second store 514B, i.e., that tracking the longer epoch for data from the first source, can be incremented in parallel with the first store 514A as web access data is received from the source or, alternatively, can be updated when the first store 514A is rolled over, i.e., reset for tracking statistics for each successive day.
  • RDF triples can be generated to reflect web access statistics for the then- completed prior month, concurrently with zeroing the second store 514B for tracking of statistics for the succeeding month.
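The store-update and rollover behaviour described above can be sketched as follows. The class and method names are assumptions, and a simple (epoch, count) record stands in for the RDF summary triples sent to the hologram store 114.

```python
# Sketch of an analysis-module epoch store: a counter is incremented per
# incoming datum; when a datum arrives for a new epoch, the completed
# epoch's summary is emitted and the store is reset ("rolled over").
class EpochCounter:
    def __init__(self, epoch_of):
        self.epoch_of = epoch_of     # maps a datum's timestamp to its epoch
        self.current = None          # epoch currently being accumulated
        self.count = 0
        self.summaries = []          # emitted (epoch, count) summaries

    def add(self, timestamp):
        epoch = self.epoch_of(timestamp)
        if self.current is not None and epoch != self.current:
            # Roll over: emit the completed epoch's summary, reset the store.
            self.summaries.append((self.current, self.count))
            self.count = 0
        self.current = epoch
        self.count += 1

# A daily store keys each datum by its "YYYY-MM-DD" prefix.
daily = EpochCounter(epoch_of=lambda ts: ts[:10])
for ts in ["2002-11-25T09:00", "2002-11-25T17:30", "2002-11-26T08:15"]:
    daily.add(ts)
# daily.summaries == [("2002-11-25", 2)]; the 26th is still accumulating.
```

A monthly store would be an `EpochCounter` keyed on the `"YYYY-MM"` prefix, updated in parallel or fed by the daily store's rollovers, as the text describes.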
  • the analysis module 507 maintains running statistics for the epochs specified in the XML file 502, outputting RDF triples reflecting those statistics as data for each successive epoch is received.
  • running statistics may be maintained in other ways, as well. For example, continuing the above example, in instances where data received from the first source is not ordered by day (but, rather, is intermingled with respect to many days), multiple stores can be maintained, one for each day (or other epoch).
  • the output module 508 generates RDF documents reflecting the summarized data stored in stores 514 for output to the hologram data store 114.
  • This can be performed by generating an RDF stream ad hoc or, preferably, by utilizing native commands, e.g., of the Java programming language, to gather the epoch data into a document object model (DOM).
  • the DOM can be output in RDF format to the hologram store 114 directly.
  • the store 114 supports SQL-like query languages called HxQL and HxML. These allow retrieval of RDF triples matching defined criteria.
  • the data store 114 includes a graph generator (not shown) that uses RDF triples to generate directed graphs in response to queries (e.g., in HxQL or HxML form) from the framework server 116. These may be queries for information reflected by triples originating from data in one or more of the legacy databases 140 (one example might be a request for the residence cities of hotel guests who booked reservations on account over Independence Day weekend, as reflected by data from an e-Commerce database and an Accounts Receivable database).
  • the data store 114 utilizes genetic, self-adapting algorithms to traverse the RDF triples in response to queries from the framework server 116.
  • genetic, self-adapting, algorithms can be beneficially applied to the RDF database which, due to its inherently flexible (i.e., schema-less) structure, is not readily searched using traditional search techniques.
  • the data store utilizes a genetic algorithm that performs several searches, each utilizing a different methodology but all based on the underlying query from the framework server, against the RDF triples. It compares the results of the searches quantitatively to discern which produce(s) the best results and reapplies that search with additional terms or further granularity.
  • the framework server 116 generates requests to the data store 114 (and/or indirectly to the legacy databases via connectors 108, as discussed above) and presents information therefrom to the user via browser 118.
  • the requests can be based on
  • HxQL or HxML requests entered directly by the user, though more typically they are generated by the server 116 based on user selections/responses to questions, dialog boxes or other user-input controls.
  • the framework server includes one or more user interface modules, plug-ins, or the like, each for generating queries of a particular nature.
  • One such module, for example, generates queries pertaining to marketing information, another such module generates queries pertaining to financial information, and so forth.
  • queries to the data store are structured in an SQL-based RDF query language, in the general manner of SquishQL, as known in the art.
  • In addition to generating queries, the framework server 116 (and/or the aforementioned modules) "walks" directed graphs generated by the data store 114 to present to the user (via browser 118) any specific items of requested information. Such walking of the directed graphs can be accomplished via any conventional technique known in the art. Presentation of questions, dialog boxes or other user-input controls to the user and, likewise, presentation of responses thereto based on the directed graph can be accomplished via conventional server/browser or other user interface technology.
  • the framework server 116 permits a user to update data stored in the data store 114 and, thereby, that stored in the legacy databases 140.
  • changes made to data displayed by the browser 118 are transmitted by server 116 to data store 114.
  • any triples implicated by the change are updated in store 114C, as are the corresponding RDF document objects in store 114A.
  • An indication of these changes can be forwarded to the respective legacy databases 140, which utilize the corresponding API (or other interface mechanisms) to update their respective stores.
  • changes made directly to the store 114C as discussed above, e.g., using a WebDAV client can be forwarded to the respective legacy database.
  • the server 116 can present to the user not only data from the data store 114, but also data gleaned by the server directly from other sources.
  • the server 116 can directly query an enterprise web site for statistics regarding web page usage, or otherwise.
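The graph generation and graph "walking" described in the foregoing bullets can be sketched as follows. This is an illustrative reconstruction only (the actual graph generator and HxQL/HxML machinery are not disclosed in detail), and all names and sample data here — `build_graph`, `walk`, the guest/city triples — are hypothetical.

```python
# Hypothetical sketch: assembling a directed graph from RDF triples and
# walking a chain of predicates from a subject, in the spirit of the graph
# generator and framework server described above.
from collections import defaultdict

def build_graph(triples):
    """Index (subject, predicate, object) triples as an adjacency map."""
    graph = defaultdict(list)
    for subj, pred, obj in triples:
        graph[subj].append((pred, obj))
    return graph

def walk(graph, subject, path):
    """Follow a chain of predicates from a subject; return the end values."""
    frontier = [subject]
    for pred in path:
        next_frontier = []
        for node in frontier:
            for p, obj in graph.get(node, []):
                if p == pred:
                    next_frontier.append(obj)
        frontier = next_frontier
    return frontier

# Invented sample data echoing the hotel-guest example above.
triples = [
    ("guest://jones", "bookedOn", "2001-07-04"),
    ("guest://jones", "residence", "city://providence"),
    ("city://providence", "state", "RI"),
]
g = build_graph(triples)
print(walk(g, "guest://jones", ["residence", "state"]))  # ['RI']
```

A query such as "residence state of guest jones" thus reduces to following two arcs of the directed graph.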

Abstract

Methods of time-wise data reduction (152) include the steps of inputting data from a source (206); summarizing that data according to one or more selected epochs (514A-514E) in which it belongs; and generating for each such epoch one or more RDF triples (534) characterizing the summarized data. The data source (206) may be, for example, a database, a data stream or otherwise. The selected epoch (514A-514E) may be a second, minute, hour, week, month, year, or so forth. The triples (534) may be output in the form of RDF document objects. These can be stored, for example, in a hierarchical data store such as a WebDAV server. Triples (534) parsed from the document objects may be maintained in a relational store (114C) that is organized according to a hashed with origin approach.

Description

METHODS AND APPARATUS FOR STATISTICAL DATA ANALYSIS
Background
This application claims the benefit of priority of United States Provisional Patent Application Serial No. 60/332,053, filed November 21, 2001, entitled "Methods And Apparatus For Querying A Relational Database In A System For Real-Time Business Visibility" and U.S. Provisional Patent Application Serial No. 60/332,219, filed on November 21, 2001, entitled "Methods And Apparatus For Calculation and Reduction of Time-Series Metrics From Event Streams Or Legacy Databases In A System For Real-Time Business Visibility." This application is also a continuation-in-part of United States Patent Application Serial No. 09/917,264, filed July 27, 2001, entitled "Methods and Apparatus for Enterprise Application Integration" and United States Patent Application Serial No. 10/051,619, filed October 29, 2001, entitled "Methods And Apparatus For Real-Time Business Visibility Using Persistent Schema-Less Data Storage." The teachings of all of the foregoing applications are incorporated herein by reference.
The invention pertains to digital data processing and, more particularly, to methods and apparatus for enterprise business visibility and insight using real-time reporting tools.
It is not uncommon for a single enterprise to have several separate database systems to track internal and external planning and transactional data. Such systems might have been developed at different times throughout the history of the enterprise and, therefore, represent differing generations of computer technology. For example, a marketing database system tracking customers may be ten years old, while an enterprise resource planning (ERP) system tracking inventory might be two or three years old. Integration between these systems is difficult at best, consuming specialized programming skill and constant maintenance expenses.
A major impediment to enterprise business visibility is the consolidation of data from these disparate legacy databases with one another and with that from newer e-commerce databases. For instance, inventory on-hand data gleaned from a legacy ERP system may be difficult to combine with customer order data gleaned from web servers that support e-commerce (and other web-based) transactions. This is not to mention difficulties, for example, in consolidating resource scheduling data from the ERP system with the forecasting data from the marketing database system.
An object of this invention is to provide improved methods and apparatus for digital data processing and, more particularly, for enterprise business visibility and insight (hereinafter, "enterprise business visibility"). A further object is to provide such methods and apparatus as can rapidly and accurately retrieve information responsive to user inquiries.
A further object of the invention is to provide such methods and apparatus as can be readily and inexpensively integrated with legacy, current and future database management systems.
A still further object of the invention is to provide such methods and apparatus as can be implemented incrementally or otherwise without interruption of enterprise operation.
Yet a still further object of the invention is to provide such methods and apparatus as to facilitate ready access to up-to-date enterprise data, regardless of its underlying source.
Yet still a further object of the invention is to provide such methods and apparatus as permit flexible presentation of enterprise data in an easily understood manner.
Summary of the Invention
The aforementioned are among the objects attained by the invention, one aspect of which provides a method of time-wise data reduction that includes the steps of inputting data from a source; summarizing that data according to one or more selected epochs in which it belongs; and generating for each such selected epoch one or more RDF triples characterizing the summarized data. The data source may be, for example, a database, a data stream or otherwise. The selected epoch may be a second, minute, hour, week, month, year, or so forth.
Further aspects of the invention provide a method as described above including the step of outputting the RDF triples in the form of RDF document objects. These can be stored, for example, in a hierarchical data store such as, for example, a WebDAV server.
Still further related aspects of the invention provide for parsing triples from the RDF document objects and storing them in a relational data store. A further related aspect of the invention provides for storing the triples in a relational store that is organized according to a hashed with origin approach.
Still yet other aspects of the invention provide for retrieving information represented by the triples in the hierarchical and/or relational data stores, e.g., for presentation to a user. Related aspects of the invention provide for retrieving triples containing time-wise reduced data, e.g., for presentation to a user.
Related aspects of the invention provide methods as described above including the step of summarizing the input data according to one or more epochs of differing length. Further aspects of the invention provide methods as described above including querying the source, e.g., a legacy database, in order to obtain the input data. Related aspects of the invention provide for generating such queries in SQL format.
Still other aspects of the invention provide methods as described above including the step of inputting an XML file that identifies one or more sources of input data, one or more fields thereof to be summarized in the time-wise reduction, and/or one or more epochs for which those fields are to be summarized.
Further aspects of the invention provide methods as described above including responding to an input datum by updating summary data for an epoch of the shortest duration, e.g., a store of per day data. Related aspects of the invention provide for updating a store of summary data for epochs of greater duration, e.g., stores of per week or per month data, from summary data maintained in a store for an epoch of lesser duration, e.g., a store of per day data.
These and other aspects of the invention are evident in the drawings and in the description that follows.
Brief Description of the Drawings
The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following detailed description of the drawings in which:
Figure 1 depicts an improved enterprise business visibility and insight system according to the invention;
Figure 1A depicts an architecture for a hologram data store according to the invention, e.g., in the system of Figure 1;
Figure 1B depicts the tables in a model store and a triples store of the hologram data store of Figure 1A;
Figure 2 depicts a directed graph representing data triples of the type maintained in a data store according to the invention.
Figure 3 is a functional block diagram of a time-wise data reduction module in a system according to the invention.
Detailed Description of the Illustrated Embodiment
Figure 1 depicts a real-time enterprise business visibility and insight system according to the invention. The illustrated system 100 includes connectors 108 that provide software interfaces to legacy, e-commerce and other databases 140 (hereinafter, collectively, "legacy databases"). A "hologram" database 114 (hereinafter, "data store" or "hologram data store"), which is coupled to the legacy databases 140 via the connectors 108, stores data from those databases 140. A framework server 116 accesses the data store 114, presenting selected data to (and permitting queries from) a user browser 118. The server 116 can also permit updates to data in the data store 114 and, thereby, in the legacy databases 140.
Legacy databases 140 represent existing (and future) databases and other sources of information (including data streams) in a company, organization or other entity (hereinafter
"enterprise"). In the illustration, these include a retail e-commerce database (e.g., as indicated by the cloud and server icons adjacent database 140c) maintained with a Sybase® database management system, an inventory database maintained with an Oracle® database management system and an ERP database maintained with a SAP® Enterprise Resource Planning system. Of course, these are merely examples of the variety of databases or other sources of information with which methods and apparatus as described herein can be used. Common features of illustrated databases 140 are that they maintain information of interest to an enterprise and that they can be accessed via respective software application program interfaces (API) or other mechanisms known in the art.
Connectors 108 serve as an interface to legacy database systems 140. Each connector applies requests to, and receives information from, a respective legacy database, using that database's API or other interface mechanism. Thus, for example, connector 108a applies requests to legacy database 140a using the corresponding SAP API; connector 108b, to legacy database 140b using Oracle API; and connector 108c, to legacy database 140c using the corresponding Sybase API.
In the illustrated embodiment, these requests are for purposes of accessing data stored in the respective databases 140. The requests can be simple queries, such as SQL queries and the like (e.g., depending on the type of the underlying database and its API) or more complex sets of queries, such as those commonly used in data mining. For example, one or more of the connectors can use decision trees, statistical techniques or other query and analysis mechanisms known in the art of data mining to extract information from the databases. Specific queries and analysis methodologies can be specified by the hologram data store 114 or the framework server 116 for application by the connectors. Alternatively, the connectors themselves can construct specific queries and methodologies from more general queries received from the data store 114 or server 116. For example, request-specific items can be "plugged" into query templates thereby effecting greater speed and efficiency.
Regardless of their origin, the requests can be stored in the connectors 108 for application and/or reapplication to the respective legacy databases 140 to provide one-time or periodic data store updates. Connectors can use expiration date information to determine which of a plurality of similar data to return to the data store, or if dates are absent, the connectors can mark returned data as being of lower confidence levels.
Data and other information (collectively, "messages") generated by the databases 140 in response to the requests are routed by connectors to the hologram data store 114. That other information can include, for example, expiry or other adjectival data for use by the data store in caching, purging, updating and selecting data. The messages can be cached by the connectors 108, though preferably they are immediately routed to the store 114.
The hologram data store 114 stores data from the legacy databases 140 (and from the framework server 116, as discussed below) as RDF triples. The data store 114 can be embodied on any digital data processing system or systems that are in communications coupling (e.g., as defined above) with the connectors 108 and the framework server 116. Typically, the data store 114 is embodied in a workstation or other high-end computing device with high-capacity storage devices or arrays, though this may not be required for any given implementation.
Though the hologram data store 114 may be contained on an optical storage device, this is not the sense in which the term "hologram" is used. Rather, it refers to its storage of data from multiple sources (e.g., the legacy databases 140) in a form which permits that data to be queried and coalesced from a variety of perspectives, depending on the needs of the user and the capabilities of the framework server 116.
To this end, a preferred data store 114 stores the data from the legacy databases 140 in subject-predicate-object form, e.g., RDF triples, though those of ordinary skill in the art will appreciate that other forms may be used as well, or instead. By way of background, RDF is a way of expressing the properties of items of data. Those items are referred to as subjects. Their properties are referred to as predicates. And, the values of those properties are referred to as objects. In RDF, an expression of a property of an item is referred to as a triple, a convenience reflecting that the expression contains three parts: subject, predicate and object. Subjects, also referred to as resources, can be anything that is described by an RDF expression. A subject can be a person, place or thing — though, typically, only an identifier of the subject is used in an actual RDF expression, not the person, place or thing itself. Examples of subjects might be "car," "Joe," "http://www.metatomix.com."
A predicate identifies a property of a subject. According to the RDF specification, this may be any "specific aspect, characteristic, attribute, or relation used to describe a resource." For the three exemplary subjects above, examples of predicates might be "make," "citizenship," "owner."
An object gives a "value" of a property. These might be "Ford," "United Kingdom," "Metatomix, Inc." for the subjects and predicates given in the prior paragraphs, forming the following RDF triples:
Subject                  Predicate        Object
"car"                    "make"           "Ford"
"Joe"                    "citizenship"    "United Kingdom"
"http://metatomix.com"   "owner"          "Metatomix, Inc."
Objects can be literals, i.e., strings that identify or name the corresponding property (predicate). They can also be resources. In the example above, rather than merely the string "Metatomix, Inc.," further triples may be specified — presumably, ones identifying that company in the subject and giving details in predicates and objects.
A given subject may have multiple predicates, each predicate indexing an object. For example, a subject postal zip code might have an index to an object town and an index to an object state, either (or both) index being a predicate URI.
Listed below is a portion of a data set of the type with which the invention can be practiced. The listing contains RDF triples, here, expressed in extensible markup language (XML) syntax. Those skilled in the art will, of course, appreciate that RDF triples can be expressed in other syntaxes and that the teachings hereof are equally applicable to those syntaxes. Further, the listing shows only a sampling of the triples in a database 114, which typically would contain tens of thousands or more of such triples.
<rdf:RDF...xmlns="http://www.metatomix.com/postalCode/1.0#>
<rdf:Description rdf:about="postal://zip#02886">
<town>Warwick</town>
<state>RI</state>
<country>USA</country>
<zip>02886</zip> </rdf:Description>
<rdf:Description rdf:about="postal://zip#02901">
<town>Providence</town>
<state>RI</state>
<country>USA</country>
<zip>02901</zip> </rdf:Description>
Subjects are indicated within the listing using an "rdf:about" statement. For example, the second line of the listing defines a subject as a resource named "postal://zip#02886." That subject has predicates and objects that follow the subject declaration.
One predicate, <town>, is associated with a value "Warwick". Another predicate, <state>, is associated with a value "RI". The same follows for the predicates <country> and <zip>, which are associated with values "USA" and "02886," respectively. Similarly, the listing shows properties for the subject "postal://zip#02901," namely, <town> "Providence," <state> "RI," <country> "USA" and <zip> "02901."
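By way of illustration, triples such as those just described can be pulled out of RDF/XML with an ordinary XML parser. The sketch below uses only a generic parser and a trimmed copy of the listing; a production system would use a full RDF parser, and the helper name `extract_triples` is invented here.

```python
# Illustrative sketch: extracting subject-predicate-object triples from an
# RDF/XML fragment like the listing above, using only the standard library.
import xml.etree.ElementTree as ET

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
DOC = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns="http://www.metatomix.com/postalCode/1.0#">
  <rdf:Description rdf:about="postal://zip#02886">
    <town>Warwick</town>
    <state>RI</state>
  </rdf:Description>
</rdf:RDF>"""

def extract_triples(xml_text):
    triples = []
    root = ET.fromstring(xml_text)
    for desc in root.findall(RDF_NS + "Description"):
        subject = desc.get(RDF_NS + "about")
        for child in desc:
            # child.tag arrives as "{namespace}fragment"; it is kept whole
            # as the predicate URI, per the namespace-suffix rule below.
            triples.append((subject, child.tag, child.text))
    return triples

for t in extract_triples(DOC):
    print(t)
```

Each resulting tuple is one subject-predicate-object triple, with the predicate carried as a full namespace-qualified URI.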
In the listing, the subjects and predicates are expressed as uniform resource identifiers (URIs), e.g., of the type defined in Berners-Lee et al., Uniform Resource Identifiers (URI): Generic Syntax (RFC 2396) (August 1998), and can be said to be expressed in a form <scheme>://<path>#<fragment>. For the subjects given in the example, <scheme> is "postal," <path> is "zip," and <fragment> is, for example, "02886" and "02901."
The predicates, too, are expressed in the form <scheme>://<path>#<fragment>, as is evident to those of ordinary skill in the art. In accord with XML syntax, the predicates in lines two, et seq., of the listing must be interpreted as suffixes to the string provided in the namespace directive "xmlns=http://www.metatomix.com/postalCode/1.0#" in line one of the listing. This results in predicates that are formally expressed as: "http://www.metatomix.com/postalCode/1.0#town," "http://www.metatomix.com/postalCode/1.0#state," "http://www.metatomix.com/postalCode/1.0#country" and "http://www.metatomix.com/postalCode/1.0#zip." Hence, the <scheme> for the predicates is "http" and <path> is "www.metatomix.com/postalCode/1.0." The <fragment> portions are <town>, <state>, <country> and <zip>, respectively. It is important to note that the listing is in some ways simplistic in that each of its objects is a literal value. Commonly, an object may itself be another subject, with its own objects and predicates. In such cases, a resource can be both a subject and an object, e.g., an object to all "upstream" resources and a subject to all "downstream" resources and properties. Such "branching" allows for complex relationships to be modeled within the RDF triple framework.
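The <scheme>://<path>#<fragment> decomposition described above can be sketched in a few lines; the helper name `split_uri` is illustrative, not from the specification.

```python
# Minimal sketch of splitting a URI into the <scheme>, <path> and
# <fragment> components described in the text.
def split_uri(uri):
    scheme, rest = uri.split("://", 1)
    path, _, fragment = rest.partition("#")
    return scheme, path, fragment

print(split_uri("postal://zip#02886"))
# -> ('postal', 'zip', '02886')
print(split_uri("http://www.metatomix.com/postalCode/1.0#town"))
# -> ('http', 'www.metatomix.com/postalCode/1.0', 'town')
```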
Figure 2 depicts a directed graph composed of RDF triples of the type stored by the illustrated data store 114, here, by way of non-limiting example, triples representing relationships among four companies (id#l, id#2, id#3 and id#4) and between two of those companies (id#l and id#2) and their employees. Per convention, subjects and resource-type objects are depicted as oval-shaped nodes; literal-type objects are depicted as rectangular nodes; and predicates are depicted as arcs connecting those nodes.
Figure 1A depicts an architecture for a preferred hologram data store 114 according to the invention. The illustrated store 114 includes a model document store 114A and a model document manager 114B. It also includes a relational triples store 114C, a relational triples store manager 114D, and a parser 114E interconnected as shown in the drawing.
As indicated in the drawing, RDF triples maintained by the store 114 are received -- from the legacy databases 140 (via connectors 108) and/or from time-based data reduction module 150 (described below) -- in the form of document objects, e.g., of the type generated from a Document Object Model (DOM) in a JAVA, C++ or other application. In the illustrated embodiment, these are stored in the model document store 114A as such (i.e., document objects), particularly, using the tables and inter-table relationships shown in Figure 1B (see dashed box labelled 114B).
The model document manager 114B manages storage/retrieval of the document object to/from the model document store 114A. In the illustrated embodiment, the manager 114B comprises the Slide content management and integration framework, publicly available through the Apache Software Foundation. It stores (and retrieves) document objects to (and from) the store 114A in accord with the WebDAV protocol. Those skilled in the art will, of course, appreciate that other applications can be used in place of Slide and that document objects can be stored/retrieved from the store 114A in accord with other protocols, industry-standard, proprietary or otherwise. However, use of the WebDAV protocol allows for adding, updating and deleting RDF document objects using a variety of WebDAV client tools (e.g., Microsoft Windows Explorer, Microsoft Office, XML Spy or other such tools available from a variety of vendors), in addition to adding, updating and deleting document objects via connectors 108 and/or time-based data reduction module 150. This also allows for presenting the user with a view of a traversable file system, with RDF documents that can be opened directly in XML editing tools or from Java programs supporting WebDAV protocols, or from processes on remote machines via the HTTP protocol on which WebDAV is based.
RDF triples received by the store 114 are also stored to a relational database, here, store
114C, that is managed and accessed by a conventional relational database management system (RDBMS) 114D operating in accord with the teachings hereof. In that database, the triples are divided into their constituent components (subject, predicate, and object), which are indexed and stored to respective tables in the manner of a "hashed with origin" approach. Whenever an RDF document is added, updated or deleted, a parser 114E extracts its triples and conveys them to the RDBMS 114D with a corresponding indicator that they are to be added, updated or deleted from the relational database. Such a parser 114E operates in the conventional manner known in the art for extracting triples from RDF documents.
The illustrated database store 114C has five tables interrelated as particularly shown in
Figure 1B (see dashed box labelled 114C). In general, these tables rely on indexes generated by hashing the triples' respective subjects, predicates and objects using a 64-bit hashing algorithm based on cyclical redundancy codes (CRCs) -- though, it will be appreciated that the indexes can be generated by other techniques as well, industry-standard, proprietary or otherwise.
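The specification names a 64-bit, CRC-based hashing algorithm but does not give its construction. The stand-in below simply packs two 32-bit CRCs (computed with different seeds) into one 64-bit value, purely to illustrate deriving a fixed-width index from a resource string; it is an assumption, not the actual algorithm.

```python
# Illustrative 64-bit, CRC-based hash of the kind the text describes for
# indexing subjects, predicates and objects. NOT the patented algorithm.
import zlib

def hash64(value: str) -> int:
    data = value.encode("utf-8")
    hi = zlib.crc32(data)                  # CRC with the default seed
    lo = zlib.crc32(data, 0xFFFFFFFF)      # CRC with a different seed
    return (hi << 32) | lo                 # pack into a 64-bit integer

subj = hash64("postal://zip#02886")
pred = hash64("http://www.metatomix.com/postalCode/1.0#town")
print(hex(subj), hex(pred))
```

A triples-table record would then carry three such hashes (subject, predicate, object) plus the document hash, rather than the string values themselves.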
Referring to Figure 1B, the "triples" table 534 maintains one record for each stored triple. Each record contains an aforementioned hash code for each of the subject, predicate and object that make up the respective triple, along with a resource flag ("resource_flg") indicating whether that object is of the resource or literal type. Each record also includes an aforementioned hash code ("m_hash") identifying the document object (stored in model document store 114A) from which the triple was parsed, e.g., by parser 114E.
In the illustrated embodiment, the values of the subjects, predicates and objects are not stored in the triples table. Rather, those values are stored in the resources table 530, namespaces table 532 and literals table 536. Particularly, the resources table 530, in conjunction with the namespaces table 532, stores the subjects, predicates and resource-type objects; whereas, the literals table 536 stores the literal-type objects. The resources table 530 maintains one record for each unique subject, predicate or resource-type object. Each record contains the value of the resource, along with its aforementioned 64-bit hash. It is the latter on which the table is indexed. To conserve space, portions of those values common to multiple resources (e.g., common <scheme>://<path> identifiers) are stored in the namespaces table 532. Accordingly, the field "r_value" contained in each record of the resources table 530 reflects only the unique portion (e.g., <fragment> identifier) of each resource.
The namespaces table 532 maintains one record for each unique common portion referred to in the prior paragraph (hereinafter, "namespace"). Each record contains the value of that namespace, along with its aforementioned 64-bit hash. As above, it is the latter on which this table is indexed.
The literals table 536 maintains one record for each unique literal-type object. Each record contains the value of the object, along with its aforementioned 64-bit hash. Each record also includes an indicator of the type of that literal (e.g., integer, string, and so forth). Again, it is the latter on which this table is indexed.
The models table 538 maintains one record for each RDF document object contained in the model document store 114A. Each record contains the URI of the corresponding document object ("uri_string"), along with its aforementioned 64-bit hash ("m_hash"). It is the latter on which this table is indexed. To facilitate associating document objects identified in the models table 538 with document objects maintained by the model document store 114A, each record of the models table 538 also contains the ID of the corresponding document object in the store 114A. That ID can be assigned by the model document manager 114B, or otherwise.
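The five-table layout just described might be declared roughly as follows. This SQLite sketch is an illustration only: the column names follow the text and Figure 1B as quoted, while the column types and the `doc_id` column name are guesses.

```python
# Sketch of the five-table "hashed with origin" layout described above,
# using SQLite purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE namespaces (n_hash INTEGER PRIMARY KEY, n_value TEXT);
CREATE TABLE resources  (r_hash INTEGER PRIMARY KEY, n_hash INTEGER,
                         r_value TEXT);
CREATE TABLE literals   (l_hash INTEGER PRIMARY KEY, l_value TEXT,
                         l_type TEXT);
CREATE TABLE models     (m_hash INTEGER PRIMARY KEY, uri_string TEXT,
                         doc_id TEXT);
CREATE TABLE triples    (subject INTEGER, predicate INTEGER,
                         object INTEGER, resource_flg INTEGER,
                         m_hash INTEGER);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

Every reference among the tables is by 64-bit hash, which is what makes the reconstitution query shown below possible.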
From the above, it can be appreciated that the relational triples store 114C is a schema-less structure for storing RDF triples. As suggested by Melnik, supra, triples maintained in that store can be reconstituted via an SQL query. For example, to reconstitute the RDF triple having a subject equal to "postal://zip#02886", a predicate equal to "http://www.metatomix.com/postalCode/1.0#town", and an object equal to "Warwick", the following SQL statement is applied:
SELECT m.uri_string, t.resource_flg,
       concat(n1.n_value, r1.r_value) AS subj,
       concat(n2.n_value, r2.r_value) AS pred,
       concat(n3.n_value, r3.r_value),
       l.l_value
FROM triples t, models m, resources r1, resources r2, namespaces n1, namespaces n2
LEFT JOIN literals l ON t.object = l.l_hash
LEFT JOIN resources r3 ON t.object = r3.r_hash
LEFT JOIN namespaces n3 ON r3.n_hash = n3.n_hash
WHERE t.subject = r1.r_hash AND r1.n_hash = n1.n_hash AND
      t.predicate = r2.r_hash AND r2.n_hash = n2.n_hash AND
      m.m_hash = t.m_hash AND
      t.subject = hash('postal://zip#02886') AND
      t.predicate = hash('http://www.metatomix.com/postalCode/1.0#town') AND
      t.object = hash('Warwick')
Those skilled in the art will, of course, appreciate that RDF documents and, more generally, objects maintained in the store 114 can be contained in other stores -- structured relationally, hierarchically or otherwise -- as well, in addition to or instead of stores 114A and 114C.
Referring to Figure 3, time-wise data reduction component 150 comprises an XML parser 504, a query module 506, an analysis module 507 and an output module 508. The component 150 performs a time-wise reduction on data from the legacy databases 140. In some embodiments, that data is supplied to the component 150 by the connectors 108 in the form of RDF documents. In the illustrated embodiment, the component 150 functions, in part, like a connector itself — obtaining data directly from the legacy databases 140 before time-wise reducing it.
Regardless, illustrated component 150 outputs the reduced data in the form of RDF triples contained in RDF documents. In the illustrated embodiment, these are stored in the model store 114A (and the underlying triples, in relational store 114C), alongside the RDF documents (and their respective underlying triples) from which the reduced data was generated. This facilitates, for example, reporting of the time-wise reduced data, e.g., by the framework server 116, since that data is readily available for display to the user and does not require ad hoc generation of data summaries in response to user requests.
Module 504 parses an XML file 502 which specifies one or more sources of data to be time-wise reduced. That file may be supplied by the framework server 116, or otherwise. The specified sources may be legacy databases 140, data streams, or otherwise. They may also be connectors 108, e.g., identified by symbolic name, virtual port number, or otherwise. Along with the data source identifier(s), the XML specification file 502 specifies the data items which are to be time-wise reduced. These can be field names, identifiers or otherwise.
The XML file 502 further specifies the time periods or epochs over which data is to be time-wise reduced. These can be seconds, minutes, hours, days, months, weeks, years, and so forth, depending on the type of data to be reduced. For example, if the data source contains hospital patient data, the specified epochs may be weeks and months; whereas, if the data source contains web site access data, the specified epochs may be hours and days.
The parser component 504 parses the XML file 502 to discern the aforementioned data source identifiers, field identifiers, and epochs. To this end, the parser 504 may be constructed and operated in the conventional manner known in the art.
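The text does not give the schema of specification file 502, so the sketch below invents a plausible layout (`<source>`, `<field>` and `<epoch>` elements) purely to illustrate the parse step performed by parser 504.

```python
# Hypothetical sketch of parsing an XML specification file like file 502:
# data source identifiers, field identifiers and epochs. Element names
# are invented for illustration; the actual schema is not disclosed.
import xml.etree.ElementTree as ET

SPEC = """<reduction>
  <source>weblog</source>
  <field>hits</field>
  <epoch>day</epoch>
  <epoch>month</epoch>
</reduction>"""

def parse_spec(xml_text):
    root = ET.fromstring(xml_text)
    return {
        "sources": [e.text for e in root.findall("source")],
        "fields":  [e.text for e in root.findall("field")],
        "epochs":  [e.text for e in root.findall("epoch")],
    }

print(parse_spec(SPEC))
```

The resulting dictionary is what the query module and analysis module (described next) would consume.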
The query module 506 generates queries in order to obtain the fields specified in the XML specification file 502. It queries the identified data source(s) in the manner appropriate to those sources. For example, the module queries SQL-compatible databases using an SQL query. Other data sources are queried via their respective applications program interfaces (APIs), or otherwise. In embodiments where source data is supplied to the component 150 by the connectors 108, querying may be performed explicitly or implicitly by those connectors 108. Moreover, querying might not need to be performed on some data sources, e.g., data streams, from which data is broadcast or otherwise available without the need for request. In such instances, filtering may be substituted for querying in order that the specific fields or other items of data specified in the XML file are obtained.
The analysis module 507 compiles time-wise statistics or summaries for each epoch specified in the XML file 502. To this end, it maintains for each such epoch one or more running statistics (e.g., sums or averages) for each data field specified by the file 502 and received from the sources. As each datum for a field is input, the running statistics for that field are updated. Such updating can include incrementing a count maintained for the field, recomputing a numerical total, modifying a concatenated string, and so forth, as appropriate to the type of the underlying field data.
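The per-field running statistics described above might look like the following sketch; the class and its members (`count`, `total`, `average`) are illustrative choices, not the module's actual design.

```python
# Sketch of running statistics kept per (epoch, field), updated as each
# datum arrives, in the spirit of analysis module 507.
from collections import defaultdict

class RunningStats:
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        # Incremental update: no need to retain the raw data points.
        self.count += 1
        self.total += value

    @property
    def average(self):
        return self.total / self.count if self.count else 0.0

stats = defaultdict(RunningStats)  # keyed by (epoch, field)
for value in (120.0, 80.0, 100.0):
    stats[("day", "response_ms")].update(value)

s = stats[("day", "response_ms")]
print(s.count, s.total, s.average)  # 3 300.0 100.0
```

When an epoch closes, the accumulator's values would be emitted as RDF triples by output module 508 rather than recomputed from raw data.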
By way of example, if the XML specification file 502 specifies that a summary of the number of "hits" of a web site is to be maintained on a per day basis, the analysis module 507 would maintain a store reflecting the number of hits thus far counted on a given day for that web site (e.g., based on data received from a source identifying each hit as it occurs, or otherwise). When no further data is received from the source for that day, the module generates RDF output (via the output module 508) reflecting that number of counts (or other specified summary information) for output to the hologram store 114.
If the XML file 502 additionally specifies that summary data of web site accesses is to be maintained on a per month basis, the analysis module 507 would maintain a separate store of counts for the month for which data is currently being received from the source. As above, when no further data is received from the source for that month, the module generates RDF output reflecting the total number of counts (or other specified summary information) for output to the hologram store 114.
As an alternative to simultaneously updating stores for each of multiple epochs as new data is received, other embodiments of the invention increment (or otherwise update) the store for the epoch of shortest relevant duration (e.g., the per day store) as each such data item is received. Additional stores reflecting epochs of longer duration (e.g., the per month store) are only updated as those for the shorter duration epochs are completed.
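That alternative update discipline can be sketched roughly as follows; the class and method names are invented, and only the discipline itself (touch the shortest-epoch store per item, fold into longer epochs on completion) is drawn from the text:

```python
class EpochCascade:
    def __init__(self, epochs):
        self.order = list(epochs)            # shortest epoch first, e.g. day, month
        self.stores = {e: 0 for e in epochs}

    def record(self, n=1):
        # each incoming data item touches only the shortest-epoch store
        self.stores[self.order[0]] += n

    def roll(self, epoch):
        """Close out `epoch`, folding its total into the next-longer store."""
        i = self.order.index(epoch)
        completed = self.stores[epoch]
        if i + 1 < len(self.order):
            self.stores[self.order[i + 1]] += completed
        self.stores[epoch] = 0
        return completed                     # the value to summarize as RDF triples

c = EpochCascade(["day", "month"])
for _ in range(5):
    c.record()
day_total = c.roll("day")                    # day completes; month now holds its total
```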
An analysis module 507 according to a preferred practice of the invention maintains stores for each epoch for which running statistics (i.e., time-wise summaries) are to be maintained. In order to accommodate the maintenance of running statistics for epochs from a plurality of sources, the stores 514 can be allocated from an array, a pointer table, or other data structure, with specific allocations made depending on the specific number of running statistics being tracked.
For example, if an XML file 502 specifies that access statistics are to be maintained for a web site on daily and monthly bases using data from a first data source, and that running statistics for the numbers of visitors to a retail store are to be maintained on monthly and yearly bases from data from a second data source, the analysis module 507 can maintain four stores: store 514A maintaining a daily count for the web site; store 514B maintaining a monthly count for the web site; store 514C maintaining a monthly count for the retail store; and store 514D maintaining a yearly count for the retail store. Each of the stores 514 is updated as corresponding data is received from the respective data sources.
Thus, continuing the above example, as data (in the form of records, packets, or so forth) are received from the first data source reflecting web site accesses on a given day, a count maintained in the first store 514A is incremented. When the received data begins to reflect accesses on the succeeding day, the output module 508 can generate one or more RDF triples reflecting a count for the (then-complete) prior day for storage in the hologram store 114. Concurrently, the store 514A can be reset to zero and the process restarted for tracking accesses on that succeeding day.
The second store 514B, i.e., that tracking the longer epoch for data from the first source, can be incremented in parallel with the first store 514A as web access data is received from the source or, alternatively, can be updated when the first store 514A is rolled over, i.e., reset for tracking statistics for each successive day. As above, when data received from the first source begins to reflect web accesses for a succeeding month (i.e., the period associated with the second store 514B), RDF triples can be generated to reflect web access statistics for the then-completed prior month, concurrently with zeroing the second store 514B for tracking of statistics for the succeeding month.
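Assuming, as the text does here, that data arrives ordered by day, the daily rollover of store 514A might be sketched like this; the URIs and predicate names are invented for illustration:

```python
def day_triples(site, day, count):
    # one summary triple per completed day; the vocabulary is hypothetical
    subject = "http://example.org/stats/%s/%s" % (site, day)
    return [(subject, "http://example.org/schema#hitCount", str(count))]

def reduce_hits(site, days):
    """days: an iterable of date strings, one per hit, ordered by day."""
    triples, count, current = [], 0, None
    for day in days:
        if current is not None and day != current:
            # received data now reflects the succeeding day: emit triples
            # for the then-complete prior day...
            triples.extend(day_triples(site, current, count))
            count = 0                        # ...and reset the daily store
        current = day
        count += 1
    if current is not None:
        triples.extend(day_triples(site, current, count))  # final epoch
    return triples

triples = reduce_hits("example.com", ["2002-11-20", "2002-11-20", "2002-11-21"])
```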
In this way, the analysis module 507 maintains running statistics for the epochs specified in the XML file 502, outputting RDF triples reflecting those statistics as data for each successive epoch is received. Those skilled in the art will appreciate that running statistics may be maintained in other ways, as well. For example, continuing the above example, in instances where data received from the first source is not received ordered by day (but, rather, is intermingled with respect to many days), multiple stores can be maintained, one for each day (or other epoch).
Referring again to FIG. 1A, the output module 508 generates RDF documents reflecting the summarized data stored in stores 514 for output to the hologram data store 114. This can be performed by generating an RDF stream ad hoc or, preferably, by utilizing native commands, e.g., of the Java programming language, to gather the epoch data into a document object model (DOM). In such a language, the DOM can be output in RDF format to the hologram store 114 directly.
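The gather-into-a-DOM-then-serialize approach can be illustrated with Python's standard DOM implementation in place of the Java facilities named in the text; the vocabulary URIs are invented:

```python
from xml.dom.minidom import getDOMImplementation

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def epoch_to_rdf(subject_uri, prop, value):
    """Gather one epoch summary into a DOM, then serialize it as RDF/XML."""
    doc = getDOMImplementation().createDocument(RDF_NS, "rdf:RDF", None)
    root = doc.documentElement
    root.setAttribute("xmlns:rdf", RDF_NS)
    root.setAttribute("xmlns:ex", "http://example.org/schema#")
    desc = doc.createElement("rdf:Description")
    desc.setAttribute("rdf:about", subject_uri)   # the epoch being summarized
    elem = doc.createElement("ex:" + prop)
    elem.appendChild(doc.createTextNode(str(value)))
    desc.appendChild(elem)
    root.appendChild(desc)
    return doc.toxml()                            # the document sent to store 114
```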
A more complete understanding of the store 114 may be attained by reference to the aforementioned incorporated-by-reference applications. Referring to copending, commonly assigned United States Patent Application Serial No. , filed this day herewith, entitled "Methods and Apparatus for Querying a Relational Data Store Using Schema-Less Queries," the teachings of which are incorporated herein by reference, the data store 114 supports SQL-like query languages called HxQL and HxML. These allow retrieval of RDF triples matching defined criteria.
The data store 114 includes a graph generator (not shown) that uses RDF triples to generate directed graphs in response to queries (e.g., in HxQL or HxML form) from the framework server 116. These may be queries for information reflected by triples originating from data in one or more of the legacy databases 140 (one example might be a request for the residence cities of hotel guests who booked reservations on account over Independence Day weekend, as reflected by data from an e-Commerce database and an Accounts Receivable database). Such generation of directed graphs from triples can be accomplished in any conventional manner known in the art (e.g., as appropriate to RDF triples or other manner in which the information is stored) or, preferably, in the manner described in co-pending, commonly assigned United States Patent Application Serial No. 10/138,725, filed May 3, 2002, entitled METHODS AND APPARATUS FOR VISUALIZING RELATIONSHIPS AMONG TRIPLES OF RESOURCE DESCRIPTION FRAMEWORK (RDF) DATA SETS and Serial No. 60/416,616, filed October 7, 2002, entitled METHODS AND APPARATUS FOR IDENTIFYING RELATED NODES IN A DIRECTED GRAPH HAVING NAMED ARCS, the teachings of both of which are incorporated herein by reference. Directed graphs so generated are passed back to the server 116 for presentation to the user.
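In the simplest conventional manner, a directed graph with named arcs can be assembled from triples as an adjacency map. This is a bare sketch, not the method of the incorporated applications, and the example triples are invented:

```python
from collections import defaultdict

def build_graph(triples):
    """Map each subject to its outgoing (predicate, object) arcs."""
    graph = defaultdict(list)
    for s, p, o in triples:
        graph[s].append((p, o))
    return graph

g = build_graph([
    ("guest1", "bookedOver", "July4Weekend"),   # e.g., from an e-Commerce source
    ("guest1", "residenceCity", "Boston"),      # e.g., from Accounts Receivable
])
```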
According to one practice of the invention, the data store 114 utilizes genetic, self-adapting algorithms to traverse the RDF triples in response to queries from the framework server 116. Though not previously known in the art for this purpose, such techniques can be beneficially applied to the RDF database which, due to its inherently flexible (i.e., schema-less) structure, is not readily searched using traditional search techniques. To this end, the data store utilizes a genetic algorithm that performs several searches, each utilizing a different methodology but all based on the underlying query from the framework server, against the RDF triples. It compares the results of the searches quantitatively to discern which produce(s) the best results and reapplies that search with additional terms or further granularity.
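A loose sketch of that compare-and-reapply loop follows: competing search methodologies are scored quantitatively and the winner is reapplied with an additional term. The strategies, data, and scoring below are all invented, and a full genetic algorithm would additionally mutate and recombine strategies, which is omitted here:

```python
def exact_match(triples, term):
    return [t for t in triples if t[2] == term]

def substring_match(triples, term):
    return [t for t in triples if term in t[2]]

def adaptive_search(triples, term, refine_term, strategies):
    # perform several searches, each with a different methodology...
    scored = [(len(s(triples, term)), s) for s in strategies]
    # ...compare the results quantitatively to discern the best...
    _, best = max(scored, key=lambda pair: pair[0])
    # ...and reapply that search with an additional term
    return best(triples, refine_term)

data = [
    ("s1", "ex:desc", "Boston hotel"),
    ("s2", "ex:desc", "Boston"),
    ("s3", "ex:desc", "hotel"),
]
result = adaptive_search(data, "Boston", "hotel", [exact_match, substring_match])
```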
Referring back to Figure 1, the framework server 116 generates requests to the data store 114 (and/or indirectly to the legacy databases via connectors 108, as discussed above) and presents information therefrom to the user via browser 118. The requests can be based on HxQL or HxML requests entered directly by the user though, preferably, they are generated by the server 116 based on user selections/responses to questions, dialog boxes or other user-input controls. In a preferred embodiment, the framework server includes one or more user interface modules, plug-ins, or the like, each for generating queries of a particular nature. One such module, for example, generates queries pertaining to marketing information, another such module generates queries pertaining to financial information, and so forth.
In some embodiments, queries to the data store are structured on an SQL-based RDF query language, in the general manner of SquishQL, as known in the art.
In addition to generating queries, the framework server (and/or the aforementioned modules) "walks" directed graphs generated by the data store 114 to present to the user (via browser 118) any specific items of requested information. Such walking of the directed graphs can be accomplished via any conventional technique known in the art. Presentation of questions, dialog boxes or other user-input controls to the user and, likewise, presentation of responses thereto based on the directed graph can be accomplished via conventional server/browser or other user interface technology.
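A minimal, conventional walk of such a directed graph, retrieving one requested item, might look like the following; the graph literal and arc names are invented:

```python
def walk(graph, start, predicate):
    """Depth-first walk from `start`, returning the first object found
    at the end of an arc named `predicate`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        for p, o in graph.get(node, []):
            if p == predicate:
                return o
            stack.append(o)
    return None

# a tiny directed graph: guest1 --booked--> res1 --city--> "Boston"
graph = {"guest1": [("booked", "res1")], "res1": [("city", "Boston")]}
```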
In some embodiments, the framework server 116 permits a user to update data stored in the data store 114 and, thereby, that stored in the legacy databases 140. To this end, changes made to data displayed by the browser 118 are transmitted by server 116 to data store 114. There, any triples implicated by the change are updated in store 114C, as are the corresponding RDF document objects in store 114A. An indication of these changes can be forwarded to the respective legacy databases 140, which utilize the corresponding API (or other interface mechanisms) to update their respective stores. (Likewise, changes made directly to the store 114C as discussed above, e.g., using a WebDAV client, can be forwarded to the respective legacy database.)
In some embodiments, the server 116 can present to the user not only data from the data store 114, but also data gleaned by the server directly from other sources. Thus, for example, the server 116 can directly query an enterprise web site for statistics regarding web page usage, or otherwise.
A further understanding of the operation of the framework server 116 may be attained by reference to the appendix filed with United States Patent Application Serial No. 09/917,264, filed July 27, 2001, and entitled "Methods and Apparatus for Enterprise Application Integration," which appendix is incorporated herein by reference.
Described herein are methods and apparatus meeting the above-mentioned objects. It will be appreciated that the illustrated embodiment is merely an example of the invention and that other embodiments, incorporating changes to those described herein, fall within the scope of the invention, of which we claim:

Claims

1. A method of time-wise data reduction and storage, comprising
A. inputting data from at least one source,
B. summarizing that data according to a specified epoch in which it belongs,
C. generating for each such epoch, one or more RDF triples characterizing the summarized data.
2. The method of claim 1, comprising outputting the RDF triples in one or more RDF document objects.
3. The method of claim 2, comprising storing the RDF document objects in a hierarchical data store.
4. The method of claim 3, comprising storing the RDF document objects in accord with a WebDAV protocol.
5. The method of claim 1, comprising storing the RDF triples in a relational data store.
6. The method of claim 5, comprising storing the RDF triples in a relational data store organized according to a hashed with origin approach.
7. A method of time-wise data reduction and storage, comprising
A. querying one or more data sources,
B. summarizing data received from the data sources in response to querying, where the data is summarized by selected epoch,
C. generating for each such epoch, one or more RDF triples characterizing the summarized data,
D. storing the RDF triples to one or more data stores, along with further RDF triples characterizing the data from which the summaries were generated, where the one or more data stores include any of a hierarchical data store and a relational data store.
8. The method of claim 7, comprising summarizing data received from the data sources with respect to multiple epochs of differing length.
9. The method of claim 7, comprising querying one or more data sources in an SQL format.
10. The method of claim 7, comprising parsing an XML file that identifies one or more of the data sources, one or more fields thereof to be summarized, and/or one or more epochs for which those fields are to be summarized.
11. The method of claim 7, comprising responding to data received from a data source by updating a store associated with an epoch of shorter duration.
12. The method of claim 11, comprising updating a store associated with an epoch of longer duration based on information maintained in an epoch of shorter duration.
13. A method of time-wise data reduction and storage, comprising
A. at least one of querying and filtering data from one or more data sources,
B. summarizing the data received in one or more selected epochs of differing length,
C. generating RDF document objects comprising one or more RDF triples characterizing the summarized data,
D. storing the RDF documents to a first, hierarchical data store,
E. storing the triples therein to a second, relational data store.
14. The method of claim 13, comprising querying one or more data sources in an SQL format.
15. The method of claim 13, comprising parsing an XML file that identifies one or more of the data sources, one or more fields thereof to be summarized, and/or one or more epochs for which those fields are to be summarized.
16. The method of claim 13, comprising responding to data received from a data source by updating a store associated with an epoch of shorter duration.
17. The method of claim 16, comprising updating a store associated with an epoch of longer duration based on information maintained in an epoch of shorter duration.
18. The method of claim 13, comprising generating a display or other presentation based on the RDF triples characterizing the summarized data.
PCT/US2002/037727 2001-11-21 2002-11-21 Methods and apparatus for statistical data analysis WO2003046769A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA002471468A CA2471468A1 (en) 2001-11-21 2002-11-21 Methods and apparatus for statistical data analysis
EP02791310A EP1483688A1 (en) 2001-11-21 2002-11-21 Methods and apparatus for statistical data analysis
AU2002365577A AU2002365577A1 (en) 2001-11-21 2002-11-21 Methods and apparatus for statistical data analysis

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US33205301P 2001-11-21 2001-11-21
US33221901P 2001-11-21 2001-11-21
US60/332,053 2001-11-21
US60/332,219 2001-11-21

Publications (1)

Publication Number Publication Date
WO2003046769A1 true WO2003046769A1 (en) 2003-06-05



