Publication number: US 20080286742 A1
Publication type: Application
Application number: US 12/170,356
Publication date: Nov. 20, 2008
Filing date: Jul. 9, 2008
Priority date: Apr. 6, 2004
Also published as: US7418458, US20050222799, WO2005101244A2, WO2005101244A3
Inventors: Daniel Bolt, Jianbin Fu
Original assignee: Daniel Bolt, Jianbin Fu
External links: USPTO, USPTO Assignment, Espacenet
Method for estimating examinee attribute parameters in a cognitive diagnosis model
US 20080286742 A1
Abstract
A method and system for determining attribute score levels from an assessment are disclosed. An assessment includes items each testing for at least one attribute. A first distribution is generated having a response propensity represented by a highest level of execution for each attribute tested by the item. An item threshold is determined for at least one score for the first distribution. Each item threshold corresponds to a level of execution corresponding to the score for which the item threshold is determined. For each attribute tested by the item, a second distribution is generated having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item. A mean parameter is determined for the second distribution. An attribute score level is determined for the scores based on the item thresholds and the mean parameters.
Images (4)
Claims (22)
1. A method for determining attribute score levels from an assessment, the method comprising:
for at least one item on the assessment, wherein the item tests for at least one attribute:
generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item;
for at least one score, determining an item threshold for the first distribution corresponding to a level of execution corresponding to the score;
for at least one attribute tested by the item:
generating a second distribution having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, and
determining a mean parameter for the second distribution; and
determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
2. The method of claim 1 wherein the first distribution comprises a standard normal distribution.
3. The method of claim 1 wherein the item threshold for a first distribution corresponding to a first score is selected from a uniform distribution defined by Unif(−5, 5).
4. (canceled)
5. The method of claim 1 wherein the first distribution comprises a first distribution mean parameter, and wherein the first distribution mean parameter is greater than the mean parameter for each second distribution.
6. The method of claim 1 wherein the item threshold corresponding to a first score is greater than an item threshold corresponding to a second score if the first score is greater than the second score.
7. The method of claim 1 wherein a second distribution comprises a standard normal distribution.
8. The method of claim 1 wherein the mean parameter for a second distribution is less than 0.
9. The method of claim 1 wherein the mean parameter for a second distribution is selected from a uniform distribution defined by Unif(−10, 0).
10. (canceled)
11. (canceled)
12. (canceled)
13. A method for determining one or more examinee attribute mastery levels from an assessment, the method comprising:
receiving a covariate vector for an examinee, wherein the covariate vector includes a value for each of one or more covariates for the examinee; and
for each of one or more attributes:
computing an examinee attribute value based on at least the covariate vector and one or more responses made by the examinee to one or more questions pertaining to the attribute on an assessment, and
assigning an examinee attribute mastery level for the examinee with respect to the attribute based on whether the examinee attribute value surpasses one or more thresholds.
14. (canceled)
15. (canceled)
16. A system for determining attribute score levels from an assessment, the system comprising:
a processor; and
a processor-readable storage medium in communication with the processor,
wherein the processor-readable storage medium contains one or more programming instructions for performing a method of determining attribute score levels from an assessment, the method comprising:
for at least one item on the assessment, wherein the item tests for at least one attribute:
generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item,
for at least one score, determining an item threshold for the first distribution corresponding to a level of execution corresponding to the score,
for at least one attribute tested by the item:
generating a second distribution having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, and
determining a mean parameter for the second distribution, and
determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
17. The system of claim 16 wherein the first distribution comprises a standard normal distribution.
18. The system of claim 16 wherein the first distribution comprises a first distribution mean parameter, and wherein the first distribution mean parameter is greater than the mean parameter for each second distribution.
19. The system of claim 16 wherein the item threshold corresponding to a first score is greater than an item threshold corresponding to a second score if the first score is greater than the second score.
20. The system of claim 16 wherein a second distribution comprises a standard normal distribution.
21. The system of claim 16 wherein the mean parameter for a second distribution is less than 0.
22. (canceled)
Description
    RELATED APPLICATIONS AND CLAIM OF PRIORITY
  • [0001]
    This application claims priority to, and incorporates herein by reference, U.S. provisional patent application No. 60/559,922, entitled “A Polytomous Extension of the Fusion Model and Its Bayesian Parameter Estimation,” filed Apr. 6, 2004, and parent application U.S. patent application Ser. No. 11/100,364, entitled “Method For Estimating Examinee Attribute Parameters In A Cognitive Diagnosis Model,” filed Apr. 6, 2005, of which it is a continuation.
  • TECHNICAL FIELD
  • [0002]
    The embodiments disclosed herein generally relate to the field of assessment evaluation. The embodiments particularly relate to methods for evaluating assessment examinees on a plurality of attributes based on responses to assessment items.
  • BACKGROUND
  • [0003]
    Standardized testing is prevalent in the United States today. Such testing is often used for higher education entrance examinations and achievement testing at the primary and secondary school levels. The prevalence of standardized testing in the United States has been further bolstered by the No Child Left Behind Act of 2001, which emphasizes nationwide test-based assessment of student achievement.
  • [0004]
    The typical focus of research in the field of assessment measurement and evaluation has been on methods of item response theory (IRT). A goal of IRT is to optimally order examinees along a low-dimensional (typically unidimensional) latent continuum based on the examinees' responses and the characteristics of the test items. The ordering of examinees is done via a set of latent variables presupposed to measure ability. The item responses are generally considered to be conditionally independent of each other.
  • [0005]
    The typical IRT application uses a test to estimate an examinee's set of abilities (such as verbal ability or mathematical ability) on a continuous scale. An examinee receives a scaled score (a latent trait scaled to some easily understood metric) and/or a percentile rank. The final score (an ordering of examinees along a latent dimension) is used as the standardized measure of competency for an area-specific ability.
  • [0006]
    Although achieving a partial ordering of examinees remains an important goal in some settings of educational measurement, the practicality of such methods is questionable in common testing applications. For each examinee, the process of acquiring the knowledge that each test purports to measure seems unlikely to occur via this same low dimensional approach of broadly defined general abilities. This is, at least in part, because such testing can only assess a student's abilities generally, but cannot adequately determine whether a student has mastered a particular ability or not.
  • [0007]
    Because of this limitation, cognitive modeling methods, also known as skills assessment or skills profiling, have been developed for assessing students' abilities. Cognitive diagnosis statistically analyzes the process of evaluating each examinee on the basis of the level of competence on an array of skills and using this evaluation to make relatively fine-grained categorical teaching and learning decisions about each examinee. Traditional educational testing, such as the use of an SAT score to determine overall ability, performs summative assessment. In contrast, cognitive diagnosis performs formative assessment, which partitions answers for an assessment examination into fine-grained (often discrete or dichotomous) cognitive skills or abilities in order to evaluate an examinee with respect to his level of competence for each skill or ability. For example, if a designer of an algebra test is interested in evaluating a standard set of algebra attributes, such as factoring, laws of exponents, quadratic equations and the like, cognitive diagnosis attempts to evaluate each examinee with respect to each such attribute. In contrast, summative analysis simply evaluates each examinee with respect to an overall score on the algebra test.
  • [0008]
    Numerous cognitive diagnosis models have been developed to attempt to estimate examinee attributes. In cognitive diagnosis models, the atomic components of ability, the specific, finely grained skills (e.g., the ability to multiply fractions, factor polynomials, etc.) that together comprise the latent space of general ability, are referred to as attributes. Due to the high level of specificity in defining attributes, an examinee in a dichotomous model is regarded as either a master or non-master of each attribute. The space of all attributes relevant to an examination is represented by the set {α1, . . . , αK}. Given a test with items i=1, . . . , I, the attributes necessary for each item can be represented in a matrix of size I×K. This matrix is referred to as a Q-matrix having values Q={qik}, where qik=1 when attribute k is required by item i and qik=0 when attribute k is not required by item i. Typically, the Q-matrix is constructed by experts and is pre-specified at the time of the examination analysis.
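The Q-matrix bookkeeping described above is easy to make concrete. The following sketch (illustrative values only, not taken from the disclosure) represents a small Q-matrix as a NumPy array:

```python
import numpy as np

# Hypothetical Q-matrix for a 4-item test over K = 3 attributes
# (rows are items i, columns are attributes k; q_ik = 1 when
# attribute k is required by item i):
Q = np.array([
    [1, 0, 0],   # item 1 requires attribute 1 only
    [1, 1, 0],   # item 2 requires attributes 1 and 2
    [0, 0, 1],   # item 3 requires attribute 3 only
    [1, 0, 1],   # item 4 requires attributes 1 and 3
])

# Attributes required by item 2 (0-indexed row 1):
required_by_item_2 = np.flatnonzero(Q[1])
```

In practice the entries would be supplied by subject-matter experts prior to analysis, as the paragraph above notes.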
  • [0009]
    Cognitive diagnosis models can be sub-divided into two classifications: compensatory models and conjunctive models. Compensatory models allow for examinees who are non-masters of one or more attributes to compensate by being masters of other attributes. An exemplary compensatory model is the common factor model. High scores on some factors can compensate for low scores on other factors.
  • [0010]
    Numerous compensatory cognitive diagnosis models have been proposed including: (1) the Linear Logistic Test Model (LLTM) which models cognitive facets of each item, but does not provide information regarding the attribute mastery of each examinee; (2) the Multicomponent Latent Trait Model (MLTM) which determines the attribute features for each examinee, but does not provide information regarding items; (3) the Multiple Strategy MLTM which can be used to estimate examinee performance for items having multiple solution strategies; and (4) the General Latent Trait Model (GLTM) which estimates characteristics of the attribute space with respect to examinees and item difficulty.
  • [0011]
    Conjunctive models, on the other hand, do not allow for compensation when critical attributes are not mastered. Such models more naturally apply to cognitive diagnosis due to the cognitive structure defined in the Q-matrix and will be considered herein. Such conjunctive cognitive diagnosis models include: (1) the DINA (deterministic inputs, noisy “AND” gate) model which requires the mastery of all attributes by the examinee for a given examination item; (2) the NIDA (noisy inputs, deterministic “AND” gate) model which decreases the probability of answering an item for each attribute that is not mastered; (3) the Disjunctive Multiple Classification Latent Class Model (DMCLCM) which models the application of non-mastered attributes to incorrectly answered items; (4) the Partially Ordered Subset Models (POSET) which include a component relating the set of Q-matrix defined attributes to the items by a response model and a component relating the Q-matrix defined attributes to a partially ordered set of knowledge states; and (5) the Unified Model which combines the Q-matrix with terms intended to capture the influence of incorrectly specified Q-matrix entries.
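As one concrete illustration, the DINA model listed in (1) is commonly written with a slip parameter s_i and a guessing parameter g_i (standard DINA notation, assumed here rather than defined in this disclosure): an examinee who has mastered every attribute the item requires answers correctly with probability 1 − s_i, and otherwise with probability g_i. A minimal sketch:

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """DINA model: the ideal response eta is 1 only when the examinee has
    mastered every attribute the item requires (conjunctive 'AND' gate);
    such examinees answer correctly with probability 1 - slip, all others
    with probability guess."""
    eta = float(np.all(alpha >= q))
    return (1.0 - slip) ** eta * guess ** (1.0 - eta)

q = np.array([1, 1, 0])  # item requires attributes 1 and 2
p_master = dina_prob(np.array([1, 1, 0]), q, slip=0.1, guess=0.2)     # 1 - slip
p_nonmaster = dina_prob(np.array([1, 0, 1]), q, slip=0.1, guess=0.2)  # guess
```

Note the conjunctive character: mastery of attribute 3, which the item does not require, cannot compensate for non-mastery of attribute 2.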
  • [0012]
    The Unified Model specifies the probability of correctly answering an item Xij for a given examinee j, item i, and set of attributes k=1, . . . , K as:
  • [0000]

    $$P(X_{ij}=1 \mid \alpha_j, \theta_j) = (1-p)\left[\, d_i \prod_{k=1}^{K} \pi_{ik}^{\alpha_{jk} q_{ik}}\, r_{ik}^{(1-\alpha_{jk})\, q_{ik}}\, P_i(\theta_j + c_i) + (1-d_i)\, P_i(\theta_j) \right],$$

  • [0000]

    where
  • [0013]

    θj is the latent trait of examinee j; p is the probability of an erroneous response by an examinee who is a master; di is the probability of selecting the pre-defined Q-matrix strategy for item i;
  • [0014]

    πik is the probability of correctly applying attribute k to item i given mastery of attribute k; rik is the probability of correctly applying attribute k to item i given non-mastery of attribute k; αjk is an examinee attribute mastery level; and ci is a value indicating the extent to which the Q-matrix entry for item i spans the latent attribute space.
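Once its parameters are fixed, the Unified Model probability above can be evaluated directly. The sketch below assumes P_i is the logistic (Rasch-type) item response function used elsewhere in this document; all numeric values are illustrative:

```python
import numpy as np

def rasch(x):
    """Logistic item response function (assumed form of P_i)."""
    return 1.0 / (1.0 + np.exp(-x))

def unified_prob(alpha, q, pi, r, theta, p, d, c):
    """Unified Model probability of a correct response to one item.
    alpha, q, pi, r are length-K vectors; theta, p, d, c are scalars."""
    diag = np.prod(pi ** (alpha * q) * r ** ((1 - alpha) * q))
    return (1.0 - p) * (d * diag * rasch(theta + c) + (1.0 - d) * rasch(theta))

# Illustrative values for a two-attribute item:
q = np.array([1, 1])
pi = np.array([0.9, 0.8])
r = np.array([0.4, 0.5])
p_master = unified_prob(np.array([1, 1]), q, pi, r, theta=0.0, p=0.01, d=0.95, c=1.0)
p_partial = unified_prob(np.array([0, 1]), q, pi, r, theta=0.0, p=0.01, d=0.95, c=1.0)
```

As expected, non-mastery of a required attribute swaps a π term for the smaller r term inside the product, lowering the response probability.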
  • [0015]
    One problem with the Unified Model is that the number of parameters per item is unidentifiable. The Reparameterized Unified Model (RUM) attempted to reparameterize the Unified Model in a manner consistent with the original interpretation of the model parameters. For a given examinee j, item i, and Q-matrix defined set of attributes k=1, . . . , K, the RUM specifies the probability of correctly answering item Xij as:
  • [0000]

    $$P(X_{ij}=1 \mid \alpha_j, \theta_j) = \pi_i^* \prod_{k=1}^{K} \left(r_{ik}^*\right)^{(1-\alpha_{jk})\, q_{ik}}\, P_{c_i}(\theta_j),$$

  • [0000]

    where
  • [0000]

    $$\pi_i^* = \prod_{k=1}^{K} \pi_{ik}^{q_{ik}}$$

  • [0000]

    (the probability of correctly applying all K Q-matrix specified attributes for item i),
  • [0000]

    $$r_{ik}^* = \frac{r_{ik}}{\pi_{ik}}$$

  • [0000]

    (the penalty imposed for not mastering attribute k), and
  • [0000]

    $$P_{c_i}(\theta_j) = \frac{e^{\theta_j + c_i}}{1 + e^{\theta_j + c_i}}$$

  • [0000]

    (a measure of the completeness of the model).
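Under these definitions, the RUM probability is straightforward to compute; the following is a sketch with illustrative parameter values:

```python
import numpy as np

def rum_prob(alpha, q, pi_star, r_star, theta, c):
    """RUM probability of a correct response: pi* reduced by the r* penalty
    for each required-but-unmastered attribute, times the completeness
    (Rasch) term P_{c_i}(theta_j)."""
    penalty = np.prod(r_star ** ((1 - alpha) * q))
    completeness = np.exp(theta + c) / (1.0 + np.exp(theta + c))
    return pi_star * penalty * completeness

# Illustrative values: two required attributes, pi* = 0.85, c = 1
q = np.array([1, 1])
r_star = np.array([0.4, 0.5])
p_master = rum_prob(np.array([1, 1]), q, 0.85, r_star, 0.0, 1.0)
p_partial = rum_prob(np.array([1, 0]), q, 0.85, r_star, 0.0, 1.0)  # non-master of attribute 2
```

Non-mastery of attribute 2 multiplies the probability by exactly r*_i2 = 0.5, illustrating the proportional-penalty interpretation of the r* parameters.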
  • [0016]
    The RUM is a compromise of the Unified Model parameters that allows the estimation of both latent examinee attribute patterns and test item parameters.
  • [0017]
    Another cognitive diagnosis model derived from the Unified Model is the Fusion Model. In the Fusion Model, the examinee parameters are defined as αj, a K-element vector representing examinee j's mastery/non-mastery status on each of the attributes specified in the Q-matrix. For example, if a test measures five skill attributes, an examinee's αj vector might be ‘11010’, implying mastery of skill attributes 1, 2 and 4, and non-mastery of attributes 3 and 5. The examinee variable θj is normalized as in traditional IRT applications (mean of 0, variance of 1). The probability that examinee j answers item i correctly is expressed as:
  • [0000]

    $$P(X_{ij}=1 \mid \bar{\alpha}_j, \theta_j) = \pi_i^* \prod_{k=1}^{K} \left(r_{ik}^*\right)^{(1-\alpha_{jk})\, q_{ik}}\, P_{c_i}(\theta_j),$$

  • [0000]

    where
  • [0018]
    π*i is the probability of correctly applying all K Q-matrix specified attributes for item i, given that an examinee is a master of all of the attributes required for the item,
  • [0019]
    r*ik is the ratio of (1) the probability of successfully applying attribute k on item i given that an examinee is a non-master of attribute k and (2) the probability of successfully applying attribute k on item i given that an examinee is a master of attribute k, and
  • [0000]

    $$P_{c_i}(\theta_j) = \frac{1}{1 + e^{-(\theta_j + c_i)}}$$

  • [0000]

    is the Rasch Model with easiness parameter ci (0 ≤ ci ≤ 3) for item i.
  • [0020]
    Based on this equation, it is common to distinguish two components of the Fusion Model: (1) the diagnostic component:
  • [0000]

    $$\pi_i^* \prod_{k=1}^{K} \left(r_{ik}^*\right)^{(1-\alpha_{jk})\, q_{ik}},$$

  • [0000]

    which is concerned with the influence of the skill attributes on item performance, and (2) the residual component: Pci(θj), which is concerned with the influence of the residual ability. These components interact conjunctively in determining the probability of a correct response. That is, successful execution of both the diagnostic and residual components of the model is needed to achieve a correct response on the item.
  • [0021]
    The r*ik parameter assumes values between 0 and 1 and functions as a discrimination parameter in describing the power of the ith item in distinguishing masters from non-masters on the kth attribute. The r*ik parameter functions as a penalty by imposing a proportional reduction in the probability of correct response (for the diagnostic part of the model) for a non-master of the attribute, assuming the attribute is needed to solve the item. The ci parameters are completeness indices, indicating the degree to which the attributes specified in the Q-matrix are “complete” in describing the skills needed to successfully execute the item. Values of ci close to 3 represent items with high levels of completeness; values close to 0 represent items with low completeness.
  • [0022]
    The item parameters in the Fusion model have a prior distribution that is a Beta distribution, β(a, b), where (a, b) are defined for each set of item parameters, π*, r*, and c/3. Each set of hyperparameters is then estimated within the MCMC chain to determine the shape of the prior distribution.
  • [0023]
    One difference between the RUM and Fusion Model is that the αjk term is replaced in the Fusion Model with a binary indicator function, I(ᾱjk > κk), where ᾱjk is the underlying continuous variable of examinee j for attribute k (i.e., an examinee attribute value), and κk is the mastery threshold value that ᾱjk must exceed for αjk=1.
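The indicator function simply thresholds the continuous examinee attribute values to obtain binary mastery statuses; as a sketch (illustrative values):

```python
import numpy as np

def mastery_indicator(alpha_tilde, kappa):
    """alpha_jk = I(alpha_tilde_jk > kappa_k): binary mastery is assigned
    exactly when the continuous examinee attribute value exceeds its
    threshold."""
    return (np.asarray(alpha_tilde) > np.asarray(kappa)).astype(int)

# Hypothetical continuous attribute values and thresholds for K = 3 attributes:
alpha = mastery_indicator([0.8, -0.2, 1.5], [0.0, 0.0, 1.0])
```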
  • [0024]
    MCMC algorithms estimate the set of item (b) and latent examinee (θ) parameters by using a stationary Markov chain, (A0, A1, A2, . . . ), with At=(bt, θt). The individual steps of the chain are determined according to the transition kernel, which is the probability of a transition from state t to state t+1, P[(bt+1, θt+1)|(bt, θt)]. The goal of the MCMC algorithm is to use a transition kernel that will allow sampling from the posterior distribution of interest. The process of sampling from the posterior distribution can be evaluated by sampling from the distribution of each of the different types of parameters separately. Furthermore, each of the individual elements of the vector can be sampled separately. Accordingly, the posterior distribution to be sampled for the item parameters is P(bi|X, θ) (across all i) and the posterior distribution to be sampled for the examinee parameters is P(θj|X, b) (across all j).
  • [0025]
    One problem with MCMC algorithms is that the choice of a proposal distribution is critical to the number of iterations required for convergence of the Markov chain. A critical measure of effectiveness of the choice of proposal distribution is the proportion of proposals that are accepted within the chain. If the proportion is low, then many unreasonable values are proposed, and the chain moves very slowly towards convergence. Conversely, if the proportion is very high, the values proposed are too close to the values of the current state, and the chain will again converge very slowly.
  • [0026]
    While MCMC algorithms suffer from the same pitfalls as joint maximum likelihood (JML) optimization algorithms, such as no guarantee of consistent parameter estimates, a potential strength of the MCMC approaches is the reporting of examinee (binary) attribute estimates as posterior probabilities. Thus, MCMC algorithms can provide a more practical way of investigating cognitive diagnosis models.
  • [0027]
    Different methods of sampling values from the complete conditional distributions of the parameters of the model include the Gibbs sampling algorithm and the Metropolis-Hastings within Gibbs (MHG) algorithm. Each of the cognitive diagnosis models fit with MCMC used the MHG algorithm to evaluate the set of examinee variables because the Gibbs sampling algorithm requires the computation of a normalizing constant. A disadvantage of the MHG algorithm is that the set of examinee parameters is considered within a single block (i.e., only one parameter is variable while other variables are fixed). While the use of blocking speeds up the convergence of the MCMC chain, efficiency may be reduced. For example, attributes with large influences on the likelihood may overshadow values of individual attributes that are not as large.
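A generic Metropolis-Hastings update of a single parameter, holding the others fixed, can be sketched as follows. The toy standard-normal target, the random-walk proposal, and its scale are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step(current, log_post, proposal_sd=0.5):
    """One Metropolis-Hastings update for a single parameter, holding all
    other parameters fixed (the 'within Gibbs' blocking described above).
    The proposal is a symmetric random walk, so the acceptance ratio is
    just the posterior ratio."""
    proposal = current + rng.normal(0.0, proposal_sd)
    log_accept = log_post(proposal) - log_post(current)
    if np.log(rng.uniform()) < log_accept:
        return proposal, True
    return current, False

# Toy target: a standard normal posterior for a single examinee parameter.
log_post = lambda t: -0.5 * t * t
theta, accepts = 0.0, 0
for _ in range(2000):
    theta, ok = mh_step(theta, log_post)
    accepts += ok
rate = accepts / 2000  # the acceptance proportion discussed in [0025]
```

Tuning `proposal_sd` moves `rate` between the two failure modes described above: too large a step size rejects most proposals, too small a step size accepts nearly everything but barely moves the chain.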
  • [0028]
    One problem with current cognitive diagnosis models is that they do not adequately evaluate examinees on more than two skill levels, such as master and non-master. While some cognitive diagnosis models do attempt to evaluate examinees on three or more skill levels, the number of variables used by such models is excessive.
  • [0029]
    Accordingly, what is needed is a method for performing cognitive diagnosis using a model that evaluates examinees on individual skills using polytomous attribute skill levels.
  • [0030]
    A further need exists for a method that considers each attribute separately when assessing examinees.
  • [0031]
    A still further need exists for a method of classifying examinees using a reduced variable set for polytomous attribute skill levels.
  • [0032]
    The present disclosure is directed to solving one or more of the above-listed problems.
  • SUMMARY
  • [0033]
    Before the present methods, systems and materials are described, it is to be understood that this invention is not limited to the particular methodologies, systems and materials described, as these may vary. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the invention which will be limited only by the appended claims.
  • [0034]
    It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Thus, for example, reference to an “attribute” is a reference to one or more attributes and equivalents thereof known to those skilled in the art, and so forth. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. Although any methods, materials, and devices similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, the preferred methods, materials, and devices are now described. All publications mentioned herein are incorporated by reference. Nothing herein is to be construed as an admission that the invention is not entitled to antedate such disclosure by virtue of prior invention.
  • [0035]
    In an embodiment, a method for determining attribute score levels from an assessment may include, for at least one item, each testing at least one attribute, on the assessment, generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item, determining an item threshold for at least one score for the first distribution corresponding to a level of execution corresponding to the score, generating a second distribution for at least one attribute tested by the item having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, determining a mean parameter for the second distribution, and determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
  • [0036]
    In an embodiment, a method for determining one or more examinee attribute mastery levels from an assessment may include receiving a covariate vector for an examinee, wherein the covariate vector includes a value for each of one or more covariates for the examinee, and, for each of one or more attributes, computing an examinee attribute value based on at least the covariate vector and one or more responses made by the examinee to one or more questions pertaining to the attribute on an assessment, and assigning an examinee attribute mastery level for the examinee with respect to the attribute based on whether the examinee attribute value surpasses one or more thresholds.
  • [0037]
    In an embodiment, a system for determining attribute score levels from an assessment may include a processor, and a processor-readable storage medium in communication with the processor. The processor-readable storage medium may contain one or more programming instructions for performing a method of determining attribute score levels from an assessment including, for at least one item, each testing for at least one attribute, on the assessment, generating a first distribution having a response propensity represented by a highest level of execution for each attribute tested by the item, for at least one score, determining an item threshold for the first distribution corresponding to a level of execution corresponding to the score, for at least one attribute tested by the item, generating a second distribution having a response propensity represented by a lowest level of execution for the attribute and the highest level of execution for all other attributes tested by the item, and determining a mean parameter for the second distribution, and determining an attribute score level for at least one score based on the at least one item threshold and the at least one mean parameter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0038]
    Aspects, features, benefits and advantages of the embodiments of the present invention will be apparent with regard to the following description, appended claims and accompanying drawings where:
  • [0039]
    FIG. 1 illustrates an exemplary parameterization for the diagnostic part of a model for dichotomously scored items according to an embodiment.
  • [0040]
    FIG. 2 illustrates an exemplary parameterization for the diagnostic part of a model for polytomously scored items according to an embodiment.
  • [0041]
    FIG. 3 is a block diagram of exemplary internal hardware that may be used to contain or implement program instructions according to an embodiment.
  • DETAILED DESCRIPTION
  • [0042]
    The present disclosure discusses embodiments of the Fusion Model, described above, extended to cover polytomous attribute skill levels. The disclosed embodiments may generalize and extend the teachings of the Fusion Model for polytomously-scored items with ordered score categories.
  • [0043]
    In an embodiment, the cumulative score probabilities of polytomously-scored M-category items may be expressed as follows:
  • [0000]

    $$P_{im}^*(\bar{\alpha}_j, \theta_j) = P(X_{ij} \ge m \mid \bar{\alpha}_j, \theta_j) = \begin{cases} 1 & m = 0 \\ \pi_{im}^* \prod_{k=1}^{K} \left(r_{imk}^*\right)^{(1-\alpha_{jk})\, q_{ik}} P_{c_{im}}(\theta_j) & m = 1, \ldots, M_i - 1 \end{cases} \quad (1)$$

  • [0000]

    resulting in item score probabilities that may be expressed as follows:

  • [0000]

    $$P_{im}(\bar{\alpha}_j, \theta_j) = P(X_{ij} = m \mid \bar{\alpha}_j, \theta_j) = \begin{cases} P_{im}^*(\bar{\alpha}_j, \theta_j) - P_{i(m+1)}^*(\bar{\alpha}_j, \theta_j) & m = 0, \ldots, M_i - 2 \\ P_{im}^*(\bar{\alpha}_j, \theta_j) & m = M_i - 1 \end{cases} \quad (2)$$

  • [0000]

    where
  • [0044]

    π*im is the probability of sufficiently applying all item i required attributes to achieve a score of at least m, given that an examinee has mastered all required attributes for the item (π*i1 ≥ π*i2 ≥ . . . ≥ π*i(Mi−1));
  • [0045]

    r*imk is the ratio of (1) the probability of sufficiently applying attribute k required for item i to achieve a score of at least m given that an examinee is a non-master of attribute k, and (2) the probability of sufficiently applying attribute k required for item i to achieve a score of at least m given that an examinee is a master of attribute k (r*i1k ≥ r*i2k ≥ . . . ≥ r*i(Mi−1)k); and
  • [0046]

    Pcim(θj) is a Rasch model probability with easiness parameter cim, m=1, . . . , Mi−1. The easiness parameters are ordered such that ci1 > ci2 > . . . > ci(Mi−1).
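Equations (1) and (2) can be implemented directly; the sketch below computes the score-category probabilities for a single polytomously scored item, with illustrative parameter values ordered as required above:

```python
import numpy as np

def rasch(theta, c):
    """Rasch probability with easiness parameter c."""
    return 1.0 / (1.0 + np.exp(-(theta + c)))

def item_score_probs(alpha, q, pi_star, r_star, c, theta):
    """Score-category probabilities for one polytomous item via Equations
    (1) and (2). pi_star and c have shape (M-1,); r_star has shape (M-1, K).
    Returns P(X_ij = m) for m = 0, ..., M-1."""
    M = len(pi_star) + 1
    cum = np.ones(M)                     # cum[m] = P(X_ij >= m); cum[0] = 1
    for m in range(1, M):
        penalty = np.prod(r_star[m - 1] ** ((1 - alpha) * q))
        cum[m] = pi_star[m - 1] * penalty * rasch(theta, c[m - 1])
    # Equation (2): difference adjacent cumulative probabilities.
    return np.append(cum[:-1] - cum[1:], cum[-1])

# Illustrative 3-category item requiring two attributes, with ordered
# pi*, r*, and c parameters:
probs = item_score_probs(
    alpha=np.array([1, 1]), q=np.array([1, 1]),
    pi_star=np.array([0.9, 0.7]),
    r_star=np.array([[0.5, 0.6], [0.4, 0.5]]),
    c=np.array([2.0, 1.0]), theta=0.0,
)
```

Because the category probabilities telescope from the cumulative ones, they sum to 1, and the parameter orderings keep every category probability non-negative.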
  • [0047]
    A feature of the Fusion Model—its synthesis of a diagnostic modeling component with a residual modeling component—may be seen in Equation (1). In the dichotomous case, each item requires successful execution of both the diagnostic and residual parts of the model; that is, an overall correct response to an item occurs only when both latent responses are positive. In the polytomous case disclosed herein, where multiple score categories may be used, a different metric may be relevant. Instead of a correct response, the polytomous case may calculate whether an examinee's execution is sufficient to achieve a score of at least m, where m=0, 1, . . . , M−1 (assuming an M-category item is scored 0, 1, . . . , M−1). In other words, if the separate latent responses to the diagnostic and residual parts of the model are being scored 0, 1, 2, . . . , M−1, an examinee may only receive a score of m or higher on the item when both latent responses are m or higher. When translated to actual item score probabilities in Equation (2), an examinee may achieve a score that is the minimum of what is achieved across both parts of the model.
  • [0048]
    Controlling the number of new parameters introduced to a polytomous cognitive diagnosis model is important in order to develop a computable model. If too many parameters exist, the processing power needed to compute examinee attribute skill levels using the model may be excessive. Based on Equation (1), every score category in every item (with the exception of the first score category) may include a π*im, a cim, and as many r*imk parameters as there are attributes needed to solve the item. This may result in too many parameters per item to make estimation feasible.
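The count described here is: each of the Mi − 1 non-zero score categories carries one π*im, one cim, and one r*imk per required attribute, i.e., (Mi − 1)(2 + Ki) parameters per item, where Ki is the number of attributes the item requires. As a quick check:

```python
def naive_param_count(M, K_i):
    """Parameters per item in the naive polytomous extension described
    above: each of the M - 1 non-zero score categories carries one pi*,
    one c, and one r* per required attribute."""
    return (M - 1) * (2 + K_i)

# A 4-category item requiring 3 attributes:
count = naive_param_count(4, 3)  # -> 15
```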
  • [0049]
    Alternate parameterization may be used to introduce a mechanism by which realistic constraints may be imposed on the diagnosis-related item parameters (the π*'s and r*'s), while also ensuring that all score category probabilities remain positive for examinees of all latent attribute mastery patterns and all residual ability levels.
  • [0050]
    FIG. 1 illustrates an exemplary parameterization of the diagnostic part of the model for dichotomously scored items according to an embodiment. As shown in FIG. 1, item i requires two attributes (attributes 1 and 2). Underlying normal distributions may represent the likelihood that an examinee in a particular class successfully executes all required attributes in solving the item. For example, the classes may include (1) examinees who have mastered both attributes 1 and 2 105; (2) examinees who have mastered attribute 1, but not attribute 2 110; and (3) examinees who have mastered attribute 2, but not attribute 1 115. An item threshold τi1 120 may define the location corresponding to the level of execution needed for a correct response. Accordingly, the area under the normal curve 105 above τi1 for examinees who have mastered both attributes may be equivalent to π*i in the Fusion Model. The second normal distribution 110 may represent examinees who have mastered attribute 1, but not attribute 2. The second normal distribution 110 may have a mean parameter μi1 125 that is constrained to be less than 0 (the mean of the response propensity distribution for masters of both attributes), and a fixed variance of 1. The area above τi1 for this class may be equal to π*i×r*i2 in the ordinary Fusion Model parameterization. The third normal distribution 115 may represent examinees who have mastered attribute 2, but not attribute 1. The third normal distribution 115 may have a mean parameter μi2 130 that is likewise constrained to be less than 0, and a fixed variance of 1. The area above τi1 for this class may be equal to π*i×r*i1 in the ordinary Fusion Model parameterization. As in the Fusion Model, the probability that an examinee who has not mastered either attribute will successfully execute both is equal to π*i×r*i1×r*i2.
  • [0051]
    As such, three parameters may be estimated for this item in the parameterization: τi1 120, μi1 125, and μi2 130. Each of these parameters may be translated directly into π*i, r*i1, and r*i2 under the usual parameterization of the Fusion Model. The three classes considered above may thus be sufficient to determine the π*i, r*i1, and r*i2 parameters, which may in turn be applied to determine the diagnostic component probability for the class of examinees who are non-masters of both attributes. In general, only as many μ parameters need be determined as there are attributes for the item.
  • [0052]
    By parameterizing the model in this manner, the number of parameters for polytomously-scored items may be minimized. In a polytomously-scored item, additional item threshold parameters τi2, τi3, . . . , τi(M-1) may be added for an M-category item (along with the additional threshold parameters ci2, ci3, . . . , ci(M-1) for the residual part). The area under each normal distribution may be separated into M regions. The area of each region may represent a function of the π*'s and r*'s needed to reproduce the cumulative score probabilities in Equation (1).
  • [0053]
    For example, as shown in FIG. 2, a three-category item (item scores 0, 1, and 2) may require two attributes. FIG. 2 is analogous to FIG. 1, except that an additional threshold parameter is added to account for the added score category. The cumulative score probabilities in Equation (1) may be a function of both a diagnostic component and a residual component. For examinees who have mastered both required attributes (i.e., examinees whose response propensities are represented by the top distribution 205), the probability of executing the attributes sufficiently well to achieve a score of at least 1 may be given by the area above the first threshold τi1 120 under the normal distribution 205. The probability of executing the attributes sufficiently well to achieve a score of at least 2 may be given by the area above the second threshold τi2 220 under the normal distribution 205. For examinees who have failed to master only the second attribute, the areas above τi1 and τi2 in the second distribution 210 may likewise represent the probabilities of executing the attributes sufficiently well to obtain scores of at least 1 and 2, respectively. For examinees who have failed to master only the first attribute, the areas above τi1 and τi2 in the third distribution 215 may represent the corresponding probabilities.
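The areas described above translate directly into category probabilities. The following sketch uses hypothetical values (τi1 = −0.5, τi2 = 0.8, and a propensity mean of 0 for the class that has mastered both attributes) to compute the cumulative and per-category probabilities for one class:

```python
from scipy.stats import norm

# Hypothetical thresholds and class mean for illustration: tau1 < tau2,
# and mu = 0 for examinees who have mastered both required attributes.
tau1, tau2, mu = -0.5, 0.8, 0.0

p_ge1 = 1 - norm.cdf(tau1 - mu)   # area above tau1: P(score >= 1)
p_ge2 = 1 - norm.cdf(tau2 - mu)   # area above tau2: P(score >= 2)

# The three regions under the normal curve give the category probabilities.
p0, p1, p2 = 1 - p_ge1, p_ge1 - p_ge2, p_ge2
```

For a non-mastery class, the same computation applies with its (negative) mean parameter μ in place of 0, shifting the distribution left and shrinking the areas above each threshold.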
  • [0054]
    A Bayesian estimation strategy for the model presented in Equations (1) and (2) may be formally specified in terms of the τ, μ, and c parameters that are estimated directly. The π*'s and r*'s may then be derived from these parameters. The τ, μ, and c parameters may be assigned non-informative uniform priors with order constraints to ensure positive score category probabilities under all conditions. For example, the following priors may be assigned:
    τi1 ~ Unif(−5, 5),

    τim ~ Unif(τi(m−1), 5), for m = 2, . . . , Mi−1,

    ci1 ~ Unif(0, 3),

    cim ~ Unif(0, ci(m−1)), for m = 2, . . . , Mi−1,

    μik ~ Unif(−10, 0), for k = 1, . . . , Ki, where Ki is the number of attributes required by item i (i = 1, . . . , I) in the Q-matrix.
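A minimal sketch of drawing one set of item parameters from these priors, honoring the order constraints (the function name and the use of NumPy are illustrative assumptions, not part of the disclosure):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_priors(M_i: int, K_i: int):
    """Draw one set of item parameters from the uniform priors with
    order constraints: tau increasing, c decreasing, mu negative."""
    tau = np.empty(M_i - 1)
    tau[0] = rng.uniform(-5, 5)
    for m in range(1, M_i - 1):        # tau_im ~ Unif(tau_i(m-1), 5)
        tau[m] = rng.uniform(tau[m - 1], 5)
    c = np.empty(M_i - 1)
    c[0] = rng.uniform(0, 3)
    for m in range(1, M_i - 1):        # c_im ~ Unif(0, c_i(m-1))
        c[m] = rng.uniform(0, c[m - 1])
    mu = rng.uniform(-10, 0, size=K_i) # mu_ik ~ Unif(-10, 0)
    return tau, c, mu
```

The order constraints guarantee that each successive threshold is harder to clear and each successive residual parameter is smaller, so all score category probabilities stay positive.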
  • [0055]
    From these parameters, the more traditional polytomous Fusion Model parameters in Equation (1) may be derived as follows:
    π*im = 1 − Φ(τim), for m = 1, . . . , Mi−1, where Φ denotes the cumulative distribution function (CDF) of the standard normal distribution; and

    r*imk = [1 − Φ(τim − μik)]/π*im, for m = 1, . . . , Mi−1 and k = 1, . . . , Ki.
  • [0056]
    The range (−5, 5) may cover more than 99.99% of the area under a standard normal curve. This may imply vague priors between 0 and 1 for all π*im and r*imk.
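The translation from the τ and μ parameters to the traditional π* and r* parameters can be sketched as follows (the function name and the example threshold and mean values are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def fusion_params(tau, mu):
    """Translate thresholds tau (length M-1) and class mean parameters mu
    (length K, all negative) into pi*_im = 1 - Phi(tau_im) and
    r*_imk = [1 - Phi(tau_im - mu_ik)] / pi*_im."""
    pi_star = 1 - norm.cdf(tau)
    r_star = (1 - norm.cdf(tau[:, None] - mu[None, :])) / pi_star[:, None]
    return pi_star, r_star

# Hypothetical two-threshold, two-attribute item:
tau = np.array([-0.5, 0.8])
mu = np.array([-1.0, -2.0])
pi_star, r_star = fusion_params(tau, mu)
```

Because each μik is negative, τim − μik exceeds τim, so every r*imk falls strictly between 0 and 1, as the model requires.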
  • [0057]
    The correlational structure of the examinee attributes αj may be modeled through the introduction of a multivariate vector of continuous variables α̃j that is assumed to underlie the dichotomous attributes αj. Similar to the theory underlying the computation of tetrachoric correlations, α̃j may be assumed to be multivariate normal, with mean 0, a covariance matrix having diagonal elements of 1, and all correlations estimated. A K-element vector κ may determine the thresholds along α̃j that distinguish masters from non-masters on each attribute. Accordingly, the vector κ may control the proportion of masters on each attribute (pk), with higher settings implying a smaller proportion of masters. Each element of κ may be assigned a normal prior with mean 0 and variance 1. Likewise, normal priors with mean 0 and variance 1 may be imposed on the residual parameters θj.
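The dichotomization of the continuous α̃j by the threshold vector κ can be illustrated by simulation (all numeric values below are hypothetical):

```python
import numpy as np

# Sketch: the continuous vector alpha~_j is multivariate normal with unit
# variances; the threshold vector kappa dichotomizes it into mastery
# indicators, and a higher kappa_k implies fewer masters on attribute k.
rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])            # estimated attribute correlations
kappa = np.array([0.0, 1.0])             # mastery thresholds
alpha_tilde = rng.multivariate_normal(np.zeros(2), corr, size=5000)
alpha = (alpha_tilde > kappa).astype(int)  # 1 = master, 0 = non-master
p_master = alpha.mean(axis=0)            # proportion of masters per attribute
```

With κ1 = 0 roughly half the examinees are masters of attribute 1, while κ2 = 1 leaves a markedly smaller mastery proportion on attribute 2.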
  • [0058]
    In an embodiment, a covariance matrix Σ may be used instead of the correlation matrix to specify the joint multivariate normal distribution for the α̃'s and θ's for each examinee. This covariance matrix may be assigned a non-informative inverse-Wishart prior with K+1 degrees of freedom and a symmetric positive definite (K+1)×(K+1) scale matrix R, Σ˜Inv-WishartK+1(R). An informative inverse-Wishart prior for Σ may also be used by choosing a larger number of degrees of freedom (DF) relative to the number of examinees, and scale matrix R=E(R)*(DF−K−2), where E(R) is the anticipated covariance (or correlation) matrix. Because the α̃jk are latent, they have no predetermined metric; accordingly, their variances may not be identified. However, such variances are only required in determining αjk. This indeterminacy may not affect the determination of the dichotomous αjk, since the threshold κk may adjust according to the variance of α̃jk. This results because the sampling procedure used for MCMC estimation samples parameters from their full conditional distributions, such that κk is sampled conditionally upon α̃jk. As a result, if the variances drift over the course of the chain, the κk may tend to follow the variance drift such that the definition of attribute mastery remains largely consistent (assuming the mastery proportions are estimable). The latent attribute correlation matrix may be derived from the covariance matrix once an MCMC chain has finished.
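A sketch of drawing Σ from the non-informative inverse-Wishart prior and deriving the latent correlation matrix, assuming K = 3 attributes plus the residual dimension and an identity scale matrix R (an illustrative choice, not specified in the disclosure):

```python
import numpy as np
from scipy.stats import invwishart

# Draw a (K+1)x(K+1) covariance matrix from an inverse-Wishart prior with
# K+1 degrees of freedom, then rescale it to a correlation matrix, as the
# latent attribute correlations may be derived once a chain has finished.
K = 3
R = np.eye(K + 1)                         # symmetric positive definite scale
sigma = invwishart.rvs(df=K + 1, scale=R, random_state=0)
d = np.sqrt(np.diag(sigma))
corr = sigma / np.outer(d, d)             # derived correlation matrix
```

Dividing by the outer product of the standard deviations removes the unidentified variance metric, mirroring how the correlation matrix is recovered from the sampled covariance matrix after estimation.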
  • [0059]
    In an embodiment, a covariance structure may be applied for the latent attribute correlations. For example, since many tests are substantially unidimensional in nature, the latent attribute correlations may conform to a single factor model. For an examinee j and an attribute k, this may be expressed as:
    α̃jk = λkFj + ejk,

    where
  • [0060]
    Fj is the level on the second order factor underlying the attribute correlations for examinee j, specified to have mean 0 and variance 1;
  • [0061]
    λk represents the factor loading for attribute k on the second order factor; and
  • [0062]
    ejk represents a uniqueness term with mean 0 across examinees and variance Ψk.
  • [0063]
    Accordingly, a new matrix Σ* based on the factor loadings and uniqueness variances may be used to replace the covariance matrix Σ described above. The λ parameters may be sampled for each attribute in place of the covariance matrix Σ. In addition, Ψk may be set to (1 − λk²). As such, a consistent metric with variance 1 may be imposed on the α̃jk parameters. In an embodiment, a uniform prior may be imposed on each λk with bounds of, for example, 0.2 and 1.0.
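The construction of the attribute block of Σ* from the loadings and uniqueness variances can be sketched as follows (the loading values are hypothetical, chosen within the example bounds of 0.2 and 1.0):

```python
import numpy as np

# Single-factor structure: Sigma* = lambda lambda' + diag(Psi), with
# Psi_k = 1 - lambda_k**2 so each alpha~_jk keeps a variance of exactly 1.
lam = np.array([0.6, 0.8, 0.5])     # hypothetical loadings, one per attribute
psi = 1.0 - lam**2                  # uniqueness variances
sigma_star = np.outer(lam, lam) + np.diag(psi)
```

Setting Ψk = 1 − λk² forces the diagonal of Σ* to 1, which is what fixes the metric of the α̃jk and removes the variance indeterminacy discussed above.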
  • [0064]
    FIG. 3 is a block diagram of exemplary internal hardware that may be used to contain or implement program instructions according to an embodiment. Referring to FIG. 3, a bus 328 serves as the main information highway interconnecting the other illustrated components of the hardware. CPU 302 is the central processing unit of the system, performing calculations and logic operations required to execute a program. Read only memory (ROM) 318 and random access memory (RAM) 320 constitute exemplary memory devices.
  • [0065]
    A disk controller 304 interfaces one or more optional disk drives to the system bus 328. These disk drives may be external or internal floppy disk drives such as 310, CD ROM drives 306, or external or internal hard drives 308. As indicated previously, these various disk drives and disk controllers are optional devices.
  • [0066]
    Program instructions may be stored in the ROM 318 and/or the RAM 320. Optionally, program instructions may be stored on a computer readable medium such as a floppy disk or a digital disk or other recording medium, a communications signal or a carrier wave.
  • [0067]
    An optional display interface 322 may permit information from the bus 328 to be displayed on the display 324 in audio, graphic or alphanumeric format. Communication with external devices may optionally occur using various communication ports 326. An exemplary communication port 326 may be attached to a communications network, such as the Internet or an intranet.
  • [0068]
    In addition to the standard computer-type components, the hardware may also include an interface 312 which allows for receipt of data from input devices such as a keyboard 314 or other input device 316 such as a remote control, pointer and/or joystick.
  • [0069]
    An embedded system may optionally be used to perform one, some or all of the disclosed operations. Likewise, a multiprocessor system may optionally be used to perform one, some or all of the disclosed operations.
  • [0070]
    As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for the designing of other structures, methods and systems for carrying out the several purposes of the present invention. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed embodiments.
Classifications
U.S. Classification: 434/362
International Classification: G09B7/00, G06F17/30, G09B7/02
Cooperative Classification: G09B7/00, G09B7/02, Y10S707/99943
European Classification: G09B7/00, G09B7/02
Legal Events
Date: Jul. 31, 2008
Code: AS
Event: Assignment
Owner name: EDUCATIONAL TESTING SERVICE, NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOLT, DANIEL;FU, JIANBIN;REEL/FRAME:021323/0213
Effective date: 20050419