IMS Question & Test Interoperability:
An Overview

Final Specification Version 1.2
Copyright © 2002 IMS Global Learning Consortium, Inc. All Rights Reserved.
The IMS Logo is a trademark of IMS Global Learning Consortium, Inc.
Document Name: IMS Question & Test Interoperability: An Overview
Date: 11 February 2002

 

IPR and Distribution Notices

Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the specification set forth in this document, and to provide supporting documentation.

IMS takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on IMS's procedures with respect to rights in IMS specifications can be found at the IMS Intellectual Property Rights web page: http://www.imsglobal.org/ipr/imsipr_policyFinal.pdf.

Copyright © 2002 IMS Global Learning Consortium. All Rights Reserved.

Permission is granted to all parties to use excerpts from this document as needed in producing requests for proposals.

Use of this specification to develop products or services is governed by the license with IMS found on the IMS website: http://www.imsglobal.org/license.html.

The limited permissions granted above are perpetual and will not be revoked by IMS or its successors or assigns.

THIS SPECIFICATION IS BEING OFFERED WITHOUT ANY WARRANTY WHATSOEVER, AND IN PARTICULAR, ANY WARRANTY OF NONINFRINGEMENT IS EXPRESSLY DISCLAIMED. ANY USE OF THIS SPECIFICATION SHALL BE MADE ENTIRELY AT THE IMPLEMENTER'S OWN RISK, AND NEITHER THE CONSORTIUM, NOR ANY OF ITS MEMBERS OR SUBMITTERS, SHALL HAVE ANY LIABILITY WHATSOEVER TO ANY IMPLEMENTER OR THIRD PARTY FOR ANY DAMAGES OF ANY NATURE WHATSOEVER, DIRECTLY OR INDIRECTLY, ARISING FROM THE USE OF THIS SPECIFICATION.


 

Table of Contents


1. The Question & Test Interoperability Specification
     1.1 Historical Perspective
     1.2 The Requirements
     1.3 Key Terminology
     1.4 The Documents
           1.4.1 Documentation Overview
           1.4.2 QTI Overview
           1.4.3 The ASI Information Model
           1.4.4 The ASI XML Binding Document
           1.4.5 The ASI Best Practices & Implementation Guide
           1.4.6 The ASI Selection & Ordering Specification
           1.4.7 The ASI Outcomes Processing Specification
           1.4.8 The Results Reporting Information Model
           1.4.9 The Results Reporting XML Binding Document
           1.4.10 The Results Reporting Best Practices & Implementation Guide
           1.4.11 The QTILite Specification
     1.5 The QTI Structures
     1.6 Using the IMS QTI Specification

Bibliography

Appendix A - Glossary of Terms

About This Document
     List of Contributors


1. The Question & Test Interoperability Specification

The IMS Question & Test Interoperability (QTI) specification describes a basic structure for the representation of question (item) and test (assessment) data and their corresponding results reports. The specification therefore enables the exchange of item, assessment and results data between Learning Management Systems, content authors, and content libraries and collections. The IMS QTI specification is defined in XML to promote the widest possible adoption. XML is a powerful, flexible, industry-standard markup language used to encode data models for Internet-enabled and distributed applications. The IMS QTI specification is extensible and customisable to permit immediate adoption, even in specialized or proprietary systems. Leading suppliers and consumers of learning products, services and content contributed time and expertise to produce this final specification. The IMS QTI specification, like all IMS specifications, does not limit product designs by specifying user interfaces or pedagogical paradigms, or by establishing technology or policies that constrain innovation, interoperability, or reuse.

1.1 Historical Perspective

An initial V0.5 specification was released for discussion in March 1999 and the corresponding Base Document was agreed in November 1999. The first Public Draft Specification was released in February 2000 and the IMS Question & Test Interoperability v1.0 specifications were released in their final form in May 2000. A version 1.01 update was released in August 2000. During the development of these specifications and their subsequent adoption by the community, several areas of further work were identified, and the '1.x' versions of the IMS QTI specifications were scoped to address these issues. Version 1.1, released in March 2001, introduced the QTILite specification. Results reporting was introduced in V1.2, released in January 2002, as were the concepts of 'selection & ordering' and 'outcomes processing'. To date, in excess of 6000 copies of the IMS QTI V1.0/V1.01/V1.1/V1.2 specifications have been downloaded from the IMS web-site.

1.2 The Requirements

The IMS QTI Working Group's work specifically relates to content providers (that is, question and test creators), virtual learning environment and tool vendors, and question/test users (that is, learners and teachers or trainers needing assessment tools). The targeted markets include primary and secondary education, community, junior and vocational colleges, higher education, and commercial and military training. The IMS QTI specifications are intended to meet international needs as well. Therefore the IMS QTI Working Group was focussed on enabling the following functionality:

  • The ability to provide question/item banks to users regardless of virtual learning environment (VLE) deployed by the user;
  • The ability to use question/item banks from various sources within a single VLE;
  • Support for tools to develop new question/item banks in a consistent manner;
  • The ability to report test results in a consistent manner.

Consequently, the following requirements have been suggested and are presented in order of priority:

  • Definition of standardised attributes (question meta-data) for questions, choices, feedback/branching and scoring, along with identification of required and optional elements;
  • Interoperability of question/item banks - definition for packaging and distribution;
  • Extended schema for results reporting;
  • Extended schema for assessment, tracking and presentation;
  • APIs for dynamic interface into question retrieval and scoring/assessment engines.

It is also considered essential that the specification allows for extensibility and flexibility based on yet unidentified future needs and necessitated by specific customised implementations.

1.3 Key Terminology

Despite its name, the IMS QTI specification details more than how to tag questions, tests and results. Standard question types, e.g. multiple choice, fill-in-the-blank, or true/false, can be constructed using a core set of presentation and response structures, and the results of questions can be collected and scored using a variety of methods. To represent these options, the IMS QTI specification defines the 'Item'. Items contain all the necessary data elements required to compose, render, score and provide feedback from questions. Therefore, the key difference between a 'Question' and an 'Item' is that an 'Item' contains the 'Question', the layout rendering information, the associated response processing information, and the corresponding hints, solutions and feedback.

Similarly, the 'test' is an instance of an Assessment. Assessments are assembled from Items that are contained within a 'Section' to resemble a traditional test. Additionally, Assessments might be assembled from blocks of Items that are logically related. These groups are also defined as 'Sections', and so Assessments are composed of one or more Sections, which are themselves composed of Items and/or further Sections. Collectively, these three data objects are referred to as the ASI (Assessment, Section, Item) structures. These evaluation objects can be bundled together to create an object bank, which can then be externally referenced and used as a single evaluation object. To avoid the limitations associated with words like user, student, or learner, the IMS QTI working group adopted the term 'participant' to refer to the person interacting with an assessment. Thus, the key definitions are:

  • Item - A combination of interrogatory, rendering, and scoring information;
  • Section - A collection of zero or more items and/or other Sections;
  • Assessment - A collection of one or more Sections;
  • Object Bank - A group of Items and/or Sections that have been bundled e.g. to create an Item-bank;
  • Participant - The user interacting with an assessment.
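These definitions can be illustrated with a sketch of a minimal multiple-choice Item in QTI-XML. The element names below follow the ims_qtiasiv1p2 ASI binding; the identifiers, question text and score value are purely illustrative:

```xml
<questestinterop>
  <item ident="ITEM_001" title="Example multiple-choice Item">
    <!-- The 'Question' and its rendering -->
    <presentation>
      <material>
        <mattext>Which city is the capital of France?</mattext>
      </material>
      <response_lid ident="RESPONSE" rcardinality="Single">
        <render_choice>
          <response_label ident="A">
            <material><mattext>Paris</mattext></material>
          </response_label>
          <response_label ident="B">
            <material><mattext>Lyon</mattext></material>
          </response_label>
        </render_choice>
      </response_lid>
    </presentation>
    <!-- The response processing: score 1 for the correct choice -->
    <resprocessing>
      <outcomes>
        <decvar/>
      </outcomes>
      <respcondition>
        <conditionvar>
          <varequal respident="RESPONSE">A</varequal>
        </conditionvar>
        <setvar action="Set">1</setvar>
      </respcondition>
    </resprocessing>
  </item>
</questestinterop>
```

The Item thus carries the question text, the rendering instructions and the scoring rules together as a single exchangeable unit.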

When constructing the results report for an Assessment, a Section or an Item, a similar structure is used. A results report can contain a summary of the results and/or the detailed set of results with respect to the assessment, section(s) and item(s). Each results report is contained within its own package that also describes the context for the evaluation, e.g. the participant identifier, etc.

1.4 The Documents

1.4.1 Documentation Overview

The QTI specification comprises ten separate documents. Each document has particular relevance to one or more of the released versions of the QTI specification, as shown in Table 1.1.

 

Table 1.1 Documentation for each version of the QTI.

 
Documentation                           V1.0   V1.01   V1.1   V1.2
ASI Information Model                    *       *       *      *
ASI XML Binding                          *       *       *      *
ASI Best Practice Guide                  *       *       *      *
QTILite Specification                                    *      *
ASI Selection & Ordering                                        *
ASI Outcomes Processing                                         *
Results Reporting Information Model                             *
Results Reporting XML Binding                                   *
Results Reporting Best Practice Guide                           *
QTI Overview                                                    *

'*' denotes the documents that were produced for each version of the IMS QTI specifications.

The technical structure of the IMS QTI specification is based upon two components:

  • The ASI components that are used to describe the actual evaluation objects to be presented to the participants;
  • The results reporting objects that are used to contain the results to be reported for an evaluation undertaken by a participant.

It is important to stress that the two components of the IMS QTI specification do not have to be used together. The results reporting part of the specification can be used to contain results obtained from an evaluation based upon the QTI ASI structures, but this is not a prerequisite. This means that either one or both of the parts of the QTI specification can be used as appropriate.

1.4.2 QTI Overview

The role of the QTI Overview (this document) is to provide a brief description of the full IMS QTI specification. This is the only document that brings together all of the other documents to provide a perspective on the full specification.

1.4.3 The ASI Information Model

The QTI ASI Information Model document comprises several sections. The first section contains use cases in which the underlying usage, processing control, and core data structures of the QTI ASI specification are described. It also details the taxonomy of responses, as well as their relationship to question types and the larger group of 'Items'. The basic information model itself is outlined in conceptual terms by using a tabular layout of the Assessment, Section, and Item objects in terms of their elements, sub-elements and attributes. The Item, Section, and Assessment meta-data, which are used to catalogue these objects, are also described. In addition, the document contains a conformance statement to be used by vendors who plan to implement the specification; we have adopted a descriptive approach to conformance, thereby enabling vendors to implement subsets of the full specification.

1.4.4 The ASI XML Binding Document

The XML Binding document describes the implementation of the ASI information model in XML. XML is introduced by outlining XML basics, including a conceptual discussion of the XML schema. The XML schema description of the QTI specification (ims_qtiasiv1p2.xsd and ims_qtiasiv1p2.dtd) defines the Assessment, Section, and Item as XML elements. An example schema for Assessments, Sections, and Items is included, along with details of the meta-data used to catalogue Assessments, Sections, and Items. Some of the XML Binding documents also include, as appendices, a copy of the uncommented XSD, as well as the uncommented DTD and XDR (XDR is a Microsoft Corporation XML schema implementation).

1.4.5 The ASI Best Practices & Implementation Guide

This document is intended to provide vendors with an overall understanding of the ASI specification, the relationship of this specification with other IMS specifications, and a best practices guide derived from the experiences of those using the specification. Example Item types supported by the specification, examples of composite Item types, and a complete XML example for presenting an Assessment, Section, and Item are included. The Best Practices & Implementation Guide also includes a significant number of actual examples that describe how vendors can make the best use of the IMS QTI specification. These examples, approximately eighty, are also useful as starting templates for each of the different forms of Assessment, Section and Item. Appendices provide the range of available DTDs, XDRs and XSDs (as appropriate), as well as a glossary of the XML structures used throughout the specification.

1.4.6 The ASI Selection & Ordering Specification

The 'Selection & Ordering' specification describes how the order in which Sections and/or Items are presented can be controlled. The selection and ordering process is a two-stage operation in which the child objects are first selected according to some defined criteria, e.g. meta-data content, etc., and the order of their presentation is then determined. The selection and ordering process within an object is limited to the immediate children of that object, and so complex requirements must be based upon the appropriate usage of Sections to contain the Section/Item hierarchies. This document contains the relevant information model, XML binding and best practices guidance, but it should be read in the context of the core ASI documents.
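As a sketch of the binding (the element names follow the ims_qtiasiv1p2 schema; the numbers are illustrative), a Section that randomly selects two of its immediate child objects and then randomises their presentation order might carry:

```xml
<selection_ordering>
  <!-- Stage 1: select two of the immediate child objects -->
  <selection>
    <selection_number>2</selection_number>
  </selection>
  <!-- Stage 2: present the selected objects in random order -->
  <order order_type="Random"/>
</selection_ordering>
```

Because the process only reaches the immediate children, deeper selection rules are expressed by nesting further Sections, each with its own selection and ordering instructions.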

1.4.7 The ASI Outcomes Processing Specification

The 'Outcomes Processing' specification describes how the aggregated scores at the Assessment and Section levels can be derived. These scoring outcomes are based upon the child Sections and/or Items. Several scoring algorithms are supported (sum-of-scores, number-correct, guessing-penalty and best-K-of-N) through the usage of a predefined set of parameterised instructions; these avoid having to realise the algorithms within the XML. This document contains the relevant information model, XML binding and best practices guidance, but it should be read in the context of the core ASI documents. Each of the supported scoring algorithms is explained in an Appendix.
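For example (a sketch only; the scoremodel attribute names one of the predefined algorithms, and the variable declaration shown is illustrative), a sum-of-scores aggregation at the Assessment or Section level might be declared as:

```xml
<outcomes_processing scoremodel="SumofScores">
  <outcomes>
    <!-- The aggregated variable produced by the predefined algorithm -->
    <decvar varname="SCORE" vartype="Integer" defaultval="0"/>
  </outcomes>
</outcomes_processing>
```

The algorithm itself is never spelled out in the XML; the instance merely names it and parameterises it.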

1.4.8 The Results Reporting Information Model

The IMS QTI Results Reporting Information Model document comprises several sections. The first section contains use cases in which the underlying usage, processing control, and core data structures of the results to be reported are described. The basic information model itself is outlined in conceptual terms by using a tabular layout of the context, summary, Assessment results, Section results, and Item results objects in terms of their elements, sub-elements and attributes. The corresponding meta-data, which are used to catalogue these objects, are also described. In addition, the document contains a conformance statement to be used by vendors who plan to implement the specification; we have adopted a descriptive approach to conformance, thereby enabling vendors to implement subsets of the full specification.

1.4.9 The Results Reporting XML Binding Document

The XML Binding document describes the implementation of the Results Reporting information model in XML. XML is introduced by outlining XML basics, including a conceptual discussion of the XML schema. The XML schema description of the QTI specification (ims_qtiresv1p2.xsd and ims_qtiresv1p2.dtd) defines the results report objects as XML elements. An example schema is included, along with details of the meta-data used to catalogue the results report.

1.4.10 The Results Reporting Best Practices & Implementation Guide

This document is intended to provide vendors with an overall understanding of the results reporting specification, the relationship of this specification with other IMS specifications (including the QTI ASI), and a best practices guide derived from experiences of those using the specification. Example results reports are included along with a demonstration of how SCORMv1.2 results reporting can be supported. The Best Practices & Implementation Guide also includes a significant number of actual examples that describe how vendors can make the best use of the results reporting specification. These examples are also useful as a starting template for each of the different forms of results report. Appendices provide the range of available DTDs and XSDs (as appropriate), as well as a glossary of the XML structures used throughout the specification.

1.4.11 The QTILite Specification

This document describes the components that are required to construct the simplest form of an IMS QTI-compliant system. QTILite supports multiple-choice questions only (this includes true/false questions) and limits the rendering form to the classical one response from a set of choices. Multiple Items can be exchanged in a single QTI-XML instance, but Assessments and Sections are not supported. The QTILite specification is a standalone document in that none of the other documents are required to understand and construct QTILite-compliant systems. All QTILite-compliant Items are compliant with the full IMS QTI V1.1 and V1.2 specifications, but they are not backwards compatible with V1.0 or V1.01 of the specification. QTILite was introduced as an aid to understanding the QTI specification and is not intended for widespread long-term adoption.

1.5 The QTI Structures

The core data structures that can be exchanged using IMS QTI are shown schematically in Figures 1.1 and 1.2. The four ASI structures are:

  • Item(s) - one or more Items can be contained within a QTI-XML instance. The Item is the smallest independent unit that can be exchanged using IMS QTI. An Item cannot be composed of other Items. An Item is more than a 'Question' in that it contains the 'Question', the presentation/rendering instructions, the response processing to be applied to the participant's response(s), the feedback that may be presented (including hints and solutions), and the meta-data describing the Item;


 

Figure 1.1 The core ASI data structures that can be exchanged using IMS QTI.
  • Section(s) - one or more Sections can be contained within a QTI-XML instance. A Section can contain any mixture of Sections and/or Items. A Section is used to support two different needs:
    • To represent different grouping constructs as defined by the appropriate educational paradigm, e.g. a Section could be equivalent to a subject topic;
    • To constrain the extent of the sequencing instructions and to control the ways in which the different possible sequences may be constructed;
  • Assessment - only one Assessment can be contained within a QTI-XML instance; it is not possible to define relationships between Assessments. Each Assessment must contain at least one Section, and thus it is not possible to house Items directly within an Assessment. The Assessment contains all of the necessary instructions to enable variable sequencing of the Items and the corresponding aggregated scoring of all the Items to produce the final score;
  • Object bank - a collection of data objects (Sections and/or Items) bundled together can be labelled as an object bank, i.e. a group of Items can be grouped to produce an Item-bank.

While the definition of an Item and an Assessment is well established, it must be stressed that the 'Section' is merely a grouping construct. This allows any level of grouping of Items and/or Sections. What these Sections actually mean in an assessment environment depends on the ways in which their contents are to be used.
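The nesting of the ASI structures within a single QTI-XML instance can be sketched as follows (the identifiers and titles are illustrative; each Item would contain its own presentation and response processing as described above):

```xml
<questestinterop>
  <assessment ident="A01" title="End-of-module test">
    <section ident="S01" title="Topic one">
      <item ident="I01" title="Question 1">
        <!-- presentation and resprocessing for this Item -->
      </item>
      <item ident="I02" title="Question 2">
        <!-- presentation and resprocessing for this Item -->
      </item>
    </section>
    <section ident="S02" title="Topic two">
      <!-- further Sections and/or Items -->
    </section>
  </assessment>
</questestinterop>
```

Note that the Items sit inside Sections, never directly inside the Assessment, and that a Section may itself contain further Sections to any depth.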


 

Figure 1.2 The core results reporting data structures that can be exchanged using IMS QTI.

The core Results Reporting data structures are:

  • Result - the set of results relevant to an actual attempt of an assessment or some other form of evaluation. Multiple instances for a single participant can be contained within the IMS QTI Results Reporting package;
  • Context - the contextual information concerning the actual result being reported, e.g. the name of the participant, participant identifiers, etc;
  • Summary_result - the summary information for a particular instance of the evaluation. Each result can contain only one set of summary information;
  • Assessment_result - the detailed assessment information for a particular attempt at the assessment. Each result can contain information about one assessment only (this includes descriptions of any contained sections and items);
  • Section_result - the detailed information about the section(s) completed or being attempted. Each result can contain information about one section (this includes descriptions of any contained sections and/or items);
  • Item_result - the detailed information about the item(s) completed or being attempted. Each result can contain information about one item.

The results reporting components are capable of containing the results from all of the components of an Assessment, Section and Item. This means that it is simple to use the IMS QTI Results Reporting XML binding to report the results obtained from an evaluation based upon IMS QTI ASI.
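A skeletal results report might therefore take the following shape (a sketch only; the top-level element names follow the structures described above, while the nested contents shown are illustrative and incomplete):

```xml
<qti_result_report>
  <result>
    <!-- Who and what the result refers to -->
    <context>
      <name>Jane Example</name>
    </context>
    <!-- At most one set of summary information per result -->
    <summary_result>
      <!-- e.g. the overall score for this attempt -->
    </summary_result>
    <!-- Detailed results for the one assessment attempted -->
    <assessment_result ident_ref="A01">
      <!-- contained section_result and item_result structures -->
    </assessment_result>
  </result>
</qti_result_report>
```

Multiple result structures, one per attempt, can be carried for a single participant within the same report package.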

1.6 Using the IMS QTI Specification

Users wishing to adopt the IMS QTI specifications are advised to start with either the QTILite specification or the Best Practice & Implementation Guide documents. These documents contain extensive examples. All of these examples are available as part of the IMS QTI toolkit, and as such they make excellent templates. Several versions of the XSDs, DTDs and XDRs exist in terms of file structure (IBM, Unix, MacOS) and functional complexity (QTILite; Item-only; Items and Sections; core elements, i.e. excluding the extension and V1.x/V2.0 features; the full uncommented version; and the full commented version). Beginners should focus on the QTILite and Item-only versions. The fully commented version should be avoided unless a documented version of the schema is required. There are three core IMS QTI XML DTD/XSDs:

  • 'ims_qtiasiv1p2.dtd' and 'ims_qtiasiv1p2.xsd';
  • 'ims_qtiresv1p2.dtd' and 'ims_qtiresv1p2.xsd';
  • 'ims_qtilitev1p2.dtd'.

The IMS QTI specification includes its own IMS QTI-specific meta-data features, but the IMS Meta-data can also be used. The IMS Content Packaging specification incorporates the IMS QTI specification as a native structure, i.e. the Assessment, Section and Item XML can be contained within a Content Package's XML. It is recommended that the IMS Content Packaging mechanism be used for the physical exchange of QTI-XML instances.

Bibliography

[ETS, 99] A Sample Assessment Using the Four Process Framework, R.Almond, L.Steinberg and R.Mislevy, ETS Working Paper, October 1998.
[IMS, 01a] IMS Persistent, Location-Independent Resource Identifier Implementation Handbook, M.McKell, Version 1.0, IMS, May 2001.
[IMS, 01b] Using the IMS Content Packaging to Package Instances of LIP and Other IMS Specifications Implementation Handbook, V1.0, B.Olivier and M.McKell, IMS Specifications, August 2001.
[QTI, 99a] IMS Question & Test Interoperability Requirement Specification, C.Smythe, Version 1.0, Draft 0.3, IMS, November 1999.
[QTI, 00] IMS Question & Test Interoperability Version 1.x Scoping Statement, C.Smythe and E.Shepherd, Version 1.0, IMS, November 2001.
[QTI, 02a] IMS Question & Test Interoperability: ASI Information Model, C.Smythe, E.Shepherd, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02b] IMS Question & Test Interoperability: ASI XML Binding Specification, C.Smythe, E.Shepherd, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02c] IMS Question & Test Interoperability: ASI Best Practice & Implementation Guide, C.Smythe, E.Shepherd, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02d] IMS Question & Test Interoperability: ASI Selection & Ordering Specification, C.Smythe, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02e] IMS Question & Test Interoperability: ASI Outcomes Processing Specification, C.Smythe, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02f] IMS Question & Test Interoperability: Results Reporting Information Model, C.Smythe, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02g] IMS Question & Test Interoperability: Results Reporting XML Binding, C.Smythe, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02h] IMS Question & Test Interoperability: Results Reporting Best Practice & Implementation Guide, C.Smythe, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[QTI, 02i] IMS Question & Test Interoperability: QTILite Specification, C.Smythe, E.Shepherd, L.Brewer and S.Lay, Final Specification, Version 1.2, IMS, February 2002.
[RFC1521] MIME (Multipurpose Internet Mail Extensions) Part One: Mechanisms for Specifying and Describing the Format of Internet Message Bodies, N.Borenstein and N.Freed, IETF, IETF Request for Comment, September 1993.
[RFC1630] Universal Resource Identifiers in WWW: A Unifying Syntax for the Expression of Names and Addresses of Objects on the Network as used in the World-Wide Web, T. Berners-Lee, IETF, IETF Request for Comment, June 1994.

Appendix A - Glossary of Terms

adaptive testing A sequential form of individual testing in which successive items in the test are chosen based primarily on the psychometric properties and content of the items and the test taker's response to previous items.
ADL The Advanced Distributed Learning group that was started by the United States White House in 1997. It aims to advance the use of online training for usage by the Department of Defense.
AICC Aviation Industry CBT Committee is a membership-based international forum that develops recommendations on interoperable learning technologies.
answer key The key that describes the scoring scenario for a question or test.
arrange objects response A response style in which the test taker arranges one or more objects.
assessment Any systematic method of obtaining evidence from tests, examinations, questionnaires, surveys and collateral sources used to draw inferences about characteristics of people, objects, or programs for a specific purpose. One of the IMS QTI core data objects.
assessment engine The process that supports the evaluation of the responses to produce scores and feedback.
authoring system A generic name for one or more computer programs that allow a user to author and edit items (i.e. questions, choices, correct answers, scoring scenarios and outcomes) and maintain test definitions (i.e. how items are delivered within a test).
battery A set of tests standardized on the same population, so that norm-referenced scores on the several tests can be compared or used in combination for decision making.
bilingual The characteristic of being relatively proficient in two languages.
candidate A person that participates in a test, assessment or exam by answering questions.
candidate data repository The database of candidate specific information.
certification A form of credentialing, usually used to refer to a voluntary credential not involving governmental sanction. See also licensing.
certification processing The process of matching an individual's accomplishments against the requirements for a certification program, and awarding certifications when all requirements have been met.
character set The characters used by a computer to display information.
choice One of the possible responses that a test taker might select. Choices contain the correct answer/s and distracters.
composite response A combination of response styles presented within a single item.
composite score A score that combines several scores by a specified formula.
computerized adaptive test An adaptive test administered by computer. See adaptive test.
conditional measurement error variance The variance of measurement errors that affect the scores of examinees at a specified test score level; the square of the conditional standard error of measurement.
conformance statement A conformance statement provides a mechanism for customers to fairly compare vendors of assessment tools and content.
construct response item An exercise for which examinees must create their own responses or products rather than choose a response from an enumerated set.
cut score A specified point on a score scale, such that scores at or above that point are interpreted differently from scores below that point. Sometimes there is only one cut score, dividing the range of possible scores into "passing" and "failing" or "mastery" and "non-mastery" regions. Sometimes two or more cut scores may be used to define three or more score categories, as in establishing performance standards. See also, performance standards.
database A collection of information/data, often organized within tables, within a computer's mass storage system. Databases are structured in a way to provide for rapid search and retrieval by computer software. The following databases are used by testing systems: item, test definition, scheduling and results.
delivery channel One or more testing centers, usually managed by a delivery provider (i.e. an organization that provides candidate scheduling services, computers, proctoring services, and the space in which to conduct a computerized test).
delivery provider An organization that provides candidate scheduling services, computers, proctoring services, and the space in which to conduct a computerized test.
distracter One of the choices that a test taker may select that is not the correct answer.
difficulty A statistical property, sometimes known as facility, indicating the level of a question, from 0.0 to 1.0. Calculated as the average score for the question divided by the maximum achievable score. A facility of 0.0 means that the question is very hard (no-one got it right) and 1.0 means that it is very easy (no-one got it wrong). A facility of 0.5 is considered ideal.
drag and drop response A response style where the test taker indicates their choice by dragging an image from one place to another.
DTD Document Type Definition.
dynamic sequencing Sequencing of items or sections based upon the previous responses of a test taker.
element An XML term that defines a component within an XML document that has been identified in a way a computer can understand.
element contents An XML term used to describe the content of the element.
element attributes Provide additional information about an element.
essay response A response style where the test taker enters an essay in response to the stimulus.
facility A statistical property, indicating the level of a question, from 0.0 to 1.0. Calculated as the average score for the question divided by the maximum achievable score.
feedback Information provided to a participant to aid the learning process.
fill-in-the-blank(s) A response style where the test taker completes a phrase by entering a word, words or a number.
gain score The difference between the score on a test and the score on an earlier administration of the same or an equivalent test.
grade equivalent score The school grade level for which a given score is the actual or estimated median or mean.
high-stakes test A test whose result has important, direct consequences for examinees, programs, or institutions tested.
holistic scoring A method of obtaining a score on a test, or a test item, that results from an overall judgement of performance using specified criteria.
hot-spot response A response style where the test taker indicates their selection by using a mouse or pointing device on a graphic display.
IEEE The Institute of Electrical and Electronics Engineers, an organization that provides a forum for developing specifications and standards.
image hotspot response See hot-spot response.
IMS The IMS Global Learning Consortium, an organization dedicated to developing specifications for distributed learning.
intelligence test A psychological or educational test designed to measure intellectual processes in accord with some evidence-based theory of intelligence.
invigilator A person who proctors a test.
item The questions, choices, correct answer, scoring scenarios and outcomes used within a test. One of the IMS QTI core data objects.
item analysis The process of studying the responses to questions delivered in the pilot study or prototype in order to select the best questions in terms of facility and discrimination.
item pool The aggregate of items from which a test or test scale's items are selected during test development, or the total set of items from which a particular test is selected for a test taker during adaptive testing.
item prompt The question, stimulus, or instructions that direct the efforts of examinees in formulating their responses to a constructed-response exercise.
item response theory (IRT) A theory of test performance that emphasises the relationship between mean item score (P) and the level (θ) of the ability or trait measured by the item. In the case of an item scored 0 (incorrect response) or 1 (correct response), the mean item score equals the proportion of correct responses. In most applications, the mathematical function relating P to θ is assumed to be a logistic function that closely resembles the cumulative normal distribution.
job analysis Any of several methods of identifying the tasks performed on a job or the knowledge, skills, abilities, and other personal characteristics relevant to job performance.
licensing The issuing, usually by a government agency, of a credential indicating competence in some profession or client-centered activity. See also certification.
logical identifier A category of response styles that presents various choices and provides a mechanism for the test taker to select one or more choices.
logical group A category of response styles that allows a test taker to group objects together to indicate their choice e.g. drag-and-drop.
low-stakes test A test whose result has only minor or indirect consequences for examinees, programs, or institutions tested.
LTSC Learning Technology Standards Committee
LMS Learning Management System. The system responsible for the management of the learning experience.
mandated tests Tests that are administered because of a mandate from an external authority.
mastery test A test designed to indicate that the test taker has or has not mastered some domain of knowledge or skill. Mastery is generally indicated by a passing score or cut score. See cut score.
mean The arithmetic average of a set of scores, i.e. the sum of the scores divided by the number of scores.
meta-data Tags that describe the content of the associated data.
multiple choice A response style where the test taker selects one choice from several to indicate their opinion as to the correct answer.
multiple response A response style where the test taker selects more than one choice from several to indicate their opinion as to the correct answers. Multiple response questions have answer keys that describe various combinations of choices being right or wrong, with different possible outcomes for the different combinations of selections.
normalized standard score A derived test score in which a numerical transformation has been chosen so that the score distribution closely approximates a normal distribution, for some specific population.
numeric response A response style where the test taker enters a number to indicate their choice.
outcome The event that will occur after a question or questions have been answered (i.e. the item is scored, feedback is provided, etc.)
outcome evaluation The activity of a practitioner that evaluates the efficacy of an intervention.
participant A person that participates in a testing, assessment or survey process by answering questions.
participant mean The mean of the percentage score achieved by candidates. Used to determine validity of choices, within an item, by examining the choices selected by the higher and/or lower scoring candidates.
platform The computing environment that hosts the assessment system.
pilot test A test administered to a representative sample of test takers solely for the purpose of determining the properties of the test. See field test.
pre-test An administration of test items to a representative sample of test takers solely for the purpose of determining the characteristics of the item.
proctor A person who invigilates a test.
program evaluation The collection of systematic evidence to determine the extent to which a planned set of procedures obtains particular effects.
psychometric Properties of the item(s) and test(s) such as the distribution of item difficulty and discrimination indices.
psychometrician A qualified person who analyses the psychometrics of a test or item.
publish test To release a test from the development system to the production or release system.
questionnaire One or more questions presented and answered together.
Question & Test (QTI) The formal title for the IMS team and specification dealing with the development of the IMS specification for Question (item) and Test (assessment) Interoperability.
reliability The degree to which the scores of every individual are consistent over repeated applications of a measurement procedure and hence are dependable, and repeatable; the degree to which scores are free of errors of measurement.
rendering The process by which an item is presented on a computer screen.
respondent A person that participates in a survey process by answering questions.
response processing The process of evaluating the test taker's responses.
response type The method by which the test taker provides their answer to the question.
scheduling system The generic name for one or more computer programs that allow a user to track candidate appointments. Scheduling systems may also provide bill collection information, testing centre resource scheduling, and candidate demographics.
score Any specific number resulting from the assessment of an individual; a generic term applied for convenience to such diverse measures as test scores, estimates of latent variables, production counts, absence records, course grades, ratings, and so forth.
scoring formula The formula by which the raw score on a test is obtained. The simplest scoring formula is "raw score equals number correct". Other formulae differentially weight item responses, sometimes in an attempt to correct for guessing or non-response, by assigning zero weights to non-responses and negative weights to incorrect responses.
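A scoring formula of the kind described above can be sketched as follows (the function name and weights are illustrative; a simple correction-for-guessing scheme is assumed, with negative weights for incorrect responses and zero weights for non-responses):

```python
def formula_score(responses, correct=1.0, wrong=-0.25, blank=0.0):
    """Apply an illustrative scoring formula: +1 per correct response,
    -0.25 per incorrect response (a guessing correction), 0 per non-response."""
    weights = {"correct": correct, "wrong": wrong, "blank": blank}
    return sum(weights[r] for r in responses)

# 7 correct, 2 wrong, 1 blank under the guessing correction:
print(formula_score(["correct"] * 7 + ["wrong"] * 2 + ["blank"]))  # 6.5
```

With all weights left at their defaults of +1/0, the formula reduces to "raw score equals number correct".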
scoring protocol The established criteria, including rules, principles, and illustrations, used in scoring responses to individual items and clusters of items. The term usually refers to the scoring procedures for assessment tasks that do not provide enumerated responses from which test-takers make a choice.
scoring rubric The principles, rules, and standards used in scoring an examinee performance, product, or constructed response to a test item. Scoring rubrics vary in the degree of judgement entailed, in the number of distinct score levels defined, in the latitude given scorers for assigning intermediate or fractional score values, and in other ways.
section A collection of items (generated either statically or dynamically) normally focused at a particular objective. This is one of the IMS QTI core data objects.
selection response A response style where the test taker selects from a pull-down list.
sequence response A response style where the test taker orders a list of objects or text to formulate their response.
speeded test A test in which performance is measured primarily or exclusively by the time to perform a specified task, or the number of tasks performed in a given time, such as tests of typing speed and reading speed.
speededness A test characteristic, dictated by the test's time limits, that results in a test-taker's score being dependent on the rate at which work is performed as well as the correctness of the responses.
standard deviation A statistical measure of the spread of results. The higher the standard deviation, the greater the spread of data.
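The mean and standard deviation defined above can be computed with Python's standard library (the sample scores are illustrative; whether the population or sample form of the standard deviation is wanted depends on context, and the population form is shown here):

```python
import statistics

scores = [55, 60, 65, 70, 75]       # illustrative test scores
mean = statistics.mean(scores)      # sum of scores / number of scores
spread = statistics.pstdev(scores)  # population standard deviation
print(mean, spread)
```

A wider spread of scores yields a larger standard deviation; identical scores yield 0.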
standards-based assessment Assessments intended to represent systematically described content and performance standards.
static sequencing Sequencing in which the order of items or sections is fixed and does not vary with previous responses from a test taker.
string response A category of response styles that allows the test taker to enter text and/or numbers.
technical manual A publication prepared by test authors and publishers to provide technical and psychometric information on a test.
test center A facility that provides computers and proctoring services in which to conduct tests.
test center administration system The generic name for one or more computer programs used by a test center to administer tests to candidates. This may include, but is not limited to, starting tests, stopping tests and communicating item, test and results data back and forth.
test developer The person(s) or agency responsible for the construction of a test and for the documentation regarding its technical quality for an intended purpose.
test development The process through which a test is planned, constructed, evaluated and modified, including consideration of content, format, administration, scoring, item properties, scaling, and technical quality for its intended purpose.
test development system A generic name for one or more computer programs that allow a user to author and edit items (i.e. questions, choices, correct answers, scoring scenarios and outcomes) and maintain test definitions (i.e. how items are delivered within a test).
test driver A generic name for one or more computer programs that display test items on a computer screen, collect the candidate's responses, score them, and store the results.
test taker The participant or candidate taking a test.
test sponsor The person(s) or agency responsible for the choice and administration of a test, the interpretation of test scores produced in a given context, and for any decisions or actions that are based, in part, on test scores.
topic The subject matter of a question.
true/false A response style where the test taker selects from two choices, one labeled true and one false.
validation The process of investigation by which the validity of the proposed interpretation of test scores is evaluated.
validity An overall evaluation of the degree to which accumulated evidence and theory support specific interpretations of test scores.
W3C World Wide Web Consortium.
weight The number of points awarded for a given response.
weighted scoring A method of scoring a test in which the number of points awarded for a correct (or diagnostically relevant) response is not the same for all items in the test. In some cases, the scoring formula awards more points for one response to an item than for another.
word response A response style where the test taker enters a word to indicate their choice.
XML Extensible Mark-up Language, a specification produced by the World Wide Web Consortium (W3C).
XY Co-ordinate A category of response styles that presents one or more images on which the test taker selects a position to indicate their choice.

About This Document

 
Title IMS Question & Test Interoperability: An Overview
Editors Colin Smythe, Eric Shepherd, Lane Brewer, and Steve Lay
Version 1.2
Version Date 11 February 2002
Status Final Specification
Summary This document describes the IMS Question & Test Interoperability specification. It contains an overview of the full specification and explains the relationship between the different components of the specification.
Revision Information 22 January 2002
Purpose To provide an overview of the IMS Question & Test Interoperability specification.
Document Location http://www.imsglobal.org/question/v1p2/imsqti_oviewv1p2.html

List of Contributors

The following individuals contributed to the development of this document:

Russell Almond ETS, USA
Lane Brewer Galton Technologies Inc.
Todd Brewer Galton Technologies Inc.
Russell Grocott Can Studios Ltd.
Andy Heath CETIS/JISC, UK
Paul Hilton Can Studios Ltd.
Steven Lay University of Cambridge Local Examinations Syndicate, UK
Jez Lord Can Studios Ltd.
Richard Johnson Goal Design Inc.
John Kleeman Question Mark Computing Ltd.
Paul Roberts Question Mark Computing Ltd.
Nial Sclater CETIS/University of Strathclyde, UK
Eric Shepherd Question Mark Corporation
Colin Smythe Dunelm Services Ltd.

1
The version '1.x' nomenclature is used as a generic reference to all future releases that are derivative of the version 1.0 specification i.e. v1.1, v1.2, v1.3, etc.
2
For Versions 1.0 and 1.01 the specification bindings were based upon Document Type Definitions (DTDs) and XML Data Representations (XDRs). For version 1.1 and later the specification bindings are based upon XML Schema (XSD) and DTDs.
3
We recommend that new users of the IMS QTI specification start with the Best Practices & Implementation Guide. The examples in this document show how we intend the specification to be used whereas the other two documents (the Information Model and XML binding) contain the formal description of the structures, their syntax and semantics.

IMS Global Learning Consortium, Inc. ("IMS") is publishing the information contained in this IMS Question & Test Interoperability: An Overview ("Specification") for purposes of scientific, experimental, and scholarly collaboration only.

IMS makes no warranty or representation regarding the accuracy or completeness of the Specification.
This material is provided on an "As Is" and "As Available" basis.

The Specification is at all times subject to change and revision without notice.

It is your sole responsibility to evaluate the usefulness, accuracy, and completeness of the Specification as it relates to you.

IMS would appreciate receiving your comments and suggestions.

Please contact IMS through our website at http://www.imsglobal.org

Please refer to Document Name: IMS Question & Test Interoperability: An Overview, Date: 11 February 2002