1EdTech Question & Test Interoperability Implementation Guide
Version: 2.1 Final
Date Issued: 31 August 2012
Latest version: http://www.imsglobal.org/question/
IPR and Distribution Notices
Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the specification set forth in this document, and to provide supporting documentation.
1EdTech takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on 1EdTech's procedures with respect to rights in 1EdTech specifications can be found at the 1EdTech Intellectual Property Rights web page: http://www.imsglobal.org/ipr/imsipr_policyFinal.pdf.
Copyright © 2005-2012 1EdTech Consortium. All Rights Reserved.
Use of this specification to develop products or services is governed by the license with 1EdTech found on the 1EdTech website: http://www.imsglobal.org/license.html.
Permission is granted to all parties to use excerpts from this document as needed in producing requests for proposals.
The limited permissions granted above are perpetual and will not be revoked by 1EdTech or its successors or assigns.
THIS SPECIFICATION IS BEING OFFERED WITHOUT ANY WARRANTY WHATSOEVER, AND IN PARTICULAR, ANY WARRANTY OF NONINFRINGEMENT IS EXPRESSLY DISCLAIMED. ANY USE OF THIS SPECIFICATION SHALL BE MADE ENTIRELY AT THE IMPLEMENTER'S OWN RISK, AND NEITHER THE CONSORTIUM, NOR ANY OF ITS MEMBERS OR SUBMITTERS, SHALL HAVE ANY LIABILITY WHATSOEVER TO ANY IMPLEMENTER OR THIRD PARTY FOR ANY DAMAGES OF ANY NATURE WHATSOEVER, DIRECTLY OR INDIRECTLY, ARISING FROM THE USE OF THIS SPECIFICATION.
Join the discussion and post comments on the QTI Public Forum: http://www.imsglobal.org/community/forum/categories.cfm?catid=52
The 1EdTech Logo is a trademark of the 1EdTech Consortium, Inc. in the United States and/or other countries.
Document Name: 1EdTech Question & Test Interoperability (QTI) Implementation Guide Final v2.1 Revision: 31 August 2012
Table of Contents
- 1. Introduction
- 2. References
- 3. Items
- 3.1. How Big is an Item?
- 3.2. Simple Items
- 3.3. Composite Items
- 3.4. Response Processing
- 3.4.1. Custom Response Processing
- 3.5. Feedback
- 3.6. Adaptive Items
- 3.7. Item Templates
- 3.8. Item body content
- 3.8.1. Sharing text between different items
- 3.8.2. Stylesheets
- 3.8.3. Alternative Media
- 3.8.4. Alternative Renderings for Interactions
- 3.8.5. Using MathML
- 3.8.6. Number Formatting
- 3.8.7. Markup languages, HTML 'object' and HTML 5
- 4. Tests (Assessments)
- 5. Usage Data (Item Statistics)
- 6. Packaged Items, Tests and Metadata
- 7. Validation
- 8. About This Document
1. Introduction
This document contains examples of QTI Version 2 in action. Some of the examples are illustrated by screen shots. All screen shots are taken from a single delivery engine [SMITH] developed during the public draft review period of this specification. They are designed to illustrate how a system might implement the specification and are not intended to be prescriptive; other types of rendering are equally valid.
Each section of this document introduces a new aspect or feature of the specification, starting with the simplest constructions, and continuing to more intricate examples. For those who want to start with a very simple, but complete and usable test package, the CC QTI package is the recommended point of departure.
2. References
- Maxima, a Computer Algebra System
- XHTML 1.1: The Extensible HyperText Markup Language
- Extensible Markup Language (XML), Version 1.0 (second edition)
- XML 1.0 Specification Errata
3. Items
The main purpose of the QTI specification is to define an information model and associated binding that can be used to represent and exchange assessment items. For the purposes of QTI, an item is a set of interactions (possibly empty) collected together with any supporting material and an optional set of rules for converting the candidate's response(s) into assessment outcomes.
3.1. How Big is an Item?
The above definition covers a wide array of possibilities. At one extreme, a simple one-line question with a response box for entering an answer is clearly an item; at the other, an entire test comprising instructions, stimulus material and a large number of associated questions also satisfies the definition. In the first case QTI is an appropriate specification for representing the information; in the second case it is not.
To help determine whether a piece of assessment content comprising multiple interactions should be represented as a single assessmentItem (known as a composite item in QTI), examine the strength of the relationship between the interactions. If they can stand alone then they may be best implemented as separate items, perhaps sharing a piece of stimulus material such as a picture or a passage of text included as an object. If several interactions are closely related then they may belong in a composite item, but always consider how easy it is for the candidate to keep track of the state of an item that contains multiple related interactions. If the candidate has to scroll a window on their screen just to see all the interactions then the item may be better rewritten as several smaller related items. Consider also the difficulty faced by a candidate interacting with the item through a screen reader; an item with many points of interaction may be overwhelming in such an interface.
3.2. Simple Items
Simple items are items that contain just one point of interaction, for example a simple multi-choice or multi-response question. This section describes a set of examples illustrating simple items, one for each of the interaction types supported by the specification.
Figure 3.1 Unattended Luggage (Illustration)
This example illustrates the choiceInteraction being used to obtain a single response from the candidate.
Notice that the candidate's response is declared at the top of the item to be a single identifier and that the values this identifier can take are the values of the corresponding identifier attributes on the individual simpleChoices. The correct answer is included in the declaration of the response. In simple examples like this one there is just one response variable and one interaction but notice that the interaction must still be bound to the response declaration using the responseIdentifier attribute of choiceInteraction.
The item is scored using one of the standard response processing templates, Match Correct.
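The pattern described above can be sketched as follows. This is an illustrative fragment, not a copy of the example file; the choice identifiers and wording are hypothetical.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
  <!-- the correct answer is declared alongside the response -->
  <correctResponse>
    <value>ChoiceA</value>
  </correctResponse>
</responseDeclaration>

<itemBody>
  <!-- responseIdentifier binds the interaction to the declaration above -->
  <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
    <prompt>What should you do if you see unattended luggage?</prompt>
    <simpleChoice identifier="ChoiceA">Report it to the security services</simpleChoice>
    <simpleChoice identifier="ChoiceB">Look inside it</simpleChoice>
    <simpleChoice identifier="ChoiceC">Ignore it</simpleChoice>
  </choiceInteraction>
</itemBody>

<!-- standard template: SCORE = 1 when RESPONSE equals the declared correctResponse, 0 otherwise -->
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
```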
Unattended Luggage (with fixed choice)
This example is a variation on the previous example and illustrates the use of the fixed attribute to fix the location of one of the options in the item.
Composition of Water
Figure 3.2 Composition of Water (Illustration)
This example illustrates the choiceInteraction being used to obtain multiple responses from the candidate.
Notice that the candidate's response is declared to have multiple cardinality, so the correct value is composed of more than one value. This example could have been scored in the same way as the previous one, with 1 mark given for correctly identifying the two correct elements (and only those two) and 0 marks otherwise; instead, a method that gives partial credit has been adopted through the use of the standard response processing template Map Response. This template uses the RESPONSE variable's mapping to sum the values assigned to the individual choices. As a result, identifying (only) the two correct choices scores 2 points. Notice that selecting a third (incorrect) choice reduces the score by 2 (with the exception of Chlorine), resulting in 0, as unmapped keys take the defaultValue. To prevent an overall negative score, bounds are specified too. The penalty for selecting Chlorine is smaller, perhaps reflecting its role as a common water additive.
Also note that SCORE must be declared with baseType float because the Map Response template returns a float.
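A sketch of the declaration with its mapping follows; the choice identifiers are assumptions, but the values mirror the scoring behaviour described above.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="identifier">
  <correctResponse>
    <value>H</value>
    <value>O</value>
  </correctResponse>
  <!-- Map Response sums the mapped values of the selected choices;
       unmapped choices contribute defaultValue (-2), and the bounds
       clamp the total into the range 0..2 -->
  <mapping lowerBound="0" upperBound="2" defaultValue="-2">
    <mapEntry mapKey="H" mappedValue="1"/>
    <mapEntry mapKey="O" mappedValue="1"/>
    <!-- smaller penalty for Chlorine, a common water additive -->
    <mapEntry mapKey="Cl" mappedValue="-1"/>
  </mapping>
</responseDeclaration>

<!-- SCORE must be a float because Map Response returns a float -->
<outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
```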
This example illustrates the choiceInteraction being used to obtain multiple responses from the candidate with two correct sets of responses.
Grand Prix of Bahrain
Figure 3.3 Grand Prix of Bahrain (Illustration)
This example illustrates the orderInteraction. The candidate's response is declared to have ordered cardinality and the correct value is therefore composed of an ordered list of values. The shuffle attribute tells the delivery engine to shuffle the order of the choices before displaying them to the candidate. Note that the fixed attribute is used to ensure that the initially presented order is never the correct answer. The question uses the standard response processing template Match Correct to score 1 for a completely correct answer and 0 otherwise.
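The ordered declaration and interaction can be sketched like this; the driver identifiers are placeholders rather than the example's actual content.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="ordered" baseType="identifier">
  <!-- the correct value is an ordered list of identifiers -->
  <correctResponse>
    <value>DriverA</value>
    <value>DriverB</value>
    <value>DriverC</value>
  </correctResponse>
</responseDeclaration>

<orderInteraction responseIdentifier="RESPONSE" shuffle="true">
  <prompt>Place the drivers in their finishing order.</prompt>
  <!-- fixed="true" keeps this choice in place during shuffling, so the
       initially presented order can never be the correct answer -->
  <simpleChoice identifier="DriverC" fixed="true">Driver C</simpleChoice>
  <simpleChoice identifier="DriverA">Driver A</simpleChoice>
  <simpleChoice identifier="DriverB">Driver B</simpleChoice>
</orderInteraction>
```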
Figure 3.4 Shakespearian Rivals (Illustration)
This example illustrates the associateInteraction. The candidate's response is declared with the pair base-type because the task involves pairing up the choices. The maxAssociations attribute on associateInteraction controls the maximum number of pairings the candidate is allowed to make overall. Individually, each choice has a matchMax attribute that controls how many pairings it can be part of. The number of associations that can be made in an associateInteraction is therefore constrained in two ways; in this case they have the same overall effect, but this needn't be the case.
The associations created by the candidate are not directed, the pair base-type is an undirected pair so when comparing responses "A P" would be treated as a match for "P A" - the distinction has no meaning to the interaction even though the physical process used by the candidate might be directional, for example, drawing a line between the choices.
Characters and Plays
Figure 3.5 Characters and Plays (Illustration)
This example illustrates the matchInteraction. This time the candidate's response is declared with the directedPair base-type because the task involves pairing choices from a source set with choices in a target set: in this case, characters from plays with the names of the plays from which they are drawn. Notice that matchMax on the characters is one because each character can be in only one play (in fact, Shakespeare often reused character names, but we digress), while it is four on the plays because each play could contain all the characters. For example, Demetrius and Lysander were both in A Midsummer-Night's Dream, so in the correct response that play has two associations. In the mapping used for response processing these two associations have been awarded only half a mark each.
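In outline, the structure looks like this. The identifiers and the second play are illustrative assumptions; the matchMax values and half-mark mapping follow the description above.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="directedPair">
  <correctResponse>
    <value>Demetrius Dream</value>
    <value>Lysander Dream</value>
  </correctResponse>
  <!-- two associations to the same play earn half a mark each -->
  <mapping defaultValue="0">
    <mapEntry mapKey="Demetrius Dream" mappedValue="0.5"/>
    <mapEntry mapKey="Lysander Dream" mappedValue="0.5"/>
  </mapping>
</responseDeclaration>

<matchInteraction responseIdentifier="RESPONSE" shuffle="true" maxAssociations="4">
  <simpleMatchSet>
    <!-- matchMax="1": each character belongs to exactly one play -->
    <simpleAssociableChoice identifier="Demetrius" matchMax="1">Demetrius</simpleAssociableChoice>
    <simpleAssociableChoice identifier="Lysander" matchMax="1">Lysander</simpleAssociableChoice>
  </simpleMatchSet>
  <simpleMatchSet>
    <!-- matchMax="4": a play could contain all the characters -->
    <simpleAssociableChoice identifier="Dream" matchMax="4">A Midsummer-Night's Dream</simpleAssociableChoice>
    <simpleAssociableChoice identifier="Tempest" matchMax="4">The Tempest</simpleAssociableChoice>
  </simpleMatchSet>
</matchInteraction>
```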
Richard III (Take 1)
Figure 3.6 Richard III (Illustration 1)
This example illustrates the gapMatchInteraction. This interaction is similar to matchInteraction except that the choices in the second set are gaps in a given passage of text and the task involves selecting choices from the first set and using them to fill the gaps. The same attributes are involved in controlling which, and how many, pairings are allowed though there is no matchMax for the gaps because they can only ever have one associated choice. The scoring is again done with a mapping.
Richard III (Take 2)
Figure 3.7 Richard III (Illustration 2)
The Richard III (Take 1) example above demonstrated filling gaps from a shared stock of choices. In cases where you have only one gap, or multiple gaps that are to be filled independently, each from its own list of choices, you use an inlineChoiceInteraction instead.
Richard III (Take 3)
Figure 3.8 Richard III (Illustration 3)
The third, and final, method of filling gaps is to use a textEntryInteraction, which requires the candidate to construct their own response, typically by typing it in. Notice that a guide to the amount of text to be entered is given in the expectedLength attribute, though candidates should be allowed to enter more if desired.
The scoring for this item could simply have matched the correct response but instead uses a mapping to enable partial credit for york (spelled without a capital letter). When mapping strings, the mapping always takes place case sensitively. This example also illustrates the use of a mapping when the response has only single cardinality.
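A sketch of the case-sensitive string mapping described above; the surrounding sentence is paraphrased from the play and the mark values are assumptions.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="string">
  <correctResponse>
    <value>York</value>
  </correctResponse>
  <!-- string mapKeys are compared case sensitively, so the
       lower-case spelling can be given partial credit explicitly -->
  <mapping defaultValue="0">
    <mapEntry mapKey="York" mappedValue="1"/>
    <mapEntry mapKey="york" mappedValue="0.5"/>
  </mapping>
</responseDeclaration>

<p>Now is the winter of our discontent made glorious summer by this sun of
  <textEntryInteraction responseIdentifier="RESPONSE" expectedLength="15"/>.</p>
```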
Writing a Postcard
Figure 3.9 Writing a Postcard (Illustration)
If an extended response is required from the candidate then the extendedTextInteraction is appropriate. Notice that this example does not contain a responseProcessing section because the scoring of extended text responses is beyond the scope of this specification.
Writing a Postcard with Rubric information
A rubricBlock can be used to add instructions about the way the item should be scored by a human scorer. The view attribute is used to indicate that the information should only be made visible to users in certain roles.
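A sketch of scorer-only rubric material; the rubric wording and interaction attributes are illustrative.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="string"/>

<itemBody>
  <!-- view="scorer": visible to human scorers, hidden from candidates -->
  <rubricBlock view="scorer">
    <p>Award one mark for each relevant point made, up to a maximum of five.</p>
  </rubricBlock>
  <extendedTextInteraction responseIdentifier="RESPONSE" expectedLines="5">
    <prompt>Write a postcard to your friend.</prompt>
  </extendedTextInteraction>
</itemBody>
```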
Figure 3.10 Olympic Games (Illustration)
This example illustrates the hottextInteraction. This interaction presents a passage of text with some hot words/phrases highlighted and selectable by the candidate. It differs from the choiceInteraction in that the choices have to be presented in the context of the surrounding text.
Figure 3.11 UK Airports in Unanswered State (Illustration)
Figure 3.12 UK Airports in Answered State (Illustration)
This example illustrates the hotspotInteraction. This is very similar to the hottextInteraction except that instead of having to select hot areas embedded in a passage of text the candidate has to select hotspots of a graphical image.
Note that the response is of type identifier and that each individual hotspotChoice associates an identifier with an area of the image.
Where is Edinburgh?
Figure 3.13 Where is Edinburgh? (Illustration)
This example illustrates the selectPointInteraction. The RESPONSE is declared to be a single point that records the coordinates of the point on the map marked by the candidate. The correctResponse is given in the declaration too, however, for this type of question it is clearly unreasonable to expect the candidate to click exactly on the correct point and there would be too many values to build a workable mapping. To get around this problem an areaMapping is used instead, this allows one or more areas of the coordinate space to be mapped to a numeric value (for scoring). In this example, just one area is defined: a circle with radius 8 pixels centred on the correct (optimal) response. The standard response processing template Map Response Point is used to set the score using the areaMapping.
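The areaMapping approach can be sketched as follows; the coordinates here are hypothetical, not taken from the example file.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="point">
  <correctResponse>
    <value>102 113</value>
  </correctResponse>
  <!-- any point inside a circle of radius 8 pixels centred on the
       optimal response maps to 1; everything else takes defaultValue -->
  <areaMapping defaultValue="0">
    <areaMapEntry shape="circle" coords="102,113,8" mappedValue="1"/>
  </areaMapping>
</responseDeclaration>

<!-- standard template: sets SCORE from the areaMapping -->
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response_point"/>
```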
Figure 3.14 Flying Home (Illustration)
This example illustrates the graphicOrderInteraction. The task is similar to Grand Prix of Bahrain except that the choices are presented as hotspots on a graphic image.
Figure 3.15 Low-cost Flying Unanswered State (Illustration)
Figure 3.16 Low-cost Flying Answered State (Illustration)
This example illustrates the graphicAssociateInteraction. The task is similar to Shakespearian Rivals except that the choices are presented as hotspots on a graphic image. Notice that matchMax is set to three for each of the hotspots allowing the candidate to associate each hotspot up to three times (in other words, with all the other hotspots if desired).
Figure 3.17 Airport Tags (Illustration)
This example illustrates the graphicGapMatchInteraction. The task is similar to Richard III (Take 1) except that the first set of choices are images and the second set are gaps within a larger background image. In a graphical system that supports dragging, this would typically be implemented using drag and drop.
Figure 3.18 Airport Locations (Illustration)
This example illustrates the positionObjectInteraction. It has a lot in common with Where is Edinburgh? except that the 'point' is selected by positioning a given object on the image (the stage). Notice that the stage is specified outside of the interaction. This allows a single stage to be shared amongst multiple position object interactions.
Figure 3.19 Jedi Knights (Illustration)
This example illustrates the sliderInteraction. It is used in this example to obtain a percentage estimate. The interaction is bound to an integer response which can then be scored using the standard Map Response response processor.
La casa di Giovanni
This example illustrates the drawingInteraction. Notice that the RESPONSE is declared to be of type file. The drawing takes place on a required pre-supplied canvas, in the form of an existing image, which is also used to determine the appropriate size, resolution and image type for the candidate's response.
The Chocolate Factory (Take 1)
This example illustrates the uploadInteraction. The RESPONSE is again declared to be of type file. The candidate is provided with a mechanism to upload their own spreadsheet in response to the task; response processing for file-based responses is beyond the scope of this specification.
3.3. Composite Items
Composite items are items that contain more than one point of interaction. Composite items may contain multiple instances of the same type of interaction or have a mixture of interaction types.
This text comprehension example combines choiceInteraction, inlineChoiceInteraction and gapMatchInteraction in a single item, sharing one text. It also makes use of inline feedback.
The Chocolate Factory (Take 2)
This example extends The Chocolate Factory (Take 1) with an additional text response field that can be marked objectively.
3.4. Response Processing
So far, all the examples have been scored using one of the standard response processing templates, or have not been suitable for objective scoring. For simple scenarios, use of the response processing templates is encouraged as it improves interoperability with systems that only cater for a limited number of fixed scoring methods.
Many items, particularly those involving feedback, will require the use of the more general response processing model defined by this specification. The standard templates are themselves defined using this more general response processing language.
Grand Prix of Bahrain (Partial Scoring)
This example extends Grand Prix of Bahrain to include partial scoring. With three drivers to place on the podium there are 6 possible responses that the candidate can make, only one of which is correct. Previously, the correct answer scored 1 and all other responses scored 0. Now, the correct answer scores 2. Correctly placing Michael Schumacher first scores 1 if the other two drivers have been muddled up. Placing Barichello or Button first scores 0 (all other combinations).
Response processing consists of a sequence of rules that are carried out, in order, by the response processor. A responseCondition rule is a special type of rule which contains sub-sequences of rules divided into responseIf, responseElseIf and responseElse sections. The response processor evaluates the expressions in the responseIf and responseElseIf elements to determine which sub-sequence to follow. In this example, the responseIf section is followed only if the variable with identifier RESPONSE matches the correct response declared for it. The responseElseIf section is followed if RESPONSE matches the response explicitly given (which places the correct driver 1st but confuses the other two). Finally, the responseElse section is followed if neither of the previous two apply. The responseElse section has no corresponding expression of course. The setOutcomeValue element is just a responseRule that tells the processor to set the value of the specified outcome variable to the value of the expression it contains.
The variable, correct and baseValue elements are examples of simple expressions, in other words, expressions that are indivisible. In contrast, the match and ordered elements are examples of operators. Operators are expressions that combine other expressions to form new values. For example, match forms a boolean depending on whether or not two expressions have matching values.
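The condition structure described above can be sketched like this. The driver identifiers are placeholders; the scores (2, 1, 0) follow the description of the partial scoring scheme.

```xml
<responseProcessing>
  <responseCondition>
    <responseIf>
      <!-- match is an operator combining two simple expressions -->
      <match>
        <variable identifier="RESPONSE"/>
        <correct identifier="RESPONSE"/>
      </match>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">2</baseValue>
      </setOutcomeValue>
    </responseIf>
    <responseElseIf>
      <!-- correct driver first, the other two muddled up -->
      <match>
        <variable identifier="RESPONSE"/>
        <ordered>
          <baseValue baseType="identifier">DriverA</baseValue>
          <baseValue baseType="identifier">DriverC</baseValue>
          <baseValue baseType="identifier">DriverB</baseValue>
        </ordered>
      </match>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">1</baseValue>
      </setOutcomeValue>
    </responseElseIf>
    <responseElse>
      <!-- all other combinations -->
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">0</baseValue>
      </setOutcomeValue>
    </responseElse>
  </responseCondition>
</responseProcessing>
```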
3.4.1. Custom Response Processing
The customOperator allows the inclusion of non-QTI APIs in response processing. In general, such APIs are likely to be particular to a specific software library or programming language. For that reason, it is difficult to predict what form such custom operators will take, and, by extension, how to generalise functions or syntax between different custom operators.
By way of illustration, the following fragment illustrates the use of the Maxima [Maxima] engine as a response processing library via the customOperator element.
<setOutcomeValue identifier="oDummy">
  <customOperator class="org.qtitools.mathassess.ScriptRule"
      ma:simplify="false" ma:syntax="text/x-maxima">
    <baseValue baseType="string"><![CDATA[
oInput:RESPONSE;
equalp(p,q):= block([simp:false],
  if p=q then return(true) else return(false) )$
isEqual: equalp(RESPONSE,mAns);
equivp(p,q):= block([simp:true],
  if is(equal(p,q))=true then return(true) else return(false) )$
isEquiv: equivp(RESPONSE,mAns);
isRecip: equivp(RESPONSE,1/mAns);
numOrig: equivp(num(RESPONSE),mNum);
denomOrig: equivp(denom(RESPONSE),mDen);
isOrig: if (numOrig and denomOrig) then true else false;
numR1: equalp(num(RESPONSE),iN^iC);
numR2: equivp(num(RESPONSE),1);
denR1: equivp(denom(RESPONSE),1);
denR2: equalp(denom(RESPONSE),iN^(-iC));
negPower: is(ev(-iC,numer,simp)>0);
isSimp: equalp(RESPONSE,ev(RESPONSE,simp));
isNotSimp: if((numR2 and denR2 and not negPower) or (isEquiv and not isSimp)) then true else false;
isOK: if ((numR1 and denR1) or (negPower and numR2 and denR2)) then true else false;
isAdded: equivp(RESPONSE,mAdd);
isSubtracted: equivp(RESPONSE,mSub);
isMultiplied: equivp(RESPONSE,mMult);
]]></baseValue>
  </customOperator>
</setOutcomeValue>
In this case, a customOperator is used as a very slim container for what is effectively a complete script in Maxima's language. A QTI processor designed to work with this customOperator could pass the script verbatim to Maxima, and use its response to set the 'oDummy' outcome value.
Apply the sine rule
This is a simpler example that makes use of the same extension mechanism (inside the package in this location:
3.5. Feedback
Feedback consists of material presented to the candidate conditionally based on the result of responseProcessing. In other words, feedback is controlled by the values of outcome variables. There are two types of feedback material, modal and inline. Modal feedback is shown to the candidate after response processing has taken place and before any subsequent attempt or review of the item. Inline feedback is embedded into the itemBody and is only shown during subsequent attempts or review.
In this example, a straightforward multi-choice question declares an additional outcome variable called FEEDBACK which is used to control the visibility of just modalFeedback.
In this example, the feedback appears within the question, right beside the text of the selected option. The content of feedbackInline is restricted to material which can be displayed "inline", i.e. without moving to a new block or paragraph, so it behaves like the HTML "span" element.
3.6. Adaptive Items
Adaptive items are a feature that allows an item to be scored adaptively over a sequence of attempts. This allows the candidate to alter their answer following feedback or to be posed additional questions based on their current answer. Response processing works differently for adaptive items. Normally (for non-adaptive items) each attempt is independent and the outcome variables are set to their default values each time responseProcessing is carried out. For adaptive items, the outcome variables retain their values across multiple attempts and are only updated by subsequent response processing. This difference is indicated by the value of the adaptive attribute of the assessmentItem. Adaptive items must of course provide feedback to the candidate in order to allow them to adjust their response(s).
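In outline, an adaptive item is distinguished only by its adaptive attribute; the identifier and title below are hypothetical. In an adaptive item, response processing may also set the built-in outcome variable completionStatus to end the session.

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="adaptiveSketch" title="Adaptive sketch"
    adaptive="true" timeDependent="false">
  <!-- outcome variables keep their values between attempts;
       each round of response processing updates them incrementally -->
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float">
    <defaultValue><value>0</value></defaultValue>
  </outcomeDeclaration>
  <!-- itemBody with interactions and feedback omitted -->
  <responseProcessing>
    <!-- a rule like this signals that the adaptive session is over -->
    <setOutcomeValue identifier="completionStatus">
      <baseValue baseType="identifier">completed</baseValue>
    </setOutcomeValue>
  </responseProcessing>
</assessmentItem>
```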
Using feedbackBlock to show a solution
In this example, the feedback is used to contain a solution which is displayed when the user clicks the "Show Solution" button.
A randomised version of this question is also available at examples/items/Example03-feedbackBlock-solution-random.xml. The randomization does not affect the display of the solution in this example.
Using templateBlock and templateInline inside feedbackBlock to adjust content
The feedbackBlock element can contain subsidiary feedback elements, "template" elements and interactions alongside any of the HTML elements. In this question, the values of template variables are calculated within the templateProcessing element, and the solution is different depending on the value of the variable iA; if iA=90, the right angle in the triangle makes the question easier.
The method for displaying the solution is as in the previous example; here we concentrate on the template elements within the SOLUTION feedbackBlock.
Using feedbackBlock to change the appearance of a question
In this example, the "feedback" forms part of the question. In adaptive questions, feedbackBlock and feedbackInline elements can contain interactions:
Monty Hall (Take 1)
This example takes a famous mathematical problem and presents it to the user as a game. The feedbackBlock element, in association with a number of outcome variables, is used to control the flow of the story, from the opening gambit through to whether or not you have won a prize. When the story concludes you are asked about the strategy you adopted. Notice that the scoring for the question is based on the actual strategy you took (one mark) and your answer to the final question (two marks). If you choose a bad strategy initially you are always punished by losing the game. If you feel that this is cheating, take a look at a more realistic version of the same question which combines adaptivity with the powerful feature of item templates: Monty Hall (Take 2).
Figure 3.20 Monty Hall First Attempt (Illustration)
Figure 3.21 Monty Hall Second Attempt (Illustration)
Figure 3.22 Monty Hall Third Attempt (Illustration)
Figure 3.23 Monty Hall Final Feedback (Illustration)
In the previous example, the default method of ending an attempt was used to progress through the item; however, sometimes it is desirable to provide alternative ways for the candidate to end an attempt. The most common requirement is the option of requesting a hint instead of submitting a final answer. QTI provides a flexible way to accommodate these alternative paths through the special purpose endAttemptInteraction.
Mexican President with hints
In this example, Mexican President is extended to provide both feedback and the option of requesting a hint. The endAttemptInteraction controls the value of the response variable HINTREQUEST - which is true if the attempt ended with a request for a hint and false otherwise.
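A sketch of the hint mechanism; the FEEDBACK outcome and HINT value are illustrative names, not taken from the example file.

```xml
<responseDeclaration identifier="HINTREQUEST" cardinality="single" baseType="boolean"/>
<outcomeDeclaration identifier="FEEDBACK" cardinality="single" baseType="identifier"/>

<itemBody>
  <!-- ends the attempt with HINTREQUEST = true instead of submitting an answer -->
  <endAttemptInteraction responseIdentifier="HINTREQUEST" title="Show Hint"/>
</itemBody>

<responseProcessing>
  <responseCondition>
    <responseIf>
      <!-- HINTREQUEST is a boolean, so it can be tested directly -->
      <variable identifier="HINTREQUEST"/>
      <setOutcomeValue identifier="FEEDBACK">
        <baseValue baseType="identifier">HINT</baseValue>
      </setOutcomeValue>
    </responseIf>
    <responseElse>
      <!-- normal scoring rules go here -->
    </responseElse>
  </responseCondition>
</responseProcessing>
```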
3.7. Item Templates
Item templates are a new feature of version 2 that allows many similar items to be defined using the same assessmentItem.
Digging a Hole
This example contains a simple textEntryInteraction but the question (and the correct answer) varies for each item session. In addition to the usual RESPONSE and SCORE variables, a number of template variables are declared. Their values are set by templateProcessing rules. Template processing is very similar to response processing: the same condition model and expression language are used, but templateRules set the values of template variables rather than outcome variables. Notice that the declaration of RESPONSE does not declare a value for the correctResponse because the answer depends on the values chosen for A and B; instead a special rule, setCorrectResponse, is used in the template processing section.
The randomInteger element represents a simple expression that selects a random integer from a specified range. The random element represents an operator that selects a random value from a container.
The itemBody displays the values of the template variables using the printedVariable element.
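The template machinery described above can be sketched as follows; the variable names and ranges are illustrative.

```xml
<templateDeclaration identifier="A" cardinality="single" baseType="integer"/>
<templateDeclaration identifier="B" cardinality="single" baseType="integer"/>

<templateProcessing>
  <!-- randomInteger is a simple expression selecting from a range -->
  <setTemplateValue identifier="A">
    <randomInteger min="2" max="9"/>
  </setTemplateValue>
  <setTemplateValue identifier="B">
    <randomInteger min="2" max="9"/>
  </setTemplateValue>
  <!-- the correct response depends on the chosen values, so it is
       set here rather than declared with the response -->
  <setCorrectResponse identifier="RESPONSE">
    <sum>
      <variable identifier="A"/>
      <variable identifier="B"/>
    </sum>
  </setCorrectResponse>
</templateProcessing>

<!-- in the itemBody, printedVariable displays the chosen values -->
<p>What is <printedVariable identifier="A"/> + <printedVariable identifier="B"/>?</p>
```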
Sometimes it is desirable to vary some aspect of an item that cannot be represented directly by the value of a template variable. For example, in "Mick's Travels", the itemBody contains an illustration that needs to be varied according to the value chosen for a template variable. To achieve this three templateInline elements are used, each one enclosing a different img element. This element (along with the similar templateBlock) has attributes for controlling its visibility with template variables in the same way as outcome variables are used to control the visibility of feedback.
Item templates can be combined with adaptive items too.
Monty Hall (Take 2)
In Monty Hall (Take 1) we cheated by fixing the game so that the wrong strategy always lost the candidate the prize (and the first mark). In this version we present a more realistic version of the game using an item template. The same outcome variables are defined to control the story and the feedback given but this time a templateDeclaration is used to declare the variable PRIZEDOOR. The templateProcessing rules are then used to preselect the winning door at random making the game more realistic. The responseProcessing rules are a little more complicated as the value of PRIZEDOOR must be checked (a) to ensure that Monty doesn't open the prize winning door after the candidate's first choice and (b) to see if the candidate has actually won the "fantastic prize".
In this example, using the correct strategy will still lose the candidate the prize 1/3 of the time (though they always get the mark). Do you think that the outcome of the game will affect the response to the final strategy question?
The number divisors
This example makes extensive use of templates to test knowledge of calculus. It has modal feedback and includes some MathML.
Test of statistics functions
An example that uses templates extensively and uses many common numeric operators in the response processing. It has modal feedback and includes some MathML.
Product of a fraction by a number
This is another numeric example that makes use of templates, but is notable for its use of templateConstraint to determine variables at runtime.
3.8. Item body content
3.8.1. Sharing text between different items
It is often desirable to ask a number of questions all related to some common stimulus material such as a graphic or a passage of text. Graphic files are always stored separately and referenced from within the markup using img or object elements, making them easy to reference from multiple items; passages of text can be treated the same way. The object element allows externally defined passages (either plain text files or HTML markup) to be included in the itemBody.
The following two examples demonstrate this use of a shared material object.
Orkney Islands Q1
Orkney Islands Q2
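Sketched in outline, each item can pull in the same passage; the file names, class names and question content below are hypothetical.

```xml
<itemBody>
  <div class="question">
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <prompt>According to the passage, how many of the islands are inhabited?</prompt>
      <simpleChoice identifier="ChoiceA">About 20</simpleChoice>
      <simpleChoice identifier="ChoiceB">About 70</simpleChoice>
    </choiceInteraction>
  </div>
  <div class="stimulus">
    <!-- the shared passage is stored once and referenced from several items;
         the element content is the fallback if the object cannot be rendered -->
    <object type="text/html" data="shared/orkney.html">Orkney Islands passage</object>
  </div>
</itemBody>
```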
3.8.2. Stylesheets
Associating a style sheet with an item simply involves using the stylesheet element within an assessmentItem. The Orkney Islands examples above use this element to associate a stylesheet written in the CSS2 language. Notice that the class attribute is used to divide the item's body into two divisions that are styled separately, the shared material appearing in a right-hand pane and the instructions and question in a left-hand pane.
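The association itself is a single element near the top of the item; the file name and identifiers here are hypothetical.

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="orkney1" title="Orkney Islands Q1"
    adaptive="false" timeDependent="false">
  <!-- stylesheet elements appear before the itemBody; more than one may be given -->
  <stylesheet type="text/css" href="shared/orkney.css"/>
  <itemBody>
    <div class="question"><!-- instructions and interaction --></div>
    <div class="stimulus"><!-- shared passage --></div>
  </itemBody>
</assessmentItem>
```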
Orkney Islands Stylesheet
This stylesheet also demonstrates a possible approach to providing absolute positioning in QTI version 2 - something which is no longer supported directly by the item information model. In version 1, material elements could have their coordinates set explicitly (see the Migration Guide for more information about migrating content that used this feature).
3.8.3. Alternative Media
The XHTML object element is designed to support the graceful degradation of media objects. The HTML 4.01 specification (the basis for [XHTML]) says "If the user agent is not able to render the object for whatever reason (configured not to, lack of resources, wrong architecture, etc.), it must try to render its contents."
Writing a Postcard (Take 2)
This example is the same as Writing a Postcard except that the picture of the postcard is provided in two different formats: first as an encapsulated PostScript (EPS) file and then, alternatively, as a PNG bitmapped image. Finally, if the delivery engine can handle neither of the offered image types, the text of the postcard can be displayed directly. Item authors should consider using this technique for maintaining images suitable for a variety of different output media, e.g., paper, high-resolution display, low-resolution display, etc.
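The fallback chain described above can be sketched with nested object elements (file names and postcard text are hypothetical):

```xml
<!-- tried first: EPS, suitable for print and high-resolution output -->
<object data="images/postcard.eps" type="application/postscript">
  <!-- fallback: PNG bitmap for ordinary screen rendering -->
  <object data="images/postcard.png" type="image/png">
    <!-- final fallback: the text of the postcard itself -->
    Dear John, having a lovely time by the sea...
  </object>
</object>
```

A delivery engine works from the outside in, rendering the first alternative it can handle.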
3.8.4. Alternative Renderings for Interactions
The Orkney Islands Stylesheet illustrates the way styles can be applied to the XHTML elements that define the structure of the item's body. The class attribute can also be applied to interactions, and many of the common formatting concepts will still be applicable (font size, colour, etc.). Delivery engines may also use this attribute to choose between multiple ways of presenting the interaction to the candidate, though a vocabulary for class attributes on interactions is currently beyond the scope of this specification.
The QTI Questionnaire
This example illustrates an item that presents a set of choices commonly known as a Likert scale, used to obtain responses to attitude-based questions. The question is represented by a normal choiceInteraction, but the class attribute of the itemBody is set to likert to indicate to the delivery engine that it should use an appropriate layout for the question, e.g., placing the prompt and the choices on a single line with each choice at a fixed tab stop. By applying the style class to the whole of the item body, a delivery engine that renders multiple Likert items together might be able to choose a more compact rendering. Note that in this example the responseProcessing is absent; there is no right answer!
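The essential structure is a plain choiceInteraction inside an itemBody carrying the likert class. A minimal sketch (the prompt and choice identifiers are hypothetical, and the responseDeclaration is omitted for brevity):

```xml
<itemBody class="likert">
  <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
    <prompt>The QTI specification is easy to read.</prompt>
    <simpleChoice identifier="SD">Strongly disagree</simpleChoice>
    <simpleChoice identifier="D">Disagree</simpleChoice>
    <simpleChoice identifier="N">Neither agree nor disagree</simpleChoice>
    <simpleChoice identifier="A">Agree</simpleChoice>
    <simpleChoice identifier="SA">Strongly agree</simpleChoice>
  </choiceInteraction>
</itemBody>
```

A delivery engine that does not recognise the likert class simply falls back to its normal choice rendering.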
3.8.5. Using MathML
This simple example illustrates the inclusion of a mathematical expression marked up with MathML into an item.
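MathML is embedded directly inline within the item body, in its own namespace. A minimal sketch (the equation itself is an illustrative assumption):

```xml
<itemBody>
  <p>Solve the following equation for
    <m:math xmlns:m="http://www.w3.org/1998/Math/MathML">
      <m:mi>x</m:mi>
    </m:math>:
    <m:math xmlns:m="http://www.w3.org/1998/Math/MathML">
      <m:mrow>
        <m:msup><m:mi>x</m:mi><m:mn>2</m:mn></m:msup>
        <m:mo>-</m:mo><m:mn>4</m:mn><m:mo>=</m:mo><m:mn>0</m:mn>
      </m:mrow>
    </m:math>
  </p>
  <!-- interaction(s) follow -->
</itemBody>
```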
3.8.6. Number Formatting
The format attribute of printedVariable profiles the formatting rules described by the C standard. The following table illustrates the main features. Spaces are shown as the '_' (underscore) character to improve readability.
|Format specification||Input||Formatted output||Notes|
|%i||-987||-987||Simple signed decimal format.|
|%.4i||-987||-0987||Precision specifies the minimum number of digits in i, o, x and X formats and defaults to no minimum.|
|%.0i||0||||When formatting zero with a precision of 0, no digits are output (i, o, x and X formats only).|
|%8i||987||_____987||Field-width set manually to 8 results in five leading spaces.|
|%2i||987||987||Field-width set manually to 2 is insufficient so ignored.|
|%-8i||987||987_____||Hyphen flag forces left field alignment resulting in five trailing spaces.|
|%08i||987||00000987||Zero flag forces zero-padding resulting in five leading zeros.|
|%+i||987||+987||Plus flag prefixes positive numbers with a plus sign (excluding o, x and X formats).|
|%_i||987||_987||Space flag prefixes positive numbers with a space (excluding o, x and X formats).|
|%o||987||1733||Octal format; number must be positive.|
|%#o||987||01733||# flag ensures at least one leading 0 for o format.|
|%x||987||3db||Hex format (lower case); number must be positive.|
|%#x||987||0x3db||# flag always displays leading 0x for x format.|
|%X||987||3DB||Hex format (upper case); number must be positive.|
|%#X||987||0X3DB||# flag always displays leading 0X for X format.|
|%f||987.654||987.654000||The precision specifies number of decimal places to display for f format and defaults to 6.|
|%.2f||987.654||987.65||Precision set manually to 2.|
|%#.0f||987||987.||# flag forces trailing point for f, e, E, g, G, r and R formats.|
|%e||987.654||9.876540e+02||Forces use of scientific notation. The precision specifies number of figures to the right of the point for e and E formats and defaults to 6.|
|%.2e||987.654||9.88e+02||Precision set manually to 2.|
|%E||987.654||9.876540E+02||Forces use of scientific notation (upper case form).|
|%g||987654.321||987654||Rounded to precision significant figures (default 6) and displayed in normal form when precision is greater than or equal to the number of digits to the left of the point.|
|%g||987||987||Trailing zeros to the right of the point are removed.|
|%g||987654321||9.87654e+08||Scientific form used when required.|
|%g||0.0000987654321||9.87654e-05||Scientific form also used when 4 or more leading zeros are required to the right of the point.|
|%#g||987||987.000||# flag also forces display of trailing zeros (up to precision significant figures) in g and G formats.|
|%G||0.0000987654321||9.87654E-05||As for g but uses upper case form.|
|%r||0.0000987654321||0.0000987654||The same as g except that leading zeros to the right of the point are not limited.|
|%R||0.0000987654321||0.0000987654||The same as G except that leading zeros to the right of the point are not limited.|
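In an item, these format specifications are applied through the format attribute of printedVariable. A minimal sketch (the variable name AREA and its value are hypothetical):

```xml
<!-- assuming AREA is a declared float variable holding 987.654;
     %.2f renders it to two decimal places, i.e. 987.65 -->
<p>The area is
  <printedVariable identifier="AREA" format="%.2f"/>
  square metres.</p>
```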
3.8.7. Markup languages, HTML 'object' and HTML 5
Specialized markup languages such as Chemical Markup Language [CML] exist for many domains that have a need for computer aided assessment. For that reason, integrating such markup languages with QTI seems attractive. One such language, MathML, is supported within the item bodies of QTI 2, but no others are. The main reason is that MathML is natively supported by many web browsers, while other specialized markup languages are not.
One other language that is widely supported by browsers is Scalable Vector Graphics [SVG]. While it is not supported in QTI 2 item bodies at this stage, it is easy to embed via HTML's 'object' tag. Domain-specific languages such as CML can often be rendered as SVG, thus providing a convenient way to integrate such material with QTI 2. At present, QTI's printedVariable can only be used within MathML and HTML. Other markup languages may be supported in a future version of QTI.
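Embedding SVG via the object element follows the same pattern as any other media type, with text content as a fallback (the file name and description are hypothetical):

```xml
<object data="images/molecule.svg" type="image/svg+xml">
  A diagram of the molecule (text fallback for delivery engines
  that cannot render SVG).
</object>
```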
Another feature being considered for future inclusion is the use of SVG and other languages via HTML 5's 'embed' tag [HTML5]. The use of this tag is not currently supported, either within or outside HTML's 'object' tag.
4. Tests (Assessments)
Interaction Mix (Sachsen)
A test with a representative mixture of widely used interaction types. The test has no feedback at either the test or item level, is in a complete package and is in German.
Simple Feedback Test
This example demonstrates a straightforward use of feedback at the test level. It is a complete package, with manifest and items.
Feedback Examples Test
In this example, the feedback at the end of the test depends on the scores obtained in the four sections of the test. There is extensive inline documentation in the test. The test is part of a complete package, with manifest and items.
Sets of Items With Leading Material
This example illustrates a test consisting of a set of three items (rtest01-set01.xml, rtest01-set02.xml, rtest01-set03.xml) sharing a single fragment of leading material (rtest01-fragment.xml). The fragment is included in each of the assessmentItems in the set by using the XInclude mechanism.
The submission mode is set to individual mode requiring the candidate to submit their responses on an item-by-item basis.
The navigation mode is set to linear mode restricting the candidate to attempt each item in turn. Once the candidate moves on they are not permitted to return.
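The two mechanisms described above can be sketched as follows; the testPart and section identifiers are illustrative assumptions, while the file names are those given in the description:

```xml
<!-- In the test: a linear testPart with individual submission -->
<testPart identifier="part1" navigationMode="linear" submissionMode="individual">
  <assessmentSection identifier="set01" title="Item Set 1" visible="true">
    <assessmentItemRef identifier="q1" href="rtest01-set01.xml"/>
  </assessmentSection>
</testPart>

<!-- In each item's body: the shared leading material, pulled in via XInclude -->
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="rtest01-fragment.xml"/>
```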
Arbitrary Collections of Item Outcomes
This example illustrates the use of two assessmentSections (sectionA and sectionB) and one subsection (sectionB1). Both sectionA and sectionB are visible, meaning that they are identifiable by the candidate. Conversely, sectionB1 is not identifiable as a section.
The submission mode is set to simultaneous. The candidate's responses are all submitted together at the end of the testPart (in this case effectively meaning at the end of the assessmentTest).
The navigation mode is set to nonlinear mode allowing the candidate to navigate to any item in the test at any time.
The test uses weights to determine the contribution of the individual item score to the overall test score. In this example the weight of 0 for item160 means that its score isn't taken into account when calculating the overall test score. The weight of 2 for item034 means that the score for item034 is multiplied by 2 when calculating the overall test score.
For the assessmentItems where no weight is given, a weight of 1.0 is assumed.
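Weights are declared on the assessmentItemRef and then referenced by the expression that aggregates the scores. A sketch of the pattern (the weight identifier W and outcome name TOTAL are hypothetical):

```xml
<assessmentItemRef identifier="item160" href="item160.xml">
  <weight identifier="W" value="0.0"/> <!-- excluded from the aggregate score -->
</assessmentItemRef>
<assessmentItemRef identifier="item034" href="item034.xml">
  <weight identifier="W" value="2.0"/> <!-- counts double -->
</assessmentItemRef>

<!-- in outcomeProcessing: sum the weighted item scores -->
<setOutcomeValue identifier="TOTAL">
  <sum>
    <testVariables variableIdentifier="SCORE" weightIdentifier="W"/>
  </sum>
</setOutcomeValue>
```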
Categories of Item
This example illustrates the use of categories of assessmentItems in the assessmentTest.
The submission mode is set to simultaneous. The candidate's responses are all submitted together at the end of the testPart (in this case effectively meaning at the end of the assessmentTest).
The navigation mode is set to nonlinear mode allowing the candidate to navigate to any item in the test at any time.
The test uses the category attribute to assign the items to one or more categories. The outcomeProcessing part of the example shows how the category is used to sum the scores of a selection of the questions.
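The category mechanism can be sketched as follows; the item identifiers, category names and outcome name are hypothetical:

```xml
<assessmentItemRef identifier="q1" href="q1.xml" category="algebra"/>
<assessmentItemRef identifier="q2" href="q2.xml" category="algebra geometry"/>

<!-- in outcomeProcessing: sum SCORE over the algebra items only -->
<setOutcomeValue identifier="ALGEBRA_SCORE">
  <sum>
    <testVariables variableIdentifier="SCORE" includeCategory="algebra"/>
  </sum>
</setOutcomeValue>
```

Note that category takes a space-separated list, so an item can contribute to several aggregated scores at once.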
Arbitrary Weighting of Item Outcomes
Specifying the Number of Allowed Attempts
This example illustrates the use of itemSessionControl to set the number of allowed attempts.
The example contains two testParts: the maximum number of allowed attempts for the first testPart is set to unlimited (maxAttempts = 0) and the maximum number of allowed attempts for the second testPart is 1.
The submission mode for both testParts is set to individual mode requiring the candidate to submit their responses on an item-by-item basis.
The navigation mode for both testParts is set to linear mode restricting the candidate to attempt each item in turn. Once the candidate moves on they are not permitted to return.
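The structure described above can be sketched as follows (the testPart identifiers are illustrative assumptions):

```xml
<testPart identifier="part1" navigationMode="linear" submissionMode="individual">
  <itemSessionControl maxAttempts="0"/> <!-- 0 means unlimited attempts -->
  <!-- sections and item references -->
</testPart>
<testPart identifier="part2" navigationMode="linear" submissionMode="individual">
  <itemSessionControl maxAttempts="1"/> <!-- a single attempt only -->
  <!-- sections and item references -->
</testPart>
```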
Controlling Item Feedback in Relation to the Test
This example illustrates the use of itemSessionControl to set the item feedback in relation to the test.
The submission mode for the second testPart is set to simultaneous. The candidate's responses are all submitted together at the end of the testPart.
The navigation mode of the second testPart is set to nonlinear mode allowing the candidate to navigate to any item in the testPart at any time.
The showFeedback attribute of itemSessionControl is set to true, affecting the visibility of feedback after the end of the last attempt.
Allowing review and feedback in simultaneous mode means that the test is navigable after submission (in this case, in a nonlinear style).
The showSolution attribute of itemSessionControl is set to false, meaning the system may not provide the candidate with a way of entering the solution state.
Remember that the showFeedback attribute controls the assessmentItem feedback at the test level. It does not override the display of feedback as set inside the item.
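The relevant settings reduce to a single itemSessionControl element on the testPart, along the lines of:

```xml
<!-- feedback visible after the last attempt, review allowed,
     but no way to enter the solution state -->
<itemSessionControl showFeedback="true" allowReview="true" showSolution="false"/>
```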
Controlling the duration of an item attempt
This example illustrates controlling the duration of an item attempt (both maximum and minimum) in the context of a specific test.
The test shows the use of the timeLimits element to set the maxTime constraint for the complete test, a single assessmentSection and a single assessmentItem.
The example contains one assessmentItemRef (item034) which has a minTime of 3 minutes and a maxTime of 10 minutes. This means that candidates cannot progress to the next item in the test (item160) until they have spent 3 minutes interacting with it. Given that the candidate is limited to a maximum of 1 attempt at each item in the test, this effectively means that the candidate is prevented from submitting their responses until 3 minutes have passed. However, they must submit their responses before 10 minutes have passed. When the time limit is up the current responses would typically be submitted automatically.
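The constraint on item034 can be sketched as follows; note that timeLimits durations are expressed in seconds:

```xml
<assessmentItemRef identifier="item034" href="item034.xml">
  <!-- minimum 3 minutes, maximum 10 minutes -->
  <timeLimits minTime="180" maxTime="600"/>
</assessmentItemRef>
```

The same timeLimits element can be attached to the assessmentTest, a testPart or an assessmentSection to constrain the enclosing scope.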
It is up to the assessment constructor to make sure that the sum of all maxTime values in the assessment is less than or equal to the maxTime of the assessmentTest, and that the sum of all minTime values in the assessment is less than or equal to the maxTime of the assessmentTest.
Early termination of test based on accumulated item outcomes
This example shows how to provide support for early termination of a test based on accumulated item outcomes.
The outcomeProcessing for the test is invoked after each attempt and checks to see if the SCORE is greater than 3. If that is the case the exitTest rule terminates the test.
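The pattern can be sketched as follows (the SCORE threshold matches the description; the rest of the outcome processing is elided):

```xml
<outcomeProcessing>
  <!-- evaluated after each attempt in individual submission mode -->
  <outcomeCondition>
    <outcomeIf>
      <gt>
        <variable identifier="SCORE"/>
        <baseValue baseType="float">3</baseValue>
      </gt>
      <exitTest/>
    </outcomeIf>
  </outcomeCondition>
  <!-- remaining outcome rules -->
</outcomeProcessing>
```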
Golden (required) Items and Sections
In assessmentSection B, we select 2 children using the selection element, but assessmentSection B1 is required (because of the required="true" attribute), so we effectively select B1 and one of the other three items. B1 is an invisible section, and the three items it contains will be mixed in with the other selected item when shuffling, resulting in an assessmentSection containing four items.
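The structure described above can be sketched as follows (the item identifiers and file names are hypothetical):

```xml
<assessmentSection identifier="sectionB" title="Section B" visible="true">
  <selection select="2"/>
  <ordering shuffle="true"/>
  <!-- required: always among the selected children; invisible, so its
       three items are shuffled in with the other selected item -->
  <assessmentSection identifier="sectionB1" title="Section B1"
      visible="false" required="true">
    <assessmentItemRef identifier="b1a" href="b1a.xml"/>
    <assessmentItemRef identifier="b1b" href="b1b.xml"/>
    <assessmentItemRef identifier="b1c" href="b1c.xml"/>
  </assessmentSection>
  <assessmentItemRef identifier="b2" href="b2.xml"/>
  <assessmentItemRef identifier="b3" href="b3.xml"/>
  <assessmentItemRef identifier="b4" href="b4.xml"/>
</assessmentSection>
```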
Branching based on the response to an assessmentItem
This example shows the support for branching based on the response to an assessmentItem. The example uses the preCondition element and the branchRule element.
The preCondition element sets the conditions that need to be met for an assessmentItem or assessmentSection to be displayed. In nonlinear mode, pre-conditions are ignored.
The branchRule element contains a rule, evaluated during the test, for setting an alternative target as the next item or section. As with pre-conditions, branch rules are ignored in nonlinear mode. The second branchRule element contains a special target, EXIT_SECTION, which means exit this section of the test.
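The two mechanisms can be sketched as follows; the item identifiers and thresholds are hypothetical, and item variables are referenced from the test using the dotted notation:

```xml
<!-- only presented if the candidate scored on item1 -->
<assessmentItemRef identifier="item2" href="item2.xml">
  <preCondition>
    <gte>
      <variable identifier="item1.SCORE"/>
      <baseValue baseType="float">1</baseValue>
    </gte>
  </preCondition>
  <!-- after the attempt: if item2 was answered well, leave the section -->
  <branchRule target="EXIT_SECTION">
    <gte>
      <variable identifier="item2.SCORE"/>
      <baseValue baseType="float">1</baseValue>
    </gte>
  </branchRule>
</assessmentItemRef>
```

A branchRule target may also be the identifier of another item or section, or one of the special targets EXIT_SECTION, EXIT_TESTPART and EXIT_TEST.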
Items Arranged into Sections within Tests
This example shows the use of sections to group individual items.
Randomizing the Order of Items and Sections
This example shows the use of the ordering element to randomize the order of items and sections.
Basic Statistics as Outcomes
This example shows how basic statistics of a test are assigned to outcomes.
A number of built-in statistics (numberCorrect, numberIncorrect, numberPresented, numberSelected, numberResponded) are assigned to Outcome Variables.
In addition, the Outcome Variable "PERCENT_CORRECT" is calculated from two of those basic statistics.
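A sketch of how such a derived outcome is computed (the exact outcome names in the example file may differ):

```xml
<outcomeProcessing>
  <setOutcomeValue identifier="N_CORRECT">
    <numberCorrect/>
  </setOutcomeValue>
  <!-- PERCENT_CORRECT = 100 * numberCorrect / numberPresented -->
  <setOutcomeValue identifier="PERCENT_CORRECT">
    <product>
      <divide>
        <numberCorrect/>
        <numberPresented/>
      </divide>
      <baseValue baseType="float">100</baseValue>
    </product>
  </setOutcomeValue>
</outcomeProcessing>
```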
Mapping item outcomes prior to aggregation
This example shows how item outcomes are mapped prior to aggregation.
The variableMapping element maps the item's NOTA outcome (item034.NOTA) to the variable SCORE.
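The mapping is declared on the assessmentItemRef itself, along these lines (the href is an illustrative assumption):

```xml
<assessmentItemRef identifier="item034" href="item034.xml">
  <!-- the item's NOTA outcome is read as SCORE at the test level,
       so generic aggregation over SCORE picks it up -->
  <variableMapping sourceIdentifier="NOTA" targetIdentifier="SCORE"/>
</assessmentItemRef>
```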
5. Usage Data (Item Statistics)
Example Item Statistics
This example demonstrates the construction of a usage-data file. When distributing usage data within a content package the usage-data should be stored in a separate file within the package and referred to in the manifest file by an appropriate cp:resource element. Note that references to the assessment items and other objects within the usage-data file itself are not considered to be dependencies of the resource. The resource type for usage-data files is imsqti_usagedata_xmlv2p1.
6. Packaged Items, Tests and Metadata
Simple Packaging Example
This example demonstrates how a single item is packaged using the techniques described in the Integration Guide. The manifest file demonstrates the use of a resource element to associate metadata (both LOM and QTI) with an item and the file element to reference the assessmentItem XML file and the associated image file.
Shared Image Example
This example demonstrates how multiple items are packaged. Note that where two items share a media object (such as an image) a dependency can be used to enable the object to be represented by its own resource element within the manifest.
CC QTI package
This complete, zipped-up package contains a linear test and a collection of the most widely used item types: multiple choice (single and multiple), fill-in-the-blank and essay submission. The package is a transcoding of a Common Cartridge QTI 1.2 example, and therefore demonstrates IMS_CC 1.1 metadata usage in the manifest file, including the use of curriculum standards.
The original QTI 1.2 profile for IMS_CC that provided the basis for this CC QTI package was determined by defining an intersection of the assessment capabilities of the most widely used LMSs at the time. This CC QTI package was successfully imported by a number of QTI 2.1 implementations in two interoperability tests. As such, it probably represents the most widely supported minimal subset of the QTI 2.1 specification.
Note that the choice items in the CC QTI package (QUE_102010, QUE_102012, QUE_102013, QUE_104045, QUE_104047, QUE_104048, QUE_104049 and QUE_104051) provide a choice of two types of feedback: inline and modal.
The CC2_match.xml response template provides inline feedback and is the preferred response template. It sets the FEEDBACK variable from the RESPONSE variable, so that the feedbackInline element with an identifier that matches the FEEDBACK value can be shown to the candidate. When the CC2_match.xml template is used, the FEEDBACKBASIC variable that is declared in all choice items is not used and won't be set, which means the 'correct' and 'incorrect' modal feedback elements won't ever be shown to the learner.
The CC2_match_basic.xml template is designed for those systems that do not support inline feedback. Instead of feedbackInline elements, the match basic template triggers the 'correct' and 'incorrect' modalFeedback elements via the FEEDBACKBASIC variable. It does this by comparing the candidate's RESPONSE value with the correct RESPONSE value from the item. If they match, the SCORE variable is set to the MAXSCORE value from the item and the FEEDBACKBASIC variable is set to 'correct'. If the candidate's RESPONSE value and the correct RESPONSE value from the item do not match, the match basic template sets the FEEDBACKBASIC variable to 'incorrect'. When the CC2_match_basic.xml template is used, the FEEDBACK variable will never be set to the 'true' or 'false' value, which means that the inline feedback won't ever be shown to the learner.
Only systems that cannot support inline feedback should use the match basic template in place of the CC2_match.xml template that is referenced in the items.
Package with Response Processing Templates
The response processing templates feature of QTI allows common sets of response processing rules to be documented in separate XML documents and simply referred to by the items that make use of them. The mechanism for identifying the template to use is the template attribute on the responseProcessing element. This attribute is a URI, but it is not required to be a URL that resolves directly to the appropriate XML document. To help systems that support general response processing find the rule definitions required to support new templates an additional templateLocation attribute is provided which may be used to provide a URL that resolves to the template's XML document. If this URL is given relative to the location of the item then the template should be included in the same content package and listed as a dependency for each of the items that refer to it.
This example package demonstrates the use of a relative URL to refer to response processing templates listed as separate resources within the package as described above. Note that the technique used is similar to that for locating XML schemas from the URIs used to refer to their namespaces, however, XML schemas included in content packages to assist with validation should not be described as separate resources (or file dependencies) in the manifest file.
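A responseProcessing element using this mechanism looks like the following sketch. The template URI shown is one of the standard QTI 2.1 response processing templates; the relative templateLocation is an illustrative assumption and implies that the template file travels inside the package and is listed as a dependency of the item:

```xml
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"
    templateLocation="rptemplates/match_correct.xml"/>
```

A system with built-in knowledge of the standard templates can act on the template URI alone; other systems can resolve templateLocation to fetch the rule definitions.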
Package with Externally Defined Response Processing Templates
This example is the same as the one above (Package with Response Processing Templates) except that the response processing templates are not included. The templateLocation attribute is used with absolute URLs for the templates.
Package with Test and Items
This example demonstrates how to package an assessmentTest together with the assessmentItems referenced by the test. Both the assessmentTest and assessmentItems are represented by resource elements within the manifest. A dependency is used to represent the relationship between the assessmentTest and the individual assessmentItems.
BBQs test package
A package with a wide but representative set of items from UK Higher Education. The set exemplifies the union of the most commonly used item types in that sector, regardless of tool or format. Some items make use of math extensions.
English exercises I
A representative set of commonly used question types, geared for language learning. Partially in German.
English exercises II
A more advanced language learning test that demonstrates the use of rubricBlock and extendedTextInteraction for reading comprehension.
The QTI schema file imports two externally defined auxiliary schemas, the built-in XML namespace and MathML. The schema imports these from their published locations on the web using absolute URLs. As a result, some XML validation tools may not be able to validate QTI documents when working offline.
There is also some confusion as to whether or not XML schemas that refer to components of the built-in XML namespace (such as the xml:lang attribute used by QTI) should (or even may) provide an associated namespace prefix declaration. This point was unclear in the first edition of the XML specification and not cleared up until the errata to that edition [XML_ERRATA] were published. The errata have themselves now been superseded by the second edition [XML], which makes it clear that the declaration may be included, provided it is bound to the reserved prefix xml, but that it is not required. In keeping with the latest 1EdTech Content Packaging specification, the QTI schema includes the declaration in the root of the schema. It is clear that some tools will still not validate documents against schemas that contain this prefix, and a local copy of the QTI schema with the following attribute removed from the schema element may need to be used instead:
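The attribute in question is the declaration binding the reserved xml prefix:

```xml
xmlns:xml="http://www.w3.org/XML/1998/namespace"
```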
The namespace identifier of the QTI schema has changed for version 2.1 of this specification to http://www.imsglobal.org/xsd/imsqti_v2p1. Use of this namespace is required when using any of the new elements defined by this version. Documents with a namespace of http://www.imsglobal.org/xsd/imsqti_v2p0 must still be supported. For compatibility, systems may wish to use the 2p0 namespace identifier when generating content that conforms to the narrower model defined by version 2.0 of this specification.
About This Document
|Title||1EdTech Question & Test Interoperability Implementation Guide|
|Editors||Wilbert Kraan (JISC/CETIS), Steve Lay (Cambridge Assessment), Pierre Gorissen (SURF)|
|Version Date||31 August 2012|
|Status||Final Release Specification|
|Summary||This document describes the QTI Implementation Guide specification.|
|Revision Information||31 August 2012|
|Purpose||This document has been approved by the 1EdTech Technical Advisory Board and is made available for adoption and conformance.|
|To register any comments or questions about this specification please visit: http://www.imsglobal.org/community/forum/categories.cfm?catid=52|
List of Contributors
The following individuals contributed to the development of this document:
|Odette Auzende||Université Pierre et Marie Curie (France)|
|Dick Bacon||JISC/CETIS (UK)|
|Niall Barr||University of Glasgow/1EdTech (UK)|
|Lance Blackstone||Pearson (USA)|
|Jeanne Ferrante||ETS (USA)|
|Helene Giroire||Université Pierre et Marie Curie (France)|
|Pierre Gorissen||SURF (The Netherlands)|
|Regina Hoag||ETS (USA)|
|Wilbert Kraan||JISC/CETIS (UK)|
|Gopal Krishnan||Pearson (USA)|
|Young Jin Kweon||KERIS (South Korea)|
|Steve Lay||Cambridge Assessment (UK)|
|Francoise LeCalvez||Université Pierre et Marie Curie (France)|
|David McKain||JISC/CETIS (UK)|
|Mark McKell||1EdTech (USA)|
|Sue Milne||JISC/CETIS (UK)|
|Jens Schwendel||BPS Bildungsportal Sachsen GmbH (Germany)|
|Graham Smith||JISC/CETIS (UK)|
|Colin Smythe||1EdTech (UK)|
|Yvonne Winkelmann||BPS Bildungsportal Sachsen GmbH (Germany)|
|Rowin Young||JISC/CETIS (UK)|
|Version No.||Release Date||Comments|
|Base Document 2.1||14 October 2005||The first version of the QTI v2.1 specification.|
|Public Draft 2.1||9 January 2006||The Public Draft v2.1 of the QTI specification.|
|Public Draft 2.1 (revision 2)||8 June 2006||The Public Draft v2.1 (revision 2) of the QTI specification.|
|Final Release v2.1||31 August 2012||The Final Release v2.1 of the QTI specification. Includes updates, error corrections, and additional details.|
1EdTech Consortium, Inc. ("1EdTech") is publishing the information contained in this 1EdTech Question and Test Interoperability Implementation Guide ("Specification") for purposes of scientific, experimental, and scholarly collaboration only.
1EdTech makes no warranty or representation regarding the accuracy or completeness of the Specification.
This material is provided on an "As Is" and "As Available" basis.
The Specification is at all times subject to change and revision without notice.
It is your sole responsibility to evaluate the usefulness, accuracy, and completeness of the Specification as it relates to you.
1EdTech would appreciate receiving your comments and suggestions.
Please contact 1EdTech through our website at http://www.imsglobal.org
Please refer to Document Name: 1EdTech Question and Test Interoperability Implementation Guide Revision: 31 August 2012