Date Issued: 1st September, 2015
IPR and Distribution Notices
Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the specification set forth in this document, and to provide supporting documentation.
IMS takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on IMS's procedures with respect to rights in IMS specifications can be found at the IMS Intellectual Property Rights web page: http://www.imsglobal.org/ipr/imsipr_policyFinal.pdf.
Copyright © 2015 IMS Global Learning Consortium. All Rights Reserved.
Use of this specification to develop products or services is governed by the license with IMS found on the IMS website: http://www.imsglobal.org/speclicense.html.
Permission is granted to all parties to use excerpts from this document as needed in producing requests for proposals.
The limited permissions granted above are perpetual and will not be revoked by IMS or its successors or assigns.
THIS SPECIFICATION IS BEING OFFERED WITHOUT ANY WARRANTY WHATSOEVER, AND IN PARTICULAR, ANY WARRANTY OF NONINFRINGEMENT IS EXPRESSLY DISCLAIMED. ANY USE OF THIS SPECIFICATION SHALL BE MADE ENTIRELY AT THE IMPLEMENTER'S OWN RISK, AND NEITHER THE CONSORTIUM, NOR ANY OF ITS MEMBERS OR SUBMITTERS, SHALL HAVE ANY LIABILITY WHATSOEVER TO ANY IMPLEMENTER OR THIRD PARTY FOR ANY DAMAGES OF ANY NATURE WHATSOEVER, DIRECTLY OR INDIRECTLY, ARISING FROM THE USE OF THIS SPECIFICATION.
Public contributions, comments and questions can be posted in the IMS public forums.
The IMS Logo is a trademark of the IMS Global Learning Consortium, Inc. in the United States and/or other countries.
3.1. How Big is an Item
3.2. Simple Items
3.3. Composite Items
3.4. Response Processing
3.4.1. Custom Response Processing
3.4.2. External Scoring
3.6. Adaptive Items
3.7. Item Templates
3.8. Item Body Content
3.8.1. Sharing Text between Different Items
3.8.3. Alternative Media
3.8.4. Alternative Renderings for Interactions
3.8.5. Using MathML
3.8.6. Number Formatting
3.8.7. Markup languages, HTML 'object' and HTML 5
3.8.8. HTML5 Custom Data-* Attribute
3.8.9. HTML5 Figure and FigCaption
3.8.10. HTML 5 Audio and Video Elements
3.8.11. Internationalization Standards
4. Tests (Assessments)
5. Usage Data (Item Statistics)
6. Packaged Items, Tests and Metadata
7. Accessibility and Assessment
7.1. Supporting Assistive Technology through Web Accessibility
7.1.1. Allowable HTML5 Elements in APIP Content
7.1.2. Use of WAI-ARIA 1.0 in QTI
7.1.3. Aiding Pronunciation of Written Text for Text-To-Speech Software
About This Document
List of Contributors
This document contains examples of Question and Test Interoperability (QTI) Version 2.2 in action. Some of the examples are illustrated by screen shots. All screen shots are taken from a single delivery engine [SMITH] developed during the public draft review period of the v2.1 specification. They are designed to illustrate how a system might implement the specification and are not designed to be prescriptive. Other types of rendering are equally valid.
Each section of this document introduces a new aspect or feature of the specification, starting with the simplest constructions, and continuing to more intricate examples. For those who want to start with a very simple, but complete and usable test package, the CC QTI package is the recommended point of departure.
Accessible Portable Item Protocol
Accessible Rich Internet Applications
Mathematical Markup Language
Chemical Markup Language
HTML 4.01 Specification
HyperText Markup Language version 5
IMS Common Cartridge
IMS Online Validator
Pronunciation Lexicon Specification
Development of an implementation of QTI Version 2.0
Dr Graham Smith, with support from CETIS and UCLES
Speech Synthesis Markup Language
Scalable Vector Graphics
Maxima, a Computer Algebra System
Ruby Markup and Styling
Ruby Annotation - HTML5
Accessible Rich Internet Applications
Web Content Accessibility Guidelines
XHTML 1.1: The Extensible HyperText Markup Language
Extensible Markup Language (XML), Version 1.0 (second edition)
XML 1.0 Specification Errata
The main purpose of the QTI specification is to define an information model and associated binding that can be used to represent and exchange assessment items. For the purposes of QTI, an item is a set of interactions (possibly empty) collected together with any supporting material and an optional set of rules for converting the candidate's response(s) into assessment outcomes.
The above definition covers a wide array of possibilities. At one extreme, a simple one-line question with a response box for entering an answer is clearly an item; at the other, an entire test comprising instructions, stimulus material and a large number of associated questions also satisfies the definition. In the first case, QTI is an appropriate specification to use for representing the information; in the second case it isn't.
To help determine whether or not a piece of assessment content that comprises multiple interactions should be represented as a single assessmentItem (known as a composite item in QTI), the strength of the relationship between the interactions should be examined. If they can stand alone then they may best be implemented as separate items, perhaps sharing a piece of stimulus material like a picture or a passage of text included as an object. If several interactions are closely related then they may belong in a composite item, but always consider how easy it is for the candidate to keep track of the state of the item when it contains multiple related interactions. If the question requires the user to scroll a window on their computer screen just to see all the interactions then the item may be better re-written as several smaller related items. Consider also the difficulty faced by a user interacting with the item through a screen reader: an item with many points of interaction may be overwhelming in such an interface.
Simple items are items that contain just one point of interaction, for example a simple multi-choice or multi-response question. This section describes a set of examples illustrating simple items, one for each of the interaction types supported by the specification.
Figure 3.1 Unattended Luggage (Illustration)
This example illustrates the choiceInteraction being used to obtain a single response from the candidate.
Notice that the candidate's response is declared at the top of the item to be a single identifier and that the values this identifier can take are the values of the corresponding identifier attributes on the individual simpleChoices. The correct answer is included in the declaration of the response. In simple examples like this one there is just one response variable and one interaction but notice that the interaction must still be bound to the response declaration using the responseIdentifier attribute of choiceInteraction.
The item is scored using one of the standard response processing templates, Match Correct.
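In outline, such an item might be encoded as follows. This is a minimal sketch, not the actual example file: the identifiers, prompt and choice text are invented for illustration.

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p2"
    identifier="choice" title="Unattended Luggage"
    adaptive="false" timeDependent="false">
  <!-- The response is a single identifier; the correct answer is declared with it -->
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse>
      <value>ChoiceA</value>
    </correctResponse>
  </responseDeclaration>
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
  <itemBody>
    <!-- responseIdentifier binds the interaction to the declaration above -->
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <prompt>What should you do if you find unattended luggage?</prompt>
      <simpleChoice identifier="ChoiceA">Report it to the security staff</simpleChoice>
      <simpleChoice identifier="ChoiceB">Ignore it</simpleChoice>
      <simpleChoice identifier="ChoiceC">Open it to look for the owner's name</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <!-- Match Correct: SCORE is 1 if RESPONSE equals the declared correct response, 0 otherwise -->
  <responseProcessing
      template="http://www.imsglobal.org/question/qti_v2p2/rptemplates/match_correct"/>
</assessmentItem>
```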
Unattended Luggage (with fixed choice)
This example is a variation on the previous example and illustrates the use of the fixed attribute to fix the location of one of the options in the item.
Composition of Water
Figure 3.2 Composition of Water (Illustration)
This example illustrates the choiceInteraction being used to obtain multiple responses from the candidate.
Notice that the candidate's response is declared to have multiple cardinality and the correct value is therefore composed of more than one value. This example could have been scored in the same way as the previous one, with 1 mark given for correctly identifying the two correct elements (and only the two correct elements) and 0 marks otherwise. Instead, a method that gives partial credit has been adopted through the use of the standard response processing template Map Response. This template uses the RESPONSE variable's mapping to sum the values assigned to the individual choices. As a result, identifying the correct two choices (only) scores 2 points. Notice that selecting a third (incorrect) choice reduces the score by 2 (with the exception of Chlorine), resulting in 0, as unmapped keys are mapped to the defaultValue. To prevent an overall negative score, bounds are specified too. The penalty for selecting Chlorine is less, perhaps to reflect its role as a common water additive.
Also note that SCORE needs to be set to float because of the use of the map_response template which returns a float.
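The scoring scheme described above can be sketched as a response declaration like the following; the choice identifiers and mapped values here are illustrative, not copied from the example file.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="identifier">
  <correctResponse>
    <value>H</value>
    <value>O</value>
  </correctResponse>
  <!-- Each correct choice is worth 1; choices with no mapEntry fall back to
       defaultValue (-2); Chlorine carries a smaller penalty; the bounds keep
       the summed result within [0, 2] -->
  <mapping lowerBound="0" upperBound="2" defaultValue="-2">
    <mapEntry mapKey="H" mappedValue="1"/>
    <mapEntry mapKey="O" mappedValue="1"/>
    <mapEntry mapKey="Cl" mappedValue="-1"/>
  </mapping>
</responseDeclaration>
```

The Map Response template then sums the mapped values of the selected choices into SCORE, which is why SCORE must be declared with a float base-type.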
This example illustrates the choiceInteraction being used to obtain multiple responses from the candidate with two correct sets of responses.
Grand Prix of Bahrain
Figure 3.3 Grand Prix of Bahrain (Illustration)
This example illustrates the orderInteraction. The candidate's response is declared to have ordered cardinality and the correct value is therefore composed of an ordered list of values. The shuffle attribute tells the delivery engine to shuffle the order of the choices before displaying them to the candidate. Note that the fixed attribute is used to ensure that the initially presented order is never the correct answer. The question uses the standard response processing template Match Correct to score 1 for a completely correct answer and 0 otherwise.
Figure 3.4 Shakespearian Rivals (Illustration)
This example illustrates the associateInteraction. The candidate's response is declared with the pair base-type because the task involves pairing up the choices. The maxAssociations attribute on associateInteraction controls the maximum number of pairings the candidate is allowed to make overall. Individually, each choice has a matchMax attribute that controls how many pairings it can be part of. The number of associations that can be made in an associateInteraction is therefore constrained by two methods - in this case they have the same overall effect but this needn't be the case.
The associations created by the candidate are not directed; the pair base-type is an undirected pair, so when comparing responses "A P" would be treated as a match for "P A". The distinction has no meaning to the interaction even though the physical process used by the candidate might be directional, for example, drawing a line between the choices.
Characters and Plays
Figure 3.5 Characters and Plays (Illustration)
This example illustrates the matchInteraction. This time the candidate's response is declared with the directedPair base-type because the task involves pairing up choices from a source set into a target set. In this case, characters from plays are matched with the names of the plays from which they are drawn. Notice that matchMax on the characters is one because each character can be in only one play (in fact, Shakespeare often reused character names but we digress) but it is four on the plays because each play could contain all the characters. For example, Demetrius and Lysander were both in A Midsummer-Night's Dream, so in the correct response that play has two associations. In the mapping used for response processing these two associations have been awarded only half a mark each.
Richard III (Take 1)
Figure 3.6 Richard III (Illustration 1)
This example illustrates the gapMatchInteraction. This interaction is similar to matchInteraction except that the choices in the second set are gaps in a given passage of text and the task involves selecting choices from the first set and using them to fill the gaps. The same attributes are involved in controlling which, and how many, pairings are allowed though there is no matchMax for the gaps because they can only ever have one associated choice. The scoring is again done with a mapping.
Additional formatting may be applied within the gapText element, allowing for greater variation. Allowed formats include:
'br', 'img', 'include', 'math', 'object', 'printedVariable', 'a', 'abbr', 'acronym', 'b',
'big', 'cite', 'code', 'dfn', 'em', 'feedbackInline', 'i', 'kbd', 'q', 'samp', 'small', 'span',
'strong', 'sub', 'sup', 'tt', 'var', 'templateInline'.
Here's an example which uses the img tag within gapText:
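A fragment along these lines is sketched below; the file names, identifiers and surrounding text are invented for illustration, not taken from the example file.

```xml
<gapMatchInteraction responseIdentifier="RESPONSE" shuffle="false">
  <!-- gapText choices whose visible content is an image rather than plain text -->
  <gapText identifier="W" matchMax="1">
    <img src="images/winter.png" alt="winter"/>
  </gapText>
  <gapText identifier="S" matchMax="1">
    <img src="images/summer.png" alt="summer"/>
  </gapText>
  <blockquote>
    <p>Now is the <gap identifier="G1"/> of our discontent<br/>
    Made glorious <gap identifier="G2"/> by this sun of York;</p>
  </blockquote>
</gapMatchInteraction>
```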
Richard III (Take 2)
Figure 3.7 Richard III (Illustration 2)
The Richard III (Take 1) example above demonstrated the filling of gaps from a shared stock of choices. In cases where you have only one gap, or where you have multiple gaps that are to be filled independently, each from its own list of choices, you use an inlineChoiceInteraction.
A 'label' element can be provided to display default text in the place of the inlineChoiceInteraction, which allows some flexibility in display, especially if more than one interaction is used in the same assessmentItem. In another take on Richard III, 'Select Season' is the label of the first interaction and 'Select Dukedom' is the label for the second interaction. The choices could be selected inline, from a dropdown list:
or the choices could be split off into a separate table, linked to the appropriate blank (for the candidate) by the matching label.
Additional formatting may be applied within the inlineChoice element allowing for greater variation. Allowed formats include:
'br', 'img', 'include', 'math', 'object', 'printedVariable', 'a', 'abbr', 'acronym', 'b',
'big', 'cite', 'code', 'dfn', 'em', 'feedbackInline', 'i', 'kbd', 'q', 'samp', 'small', 'span',
'strong', 'sub', 'sup', 'tt', 'var', 'templateInline'.
Here's an example which uses math within the inlineChoice element:
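A sketch of such a fragment follows, with MathML expressions as the choices; the surrounding question, identifiers and formulas are invented for illustration.

```xml
<p>The area of a circle of radius <i>r</i> is
  <inlineChoiceInteraction responseIdentifier="RESPONSE" shuffle="true">
    <inlineChoice identifier="areaFormula">
      <!-- each choice contains a MathML expression instead of plain text -->
      <math xmlns="http://www.w3.org/1998/Math/MathML">
        <mrow><mi>&#x3C0;</mi><msup><mi>r</mi><mn>2</mn></msup></mrow>
      </math>
    </inlineChoice>
    <inlineChoice identifier="circumferenceFormula">
      <math xmlns="http://www.w3.org/1998/Math/MathML">
        <mrow><mn>2</mn><mi>&#x3C0;</mi><mi>r</mi></mrow>
      </math>
    </inlineChoice>
  </inlineChoiceInteraction>.
</p>
```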
Richard III (Take 3)
Figure 3.8 Richard III (Illustration 3)
The third, and final, method of filling gaps is to use a textEntryInteraction, which requires the candidate to construct their own response, typically by typing it in. Notice that a guide to the amount of text to be entered is given in the expectedLength attribute, though candidates should be allowed to enter more if desired.
The scoring for this item could have simply matched the correct response but instead uses a mapping to enable partial credit for york (spelled without a capital letter). When mapping strings, the mapping takes place case sensitively by default. This example also illustrates the use of a mapping when the response has only single cardinality.
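A declaration implementing this partial-credit scheme might look like the following; the mapped values are illustrative.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="string">
  <correctResponse>
    <value>York</value>
  </correctResponse>
  <!-- Full credit for the exact spelling, partial credit for the lower-case form -->
  <mapping defaultValue="0">
    <mapEntry mapKey="York" mappedValue="1"/>
    <mapEntry mapKey="york" mappedValue="0.5"/>
  </mapping>
</responseDeclaration>
```

The interaction itself is then bound inline in the running text, for example as `<textEntryInteraction responseIdentifier="RESPONSE" expectedLength="10"/>`.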
Writing a Postcard
Figure 3.9 Writing a Postcard (Illustration)
If an extended response is required from the candidate then the extendedTextInteraction is appropriate. Notice that this example does not contain a responseProcessing section because the scoring of extended text responses is beyond the scope of this specification.
Writing a Postcard with Rubric information
A rubricBlock can be used to add instructions about the way the item should be scored by a human scorer. The view attribute is used to indicate that the information should only be made visible to users in certain roles.
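The structure might be sketched as follows; the instruction text and identifiers are invented for illustration.

```xml
<itemBody>
  <!-- Visible only to users with the scorer view, never to candidates -->
  <rubricBlock view="scorer">
    <p>Award one mark for each relevant point the candidate makes, up to three marks.</p>
  </rubricBlock>
  <extendedTextInteraction responseIdentifier="RESPONSE" expectedLines="5">
    <prompt>Write a postcard to your English pen-friend.</prompt>
  </extendedTextInteraction>
</itemBody>
```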
Figure 3.10 Olympic Games (Illustration)
This example illustrates the hottextInteraction. This interaction presents a passage of text with some hot words/phrases highlighted and selectable by the candidate. It differs from the choiceInteraction in that the choices have to be presented in the context of the surrounding text.
Figure 3.11 UK Airports in Unanswered State (Illustration)
Figure 3.12 UK Airports in Answered State (Illustration)
This example illustrates the hotspotInteraction. This is very similar to the hottextInteraction except that instead of having to select hot areas embedded in a passage of text the candidate has to select hotspots of a graphical image.
Note that the response is of type identifier and that each individual hotspotChoice associates an identifier with an area of the image.
Where is Edinburgh?
Figure 3.13 Where is Edinburgh? (Illustration)
This example illustrates the selectPointInteraction. The RESPONSE is declared to be a single point that records the coordinates of the point on the map marked by the candidate. The correctResponse is given in the declaration too; however, for this type of question it is clearly unreasonable to expect the candidate to click exactly on the correct point, and there would be too many values to build a workable mapping. To get around this problem an areaMapping is used instead; this allows one or more areas of the coordinate space to be mapped to a numeric value (for scoring). In this example, just one area is defined: a circle with radius 8 pixels centred on the correct (optimal) response. The standard response processing template Map Response Point is used to set the score using the areaMapping.
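The declaration can be sketched as follows; the coordinates are invented, but the circle radius of 8 follows the prose.

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="point">
  <correctResponse>
    <value>102 113</value>
  </correctResponse>
  <!-- Any click inside a circle of radius 8 centred on the optimal point scores 1;
       everything else falls back to defaultValue -->
  <areaMapping defaultValue="0">
    <areaMapEntry shape="circle" coords="102,113,8" mappedValue="1"/>
  </areaMapping>
</responseDeclaration>
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p2/rptemplates/map_response_point"/>
```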
Figure 3.14 Flying Home (Illustration)
This example illustrates the graphicOrderInteraction. The task is similar to Grand Prix of Bahrain except that the choices are presented as hotspots on a graphic image.
Figure 3.15 Low-cost Flying Unanswered State (Illustration)
Figure 3.16 Low-cost Flying Answered State (Illustration)
This example illustrates the graphicAssociateInteraction. The task is similar to Shakespearian Rivals except that the choices are presented as hotspots on a graphic image. Notice that matchMax is set to three for each of the hotspots allowing the candidate to associate each hotspot up to three times (in other words, with all the other hotspots if desired).
Figure 3.17 Airport Tags (Illustration)
This example illustrates the graphicGapMatchInteraction. The task is similar to Richard III (Take 1) except that the first set of choices can be either text or images and the second set are gaps within a larger background image. In a graphical system that supports dragging, this would typically be implemented using drag and drop. The example above uses images for the choices. However, it is possible to use text as well, as in the alternate version of Airport Tags.
Figure 3.18 Airport Locations (Illustration)
This example illustrates the positionObjectInteraction. It has a lot in common with Where is Edinburgh? except that the 'point' is selected by positioning a given object on the image (the stage). Notice that the stage is specified outside of the interaction. This allows a single stage to be shared amongst multiple position object interactions.
Figure 3.19 Jedi Knights (Illustration)
This example illustrates the sliderInteraction. It is used in this example to obtain a percentage estimate. The interaction is bound to an integer response which can then be scored using the standard Map Response response processor.
La casa di Giovanni
This example illustrates the drawingInteraction. Notice that the RESPONSE is declared to be of type file. The drawing takes place on a required pre-supplied canvas, in the form of an existing image, which is also used to determine the appropriate size, resolution and image type for the candidate's response.
The Chocolate Factory (Take 1)
This example illustrates the uploadInteraction. The RESPONSE is again declared to be of type file. The candidate is provided with a mechanism to upload their own spreadsheet in response to the task; response processing for file-based questions is out of scope of this specification.
Composite items are items that contain more than one point of interaction. Composite items may contain multiple instances of the same type of interaction or have a mixture of interaction types.
This text comprehension example combines 'choiceInteraction', and 'gapMatchInteraction' in a single item, sharing one text. It also makes use of inline feedback.
The Chocolate Factory (Take 2)
This example extends 'The Chocolate Factory (Take 1)' with an additional text response field that can be marked objectively. As the responseDeclaration bound to the textEntryInteraction has an integer baseType, the text the candidate gives as input is converted to an integer by the Delivery Engine.
So far, all the examples have been scored using one of the standard response processing templates, or have not been suitable for objective scoring. For simple scenarios, use of the response processing templates is encouraged as they improve interoperability between systems that only cater for a limited number of fixed scoring methods.
Many items, particularly those involving feedback, will require the use of the more general response processing model defined by this specification. The standard templates are themselves defined using this more general response processing language.
Grand Prix of Bahrain (Partial Scoring)
This example extends Grand Prix of Bahrain to include partial scoring. With three drivers to place on the podium there are 6 possible responses that the candidate can make, only one of which is correct. Previously, the correct answer scored 1 and all other responses scored 0. Now, the correct answer scores 2. Correctly placing Michael Schumacher first scores 1 if the other two drivers have been muddled up. Placing Barrichello or Button first scores 0 (all other combinations).
Response processing consists of a sequence of rules that are carried out, in order, by the response processor. A responseCondition rule is a special type of rule which contains sub-sequences of rules divided into responseIf, responseElseIf and responseElse sections. The response processor evaluates the expressions in the responseIf and responseElseIf elements to determine which sub-sequence to follow. In this example, the responseIf section is followed only if the variable with identifier RESPONSE matches the correct response declared for it. The responseElseIf section is followed if RESPONSE matches the response explicitly given (which places the correct driver 1st but confuses the other two). Finally, the responseElse section is followed if neither of the previous two apply. The responseElse section has no corresponding expression of course. The setOutcomeValue element is just a responseRule that tells the processor to set the value of the specified outcome variable to the value of the expression it contains.
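The structure described above can be sketched like this. The driver identifiers and the assumed correct order are illustrative; the scoring values follow the prose.

```xml
<responseProcessing>
  <responseCondition>
    <responseIf>
      <!-- completely correct podium order: 2 points -->
      <match>
        <variable identifier="RESPONSE"/>
        <correct identifier="RESPONSE"/>
      </match>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">2</baseValue>
      </setOutcomeValue>
    </responseIf>
    <responseElseIf>
      <!-- correct driver first but the other two swapped: 1 point -->
      <match>
        <variable identifier="RESPONSE"/>
        <ordered>
          <baseValue baseType="identifier">Schumacher</baseValue>
          <baseValue baseType="identifier">Button</baseValue>
          <baseValue baseType="identifier">Barrichello</baseValue>
        </ordered>
      </match>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">1</baseValue>
      </setOutcomeValue>
    </responseElseIf>
    <responseElse>
      <!-- anything else: 0 points -->
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">0</baseValue>
      </setOutcomeValue>
    </responseElse>
  </responseCondition>
</responseProcessing>
```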
The variable, correct and baseValue elements are examples of simple expressions; in other words, expressions that are indivisible. In contrast, the match and ordered elements are examples of operators. Operators are expressions that combine other expressions to form new values. For example, match is used to form a boolean depending on whether or not two expressions have matching values.
The customOperator allows the inclusion of response processing that is not defined by QTI, for example via external APIs. In general, such APIs are likely to be particular to a specific software library or programming language. For that reason, it is difficult to predict what form such custom operators will take, and, by extension, how to generalise functions or syntax between different custom operators.
By way of illustration, the following fragment illustrates the use of the Maxima [Maxima] engine as a response processing library via the customOperator element.
In this case, a customOperator is used as a very slim container for what is effectively a complete script in Maxima's language. A QTI processor designed to work with this customOperator could pass the script verbatim to Maxima, and use its response to set the 'oDummy' outcome value.
Apply the sine rule
This is a simpler example that makes use of the same extension mechanism (inside the package in this location: id-bd44a757f562/SineRule-001.xml).
In some cases, responseProcessing is undertaken by external systems or human scorers. This is typically the case for items asking candidates to write an essay. However, it might be important for external systems or human scorers to know which outcome value has to be set to derive an appropriate score.
Write an essay
This example describes an item with a single extendedTextInteraction asking the candidate to write an essay. As the item does not contain responseProcessing, the SCORE outcomeDeclaration has its externalScored attribute set to human. This makes QTI-compliant systems aware that the final value of SCORE has to be set by a human scorer after the Item Session has closed.
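The relevant declaration can be sketched in a single line:

```xml
<!-- No responseProcessing is present in the item; a human scorer supplies
     the value of SCORE after the item session has closed -->
<outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"
    externalScored="human"/>
```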
Feedback consists of material presented to the candidate conditionally based on the result of responseProcessing. In other words, feedback is controlled by the values of outcome variables. There are two types of feedback material, modal and inline. Modal feedback is shown to the candidate after response processing has taken place and before any subsequent attempt or review of the item. Inline feedback is embedded into the itemBody and is only shown during subsequent attempts or review.
In this example, a straightforward multi-choice question declares an additional outcome variable called FEEDBACK which is used to control the visibility of just modalFeedback.
In this example, the feedback appears within the question, right beside the text of the selected option. The content of feedbackInline is restricted to material which can be displayed "inline", i.e. without moving to a new block or paragraph, so it behaves like the HTML "span" element.
Adaptive items are a feature that allows an item to be scored adaptively over a sequence of attempts. This allows the candidate to alter their answer following feedback or to be posed additional questions based on their current answer. Response processing works differently for adaptive items. Normally (for non-adaptive items) each attempt is independent and the outcome variables are set to their default values each time responseProcessing is carried out. For adaptive items, the outcome variables retain their values across multiple attempts and are only updated by subsequent response processing. This difference is indicated by the value of the adaptive attribute of the assessmentItem. Adaptive items must of course provide feedback to the candidate in order to allow them to adjust their response(s).
Using feedbackBlock to show a solution
In this example, the feedback is used to contain a solution which is displayed when the user clicks the "Show Solution" button.
A randomized version of this question is also available at examples/items/Example03-feedbackBlock-solution-random.xml. The randomization does not affect the display of the solution in this example.
Using templateBlock and templateInline inside feedbackBlock to adjust content
The feedbackBlock element can contain subsidiary feedback elements, "template" elements and interactions alongside any of the HTML elements. In this question, the values of template variables are calculated within the templateProcessing element, and the solution is different depending on the value of the variable iA; if iA=90, the right angle in the triangle makes the question easier.
The method for displaying the solution is as in the previous example; here we concentrate on the template elements within the SOLUTION feedbackBlock.
Using feedbackBlock to change the appearance of a question
In this example, the "feedback" forms part of the question. In adaptive questions, feedbackBlock and feedbackInline elements can contain interactions:
Monty Hall (Take 1)
This example takes a famous mathematical problem and presents it to the user as a game. The feedbackBlock element, in association with a number of outcome variables is used to control the flow of the story, from the opening gambit through to whether or not you have won a prize. When the story concludes you are asked about the strategy you adopted. Notice that the scoring for the question is based on the actual strategy you took (one mark) and your answer to the final question (two marks). If you choose a bad strategy initially you are always punished by losing the game. If you feel that this is cheating take a look at a more realistic version of the same question which combines adaptivity with the powerful feature of item templates: Monty Hall (Take 2).
Figure 3.20 Monty Hall First Attempt (Illustration)
Figure 3.21 Monty Hall Second Attempt (Illustration)
Figure 3.22 Monty Hall Third Attempt (Illustration)
Figure 3.23 Monty Hall Final Feedback (Illustration)
In the previous example, the default method of ending an attempt was used to progress through the item; however, sometimes it is desirable to provide alternative ways for the candidate to end an attempt. The most common requirement is the option of requesting a hint instead of submitting a final answer. QTI provides a flexible way to accommodate these alternative paths through the special purpose endAttemptInteraction.
Mexican President with hints
In this example, Mexican President is extended to provide both feedback and the option of requesting a hint. The endAttemptInteraction controls the value of the response variable HINTREQUEST - which is true if the attempt ended with a request for a hint and false otherwise.
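The hint mechanism might be sketched as follows; the button title is illustrative, while the variable name HINTREQUEST follows the prose.

```xml
<!-- Declared alongside RESPONSE: true if the attempt ended with a hint request -->
<responseDeclaration identifier="HINTREQUEST" cardinality="single" baseType="boolean"/>

<!-- Placed in the itemBody; activating it ends the attempt and sets HINTREQUEST to true -->
<endAttemptInteraction responseIdentifier="HINTREQUEST" title="Show Hint"/>
```

Response processing can then branch on the value of HINTREQUEST, setting a feedback outcome to show the hint rather than scoring the response.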
Item templates are a new feature of version 2 that allows many similar items to be defined using the same assessmentItem.
Digging a Hole
This example contains a simple textEntryInteraction but the question (and the correct answer) varies for each item session. In addition to the usual RESPONSE and SCORE variables, a number of template variables are declared. Their values are set by templateProcessing rules. Template processing is very similar to response processing: the same condition model and expression language are used. The difference is that templateRules set the values of template variables and not outcome variables. Notice that the declaration of RESPONSE does not declare a value for the correctResponse because the answer varies depending on which values are chosen for A and B. Instead, a special rule, setCorrectResponse, is used in the template processing section.
The randomInteger element represents a simple expression that selects a random integer from a specified range. The random element represents an operator that selects a random value from a container.
The itemBody displays the values of the template variables using the printedVariable element.
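The machinery described above can be sketched like this. The variable names A and B follow the prose; the ranges and the use of the product operator are illustrative assumptions, not the arithmetic of the actual item.

```xml
<templateDeclaration identifier="A" cardinality="single" baseType="integer"/>
<templateDeclaration identifier="B" cardinality="single" baseType="integer"/>
<templateProcessing>
  <setTemplateValue identifier="A">
    <randomInteger min="2" max="9"/>
  </setTemplateValue>
  <setTemplateValue identifier="B">
    <randomInteger min="2" max="9"/>
  </setTemplateValue>
  <!-- The correct response depends on the values just chosen, so it is set
       here rather than in the responseDeclaration -->
  <setCorrectResponse identifier="RESPONSE">
    <product>
      <variable identifier="A"/>
      <variable identifier="B"/>
    </product>
  </setCorrectResponse>
</templateProcessing>
```

The itemBody then shows the chosen values with, for example, `<printedVariable identifier="A"/>`.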
Sometimes it is desirable to vary some aspect of an item that cannot be represented directly by the value of a template variable. For example, in "Mick's Travels", the itemBody contains an illustration that needs to be varied according to the value chosen for a template variable. To achieve this three templateInline elements are used, each one enclosing a different img element. This element (along with the similar templateBlock) has attributes for controlling its visibility with template variables in the same way as outcome variables are used to control the visibility of feedback.
Item templates can be combined with adaptive items too.
Monty Hall (Take 2)
In Monty Hall (Take 1) we cheated by fixing the game so that the wrong strategy always lost the candidate the prize (and the first mark). In this version we present a more realistic version of the game using an item template. The same outcome variables are defined to control the story and the feedback given but this time a templateDeclaration is used to declare the variable PRIZEDOOR. The templateProcessing rules are then used to preselect the winning door at random making the game more realistic. The responseProcessing rules are a little more complicated as the value of PRIZEDOOR must be checked (a) to ensure that Monty doesn't open the prize winning door after the candidate's first choice and (b) to see if the candidate has actually won the "fantastic prize".
In this example, using the correct strategy will still lose the candidate the prize 1/3 of the time (though they always get the mark). Do you think that the outcome of the game will affect the response to the final strategy question?
The number divisors
This example makes extensive use of templates to test knowledge of calculus. It has modal feedback and includes some mathML.
Test of statistics functions
An example that uses templates extensively and uses many common numeric operators in the response processing. It has modal feedback and includes some MathML.
Product of a fraction by a number
This is another numeric example that makes use of templates, but it is notable for its use of templateConstraint to determine variables at runtime.
It is often desirable to ask a number of questions all related to some common stimulus material, such as a graphic or a passage of text. Graphic files are always stored separately and referenced within the markup using img or object elements, making them easy to reference from multiple items, but passages of text can also be treated this way. The object element allows externally defined passages (either plain text files or HTML markup) to be included in the itemBody.
The following two examples demonstrate this use of a shared material object.
Orkney Islands Q1
Orkney Islands Q2
Associating a style sheet with an item simply involves using the stylesheet element within an assessmentItem. The Orkney Islands examples above use this element to associate a stylesheet written using the CSS2 language. Notice that the class attribute is used to divide the item's body into two divisions that are styled separately, the shared material appearing in a right-hand pane and the instructions and question appearing in a left-hand pane.
Orkney Islands Stylesheet
The XHTML object element is designed to support the graceful degradation of media objects. The HTML 4.01 specification (the basis for [XHTML]) says "If the user agent is not able to render the object for whatever reason (configured not to, lack of resources, wrong architecture, etc.), it must try to render its contents."
Writing a Postcard (Take 2)
This example is the same as Writing a Postcard except that the picture of the postcard is provided in two different formats: first as an encapsulated PostScript (EPS) file and then, alternatively, as a PNG bitmapped image. Finally, if the delivery engine is unable to handle either of the offered image types, the text of the postcard can be displayed directly. Item authors should consider using this technique for maintaining images suitable for a variety of different output media, e.g., paper, high-resolution display, low-resolution display, etc.
The Orkney Islands Stylesheet illustrates the way styles can be applied to the XHTML elements that define the structure of the item's body. The class attribute can also be applied to interactions, and many of the common formatting concepts will still be applicable (font size, colour, etc.). Delivery engines may also use this attribute to choose between multiple ways of presenting the interaction to the candidate, though a vocabulary for class attributes on interactions is currently beyond this specification.
The QTI Questionnaire
This example illustrates an item that presents a set of choices commonly known as a Likert scale, used to obtain responses to attitude-based questions. The question is represented by a normal choiceInteraction, but the class attribute of the itemBody is set to likert to indicate to the delivery engine that it should use an appropriate layout for the question, e.g., using a single line for the prompt and the choices, with each choice at a fixed tab stop. By applying the style class to the whole of the item body, a delivery engine that renders multiple Likert items together might be able to choose a more compact rendering. Note that in this example the responseProcessing is absent; there is no right answer!
This simple example illustrates the inclusion of a mathematical expression marked up with MathML into an item.
The format attribute of printedVariable profiles the formatting rules described by the C standard. The following table illustrates the main features. Spaces are shown as the '_' (underscore) character to improve readability.
| Format specification | Input | Formatted output | Notes |
|---|---|---|---|
| %i | -987 | -987 | Simple signed decimal format. |
| %.4i | -987 | -0987 | Precision specifies the minimum number of digits in i, o, x and X formats and defaults to no minimum. |
| %.0i | 0 |  | When formatting zero with a precision of 0 no digits are output (i, o, x and X formats only). |
| %8i | 987 | _____987 | Field-width set manually to 8 results in five leading spaces. |
| %2i | 987 | 987 | Field-width set manually to 2 is insufficient so is ignored. |
| %-8i | 987 | 987_____ | Hyphen flag forces left field alignment, resulting in five trailing spaces. |
| %08i | 987 | 00000987 | Zero flag forces zero-padding, resulting in five leading zeros. |
| %+i | 987 | +987 | Plus flag prefixes positive numbers with a plus sign (excluding o, x and X formats). |
| %_i | 987 | _987 | Space flag prefixes positive numbers with a space (excluding o, x and X formats). |
| %o | 987 | 1733 | Octal format; number must be positive. |
| %#o | 987 | 01733 | # flag ensures at least one leading 0 for o format. |
| %x | 987 | 3db | Hex format (lower case); number must be positive. |
| %#x | 987 | 0x3db | # flag always displays leading 0x for x format. |
| %X | 987 | 3DB | Hex format (upper case); number must be positive. |
| %#X | 987 | 0X3DB | # flag always displays leading 0X for X format. |
| %f | 987.654 | 987.654000 | The precision specifies the number of decimal places to display for f format and defaults to 6. |
| %.2f | 987.654 | 987.65 | Precision set manually to 2. |
| %#.0f | 987 | 987. | # flag forces a trailing point for f, e, E, g, G, r and R formats. |
| %e | 987.654 | 9.876540e+02 | Forces use of scientific notation. The precision specifies the number of figures to the right of the point for e and E formats and defaults to 6. |
| %.2e | 987.654 | 9.88e+02 | Precision set manually to 2. |
| %E | 987.654 | 9.876540E+02 | Forces use of scientific notation (upper case form). |
| %g | 987654.321 | 987654 | Rounded to precision significant figures (default 6) and displayed in normal form when precision is greater than or equal to the number of digits to the left of the point. |
| %g | 987 | 987 | Trailing zeros to the right of the point are removed. |
| %g | 987654321 | 9.87654e+08 | Scientific form used when required. |
| %g | 0.0000987654321 | 9.87654e-05 | Scientific form also used when 4 or more leading zeros would be required to the right of the point. |
| %#g | 987 | 987.000 | # flag also forces display of trailing zeros (up to precision significant figures) in g and G formats. |
| %G | 0.0000987654321 | 9.87654E-05 | As for g but uses the upper case form. |
| %r | 0.0000987654321 | 0.0000987654 | The same as g except that leading zeros to the right of the point are not limited. |
| %R | 0.0000987654321 | 0.0000987654 | The same as G except that leading zeros to the right of the point are not limited. |
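Because the format attribute profiles the C rules, most rows of the table can be spot-checked with any C-style formatter. The sketch below (a side note, not part of the specification) uses Python's %-formatting, which follows the same conventions for these specifiers; the QTI-specific r and R forms, and the %#o row (which Python 3 renders as '0o1733'), are excluded:

```python
# Spot-check the C-style format rules from the table above.
checks = [
    ("%i",   -987,       "-987"),
    ("%.4i", -987,       "-0987"),        # precision pads to 4 digits
    ("%8i",  987,        "     987"),     # field width 8: five leading spaces
    ("%-8i", 987,        "987     "),     # '-' flag: left aligned
    ("%08i", 987,        "00000987"),     # '0' flag: zero padding
    ("%+i",  987,        "+987"),         # '+' flag: explicit sign
    ("% i",  987,        " 987"),         # space flag
    ("%o",   987,        "1733"),         # octal
    ("%x",   987,        "3db"),          # hex, lower case
    ("%#x",  987,        "0x3db"),        # '#' flag: 0x prefix
    ("%X",   987,        "3DB"),          # hex, upper case
    ("%f",   987.654,    "987.654000"),   # default precision 6
    ("%.2f", 987.654,    "987.65"),
    ("%e",   987.654,    "9.876540e+02"),
    ("%.2e", 987.654,    "9.88e+02"),
    ("%g",   987654.321, "987654"),
    ("%g",   987654321,  "9.87654e+08"),  # scientific form when required
]
for fmt, value, expected in checks:
    assert fmt % value == expected, (fmt, value, fmt % value)
print("all format checks passed")
```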
Specialized markup languages such as Chemical Markup Language [CML] exist for many domains that have a need for computer-aided assessment. For that reason, integrating such markup languages with QTI seems attractive. One such language, MathML, is supported within the item bodies of QTI 2, but no others are. The main reason is that MathML is natively supported by many web browsers, while most other specialized markup languages are not.
One other language that is widely supported by browsers is Scalable Vector Graphics [SVG]. While it is not supported in QTI 2 item bodies at this stage, it is easy to embed via HTML's 'object' tag. Domain-specific languages such as CML can often be rendered as SVG, thus providing a convenient way to integrate such material with QTI 2. At present, QTI's printedVariable can only be used within MathML and HTML. Other markup languages may be supported in a future version of QTI.
Another feature being considered for future inclusion is the use of SVG and other languages via HTML 5's 'embed' tag [HTML5]. The use of this tag is not currently supported either within or outside HTML's 'object' tag.
Some specific elements that aid accessibility have also been included in QTI v2.2 and are detailed in Section 7, Accessibility and Assessment.
Custom data-* attributes allow QTI to be extended to support additional features. When defining a custom attribute, it is important to use clear naming conventions that describe what the attribute does, in order to support interoperability. Below is an example item and details of how each custom attribute is intended to be used. It is recommended that details such as these be included in any documentation tied to your items.
Example of possible use cases:
details from example:
<gapMatchInteraction responseIdentifier="RESPONSE" shuffle="false" data-group-response="true">
data-group-response is intended to simplify scoring by grouping multiple responses with similar identifiers. This means C1 circle(3) indicates 3 instances of circle are needed in C1 for a correct response.
<gapText identifier="circle" matchMax="5" data-type="clickPop" data-target-container="C1">
data-type defines the type of functionality applied to a particular element. In this case it’s applied to gapText.
data-target-container indicates where an element will move to when clicked. Specifying the gap identifier allows multiple gapText elements to be assigned to their respective gap target.
<gap identifier="C1" data-type="gridContainer" data-centerpoint="left" matchGroup="max-5 min-0"/>
data-type as indicated above is used to define the type of functionality a particular element may have. In this case it defines the gap functionality as a grid container.
data-centerpoint is used to further define the functionality by indicating the alignment of options within a gap.
The HTML5 elements 'figure' and 'figcaption' allow images to have associated information. These elements also give screen readers the ability to provide the necessary context associated with the images for ARIA accessibility.
The example below represents the use of the 'figure' and 'figcaption' elements.
Figure 3.24 Castles (Illustration)
In the Castles example, a 'figure' element is used as the container of an 'img' element. Within the 'figure', a 'figcaption' element is used to describe the image with the text "Figure 1: A beautiful castle.".
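A minimal sketch of that structure (the image file name and alt text are illustrative, not the actual packaged content):

```xml
<figure>
  <img src="images/castle.png" alt="A beautiful castle"/>
  <figcaption>Figure 1: A beautiful castle.</figcaption>
</figure>
```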
The following HTML5 Audio and Video elements can be used during item authoring to deliver accessible audio and video with additional tracks such as captions, audio description, and alternate languages:
Audio Video Elements
BI-directional Text and Content
Item authors might want to specify the base directionality of their item contents. This is done with the 'dir' attribute, which enables text and content bi-directionality (BIDI). Although the [UNICODE] specification supports directionality of characters, the 'dir' attribute enables item authors to specify the direction not only of text but also of content, such as tables or interactions. The Content Model described by the QTI 2.2 Information Model obeys the bidirectional algorithm, the inheritance of text direction information, and the direction of embedded text as specified by the [HTML4] and [HTML5] specifications.
Composition of Water (Hebrew)
The following example is a Hebrew version of the Composition of Water item. An enclosing 'div' has a 'dir' attribute with a value of "rtl" (right to left). As a result, the 'rtl' directionality is in effect (by inheritance) for all nested block elements. The choiceInteraction and its content must then be displayed from right to left as well.
Figure 3.25 - Composition of Water, Hebrew version (Illustration)
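A sketch of the inheritance pattern described above (the identifiers and choice text are illustrative; in the real item the content is in Hebrew):

```xml
<div dir="rtl">
  <!-- 'rtl' is inherited by the interaction and all of its choices -->
  <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
    <simpleChoice identifier="ChoiceA">Hebrew text of the first choice</simpleChoice>
    <simpleChoice identifier="ChoiceB">Hebrew text of the second choice</simpleChoice>
  </choiceInteraction>
</div>
```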
Grand Prix of Bahrain (Hebrew)
The following example describes the use of the bdo class to turn off the bidirectional algorithm for given text portions (“F1”, “Rubens Barrichello”, “Jenson Button”, “Michael Schumacher”).
Figure 3.26 - Grand Prix of Bahrain, Hebrew version (Illustration)
QTI 2.2 introduces Ruby Markup support. Its intent is to provide a way to render small annotations alongside base text. As explained in depth by the W3C Ruby Markup and Styling article, “Ruby is used in East Asian countries to describe characters that readers might not be familiar with, or describe the meaning of ideographic characters”. [Ruby] Markup in QTI 2.2 adheres to the description of the W3C in [HTML5].
The item example below makes use of the ruby, rb and rt classes to annotate base text in paragraphs and choices.
Figure 3.27 - Hometown (Japanese)
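A sketch of the ruby markup pattern (the base text and its reading are illustrative, not taken from the Hometown item):

```xml
<p>
  My hometown is
  <ruby>
    <rb>東京</rb>
    <rt>とうきょう</rt>
  </ruby>.
</p>
```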
Interaction Mix (Sachsen)
A test with a representative mixture of widely used interaction types. The test has no feedback at either the test or item level, is in a complete package and is in German.
Simple Feedback Test
This example demonstrates a straightforward use of feedback at the test level. It is a complete package, with manifest and items.
Feedback Examples Test
In this example, the feedback at the end of the test depends on the scores obtained in the four sections of the test. There is extensive inline documentation in the test. The test is part of a complete package, with manifest and items.
Sets of Items With Leading Material
This example illustrates a test consisting of a set of three items (rtest01-set01.xml, rtest01-set02.xml, rtest01-set03.xml) sharing a single fragment of leading material (rtest01-fragment.xml). The fragment is included in each of the assessmentItems in the set by using the XInclude mechanism.
The submission mode is set to individual mode requiring the candidate to submit their responses on an item-by-item basis.
The navigation mode is set to linear mode restricting the candidate to attempt each item in turn. Once the candidate moves on they are not permitted to return.
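The inclusion itself might look like the following inside each item (a sketch; the exact position of the xi:include within the itemBody may differ in the sample files):

```xml
<itemBody xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- shared leading material pulled in from the fragment file -->
  <xi:include href="rtest01-fragment.xml"/>
  <!-- item-specific question and interaction follow here -->
</itemBody>
```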
Arbitrary Collections of Item Outcomes
This example illustrates the use of two assessmentSections (sectionA and sectionB) and one subsection (sectionB1). Both sectionA and sectionB are visible, meaning that they are identifiable by the candidate. Conversely, sectionB1 is not identifiable as a section.
The submission mode is set to simultaneous. The candidate's responses are all submitted together at the end of the testPart (in this case effectively meaning at the end of the assessmentTest).
The navigation mode is set to nonlinear mode allowing the candidate to navigate to any item in the test at any time.
The test uses weights to determine the contribution of the individual item scores to the overall test score. In this example the weight of 0 for item160 means that its score isn't taken into account when calculating the overall test score. The weight of 2 for item034 means that the score for item034 is multiplied by 2 when calculating the overall test score.
For the assessmentItems where no weight is given, a weight of 1.0 is assumed.
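In the test XML this takes the form of weight elements on the item references (a sketch; the weight identifier 'W' and the hrefs are illustrative):

```xml
<assessmentItemRef identifier="item160" href="item160.xml">
  <weight identifier="W" value="0"/>  <!-- excluded from the total -->
</assessmentItemRef>
<assessmentItemRef identifier="item034" href="item034.xml">
  <weight identifier="W" value="2"/>  <!-- score counted twice -->
</assessmentItemRef>
```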
Categories of Item
This example illustrates the use of categories of assessmentItems in the assessmentTest.
The submission mode is set to simultaneous. The candidate's responses are all submitted together at the end of the testPart (in this case effectively meaning at the end of the assessmentTest).
The navigation mode is set to nonlinear mode allowing the candidate to navigate to any item in the test at any time.
The test uses the category attribute to assign the items to one or more categories. The outcomeProcessing part of the example shows how the category is used to sum the scores of a selection of the questions.
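The summation can be sketched as follows (the category name and outcome identifier are illustrative, not taken from the sample file):

```xml
<outcomeProcessing>
  <setOutcomeValue identifier="SCORE_MATH">
    <sum>
      <!-- sum the SCORE of every item assigned to the 'math' category -->
      <testVariables variableIdentifier="SCORE" includeCategory="math"/>
    </sum>
  </setOutcomeValue>
</outcomeProcessing>
```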
Arbitrary Weighting of Item Outcomes
Specifying the Number of Allowed Attempts
This example illustrates the use of itemSessionControl to set the number of allowed attempts.
The example contains two testParts; the maximum number of allowed attempts for the first testPart is set to unlimited (maxAttempts = 0), and the maximum number of allowed attempts for the second testPart is 1.
The submission mode for both testParts is set to individual mode requiring the candidate to submit their responses on an item-by-item basis.
The navigation mode for both testParts is set to linear mode restricting the candidate to attempt each item in turn. Once the candidate moves on they are not permitted to return.
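A sketch of the relevant markup (testPart identifiers are illustrative; the section content is elided):

```xml
<testPart identifier="part1" navigationMode="linear" submissionMode="individual">
  <itemSessionControl maxAttempts="0"/>  <!-- 0 means unlimited attempts -->
  <!-- assessmentSection content here -->
</testPart>
<testPart identifier="part2" navigationMode="linear" submissionMode="individual">
  <itemSessionControl maxAttempts="1"/>  <!-- a single attempt per item -->
  <!-- assessmentSection content here -->
</testPart>
```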
Controlling Item Feedback in Relation to the Test
This example illustrates the use of itemSessionControl to set the item feedback in relation to the test.
The submission mode for the second testPart is set to simultaneous. The candidate's responses are all submitted together at the end of the testPart.
The navigation mode of the second testPart is set to nonlinear mode allowing the candidate to navigate to any item in the testPart at any time.
The showFeedback attribute of itemSessionControl is set to true, affecting the visibility of feedback after the end of the last attempt.
Allowing review and feedback in simultaneous mode means that the test is navigable after submission (in this case, in a nonlinear style).
The showSolution attribute of itemSessionControl is set to false, meaning the system may not provide the candidate with a way of entering the solution state.
Remember that the showFeedback attribute controls the assessmentItem feedback at the test level. It doesn't overrule the display of feedback as set inside the item.
Controlling the duration of an item attempt
This example illustrates controlling the duration of an item attempt (both maximum and minimum) in the context of a specific test.
The test shows the use of the timeLimits element to set the maxTime constraint for the complete test, a single assessmentSection and a single assessmentItem.
The example contains one assessmentItemRef (item034) which has a minTime of 3 minutes and a maxTime of 10 minutes. This means that candidates cannot progress to the next item in the test (item160) until they have spent 3 minutes interacting with it. Given that the candidate is limited to a maximum of 1 attempt at each item in the test, this effectively means that the candidate is prevented from submitting their responses until 3 minutes have passed. However, they must submit their responses before 10 minutes have passed. When the time limit is up the current responses would typically be submitted automatically.
It is up to the assessment constructor to make sure that the sum of all maxTime values in the assessment is less than or equal to the maxTime of the assessmentTest, and that the sum of all minTime values in the assessment is less than or equal to the maxTime of the assessmentTest.
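Note that QTI durations are expressed in seconds, so the 3 and 10 minute limits on item034 appear as 180 and 600 (a sketch; the href is illustrative):

```xml
<assessmentItemRef identifier="item034" href="item034.xml">
  <!-- candidate must spend at least 180s and at most 600s on this item -->
  <timeLimits minTime="180" maxTime="600"/>
</assessmentItemRef>
```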
Early termination of test based on accumulated item outcomes
This example shows how to provide support for early termination of a test based on accumulated item outcomes.
The outcomeProcessing for the test is invoked after each attempt and checks to see if the SCORE is greater than 3. If that is the case, the exitTest rule terminates the test.
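The rule can be sketched as follows (the accumulation of SCORE itself is omitted):

```xml
<outcomeProcessing>
  <outcomeCondition>
    <outcomeIf>
      <gt>
        <variable identifier="SCORE"/>
        <baseValue baseType="float">3</baseValue>
      </gt>
      <exitTest/>  <!-- terminate the test early -->
    </outcomeIf>
  </outcomeCondition>
</outcomeProcessing>
```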
Golden (required) Items and Sections
In assessmentSection B, we select 2 children using the selection element, but assessmentSection B1 is required (because of the required="true" attribute), so we effectively select B1 and one of the other three items. B1 is an invisible section, and the three items it contains will be mixed in with the other selected item when shuffling, resulting in an assessmentSection containing four items.
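A sketch of the section structure (titles and item references are illustrative):

```xml
<assessmentSection identifier="sectionB" title="Section B" visible="true">
  <selection select="2"/>
  <ordering shuffle="true"/>
  <!-- always selected because of required="true" -->
  <assessmentSection identifier="sectionB1" title="B1" visible="false" required="true">
    <!-- the three items of B1 go here -->
  </assessmentSection>
  <assessmentItemRef identifier="itemX" href="itemX.xml"/>
  <assessmentItemRef identifier="itemY" href="itemY.xml"/>
  <assessmentItemRef identifier="itemZ" href="itemZ.xml"/>
</assessmentSection>
```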
Branching based on the response to an assessmentItem
This example shows the support for branching based on the response to an assessmentItem. The example uses the preCondition element and the branchRule element.
The preCondition element sets the conditions that need to be met for an assessmentItem or assessmentSection to be displayed. In nonlinear mode, pre-conditions are ignored.
The branchRule element contains a rule, evaluated during the test, for setting an alternative target as the next item or section. As with preconditions, branch rules are ignored in nonlinear mode. The second branchRule element contains a special targetItem EXIT_SECTION which means exit this section of the test.
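A branch rule of this kind might be sketched as follows (identifiers are illustrative; the condition exits the section when the item was answered correctly):

```xml
<assessmentItemRef identifier="Q01" href="q01.xml">
  <branchRule target="EXIT_SECTION">
    <match>
      <variable identifier="Q01.RESPONSE"/>
      <correct identifier="Q01.RESPONSE"/>
    </match>
  </branchRule>
</assessmentItemRef>
```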
Items Arranged into Sections within Tests
This example shows the use of sections to group individual items.
Randomizing the Order of Items and Sections
This example shows the use of the ordering element to randomize the order of items and sections.
Basic Statistics as Outcomes
This example shows how basic statistics of a test are assigned to outcomes.
A number of built-in statistics (numberCorrect, numberIncorrect, numberPresented, numberSelected, numberResponded) are assigned to outcome variables.
In addition, the outcome variable "PERCENT_CORRECT" is calculated from two of these basic statistics.
Mapping item outcomes prior to aggregation
This example shows how item outcomes are mapped prior to aggregation.
The variableMapping element maps the item034.NOTA to the variable SCORE.
Example Item Statistics
This example demonstrates the construction of a usage-data file. When distributing usage data within a content package the usage-data should be stored in a separate file within the package and referred to in the manifest file by an appropriate cp:resource element. Note that references to the assessment items and other objects within the usage-data file itself are not considered to be dependencies of the resource. The resource type for usage-data files is imsqti_usagedata_xmlv2p1.
The packages listed below are included with the specification as sample resources available for download from the IMS website.
Simple Packaging Example
This example demonstrates how a single item is packaged using the techniques described in the Integration Guide. The manifest file demonstrates the use of a resource element to associate metadata (both LOM and QTI) with an item and the file element to reference the assessmentItem XML file and the associated image file.
Shared Image Example
This example demonstrates how multiple items are packaged. Note that where two items share a media object (such as an image) a dependency can be used to enable the object to be represented by its own resource element within the manifest.
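In the manifest this might be expressed as follows (a sketch of the pattern; identifiers, hrefs and resource types are illustrative):

```xml
<resource identifier="item-1" type="imsqti_item_xmlv2p2" href="item1.xml">
  <file href="item1.xml"/>
  <dependency identifierref="shared-image"/>
</resource>
<resource identifier="item-2" type="imsqti_item_xmlv2p2" href="item2.xml">
  <file href="item2.xml"/>
  <dependency identifierref="shared-image"/>
</resource>
<!-- the shared image is represented once, as its own resource -->
<resource identifier="shared-image" type="webcontent" href="images/shared.png">
  <file href="images/shared.png"/>
</resource>
```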
CC QTI package
This complete zipped package contains a linear test and a collection of the most widely used item types: multiple choice (single and multiple response), fill-in-the-blank and essay submission. The package is a transcoding of a Common Cartridge QTI 1.2 example, and therefore demonstrates IMS_CC 1.1 metadata usage in the manifest file, including the use of curriculum standards.
The original QTI 1.2 profile for IMS_CC that provided the basis for this CC QTI package was determined by defining an intersection of the assessment capabilities of the most widely used LMSs at the time. This CC QTI package was successfully imported by a number of QTI 2.x implementations in two interoperability tests. As such, it probably represents the most widely supported minimal subset of the QTI 2.x specification.
Note that the choice items in the CC QTI package (QUE_102010, QUE_102012, QUE_102013, QUE_104045, QUE_104047, QUE_104048, QUE_104049 and QUE_104051) provide a choice of two types of feedback: inline and modal.
The CC2_match.xml response template provides inline feedback and is the preferred response template. Among other things, it sets the FEEDBACK variable from the RESPONSE variable, so that the feedbackInline element with an identifier that matches the FEEDBACK value can be shown to the candidate. When the CC2_match.xml template is used, the FEEDBACKBASIC variable that is declared in all choice items is not used and won't be set, which means the 'correct' and 'incorrect' modal feedback elements won't ever be shown to the learner.
The CC2_match_basic.xml template is designed for those systems that do not support inline feedback. Instead of feedbackInline elements, the match basic template triggers the 'correct' and 'incorrect' modalFeedback elements via the FEEDBACKBASIC variable. It does this by comparing the candidate's RESPONSE value with the correct RESPONSE value from the item. If they match, the SCORE variable is set to the MAXSCORE value from the item and the FEEDBACKBASIC variable is set to 'correct'. If they do not match, the match basic template sets the FEEDBACKBASIC variable to 'incorrect'. When the CC2_match_basic.xml template is used, the FEEDBACK variable will never be set to the 'true' or 'false' value, which means that the inline feedback won't ever be shown to the learner.
The match basic template should only be used, in place of the CC2_match.xml template referenced in the items, by systems that cannot support inline feedback.
Support for Non-Manifest-Embedded Resource-specific Metadata
In QTI v2.1 and prior, resource metadata could only be embedded within the package manifest itself, i.e. as part of the manifest XML. QTI v2.2 allows resource-specific metadata to instead be contained within its own XML instance file, as an alternative to the manifest-embedded approach. Both methods are now allowed; however, it is recommended that only one of the approaches be used for any one resource. In the example below, a dependency on the new Item metadata instance has been added to the Item resource (line 0006).
Package with Response Processing Templates
The response processing templates feature of QTI allows common sets of response processing rules to be documented in separate XML documents and simply referred to by the items that make use of them. The mechanism for identifying the template to use is the template attribute on the responseProcessing element. This attribute is a URI, but it is not required to be a URL that resolves directly to the appropriate XML document. To help systems that support general response processing find the rule definitions required to support new templates an additional templateLocation attribute is provided which may be used to provide a URL that resolves to the template's XML document. If this URL is given relative to the location of the item then the template should be included in the same content package and listed as a dependency for each of the items that refer to it.
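For example, an item might reference the standard match_correct template like this (a sketch; the URI follows the usual QTI response processing template convention, and the relative templateLocation path is illustrative):

```xml
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p2/rptemplates/match_correct"
    templateLocation="../rptemplates/match_correct.xml"/>
```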
This example package demonstrates the use of a relative URL to refer to response processing templates listed as separate resources within the package as described above. Note that the technique used is similar to that for locating XML schemas from the URIs used to refer to their namespaces, however, XML schemas included in content packages to assist with validation should not be described as separate resources (or file dependencies) in the manifest file.
Package with Externally Defined Response Processing Templates
This example is the same as the one above (Package with Response Processing Templates) except that the response processing templates are not included. The templateLocation attribute is used with the absolute URLs of the templates.
Package with Test and Items
This example demonstrates how to package an assessmentTest together with the assessmentItems referenced by the test. Both the assessmentTest and the assessmentItems are represented by resource elements within the manifest. A dependency is used to represent the relationship between the assessmentTest and the individual assessmentItems.
BBQs test package
A package with a wide but representative set of items from UK Higher Education. The set exemplifies the union of the most commonly used item types in that sector, regardless of tool or format. Some items make use of math extensions.
A representative set of commonly used question types, geared for language learning. Partially in German.
English exercises II
A more advanced language learning test that demonstrates the use of rubricBlock and extendedTextInteraction for reading comprehension.
QTI 2.2 has several different methods for achieving accessibility, namely use of the APIP 1.0 markup and W3C web accessibility standards, including: HTML4, selected HTML5 elements, WAI-ARIA 1.0, SSML 1.1, PLS 1.0, and CSS3 Speech.
APIP 1.0 is fully contained within the QTI 2.2 interoperability standard, and full documentation for [APIP] can be found on the IMS website. APIP is an interoperability standard that enables the exchange of assessment content and a test taker’s accessibility needs by defining standard XML-based exchange formats. APIP also provides expectations of a computer-based assessment delivery system for the delivery of an assessment to a test taker. The assessment content, with associated accessibility information, can be efficiently exchanged between assessment applications and service providers without the loss of information or the need to “re-code” the content. APIP focuses on the needs of test-taking audiences, and facilitates the exchange of accessibility content created for the test-taking audiences, the delivery needs of those audiences, and the ability for the audience to inform the delivery system of their particular needs. It enables educators to make decisions that support the specific needs of individual test takers.
Valid QTI 2.2 content may contain web accessibility markup, including Accessible Rich Internet Applications [ARIA] 1.0 and Speech Synthesis Markup Language [SSML] 1.1 in the text-based HTML content. QTI also allows some additional HTML5 tags in the HTML markup, to convey the structural intent of the content to assistive technology test takers. Content instances may also reference CSS3 Speech and/or Pronunciation Lexicon Specification [PLS] files to instruct text-to-speech synthesis engines on pronunciation, emphasis, and timing.
Assessment delivery systems that use web delivery platforms can be made accessible to Assistive Technology audiences by using standardized accessible web markup.
To aid in the navigation of assessment content by assistive technology test takers, authors should try to mark up their content using structured, hierarchical markup, including heading levels (h1, h2, h3, etc.), ordered and unordered lists, and tables (when presenting data). Authors should avoid using styles or text formatting as the only way of indicating a hierarchy of information within the content. Additionally, authors should not use header elements for visual effect only. Header levels, and all structural markup, can be used as navigational aids by assistive technology users. If lists are used for layout purposes, the list items should include ARIA attributes that indicate their purpose (role).
In addition to the HTML4 elements and attributes allowed within QTI 2.1, QTI 2.2 includes the following additional allowable HTML5 elements:
See the W3C documentation ( http://www.w3.org/TR/html5/ ) for proper use of the above elements.
The example below illustrates the use of the <video>, <source> and <track> elements within the itemBody of a QTI assessment item.
The next example, shown below, illustrates the use of <article> for grouping that permits multiple access, <section> to group related content, <nav> to group skip links, <header> to group visual and non-visual semantic markers and <footer> for standard attribution and copyright information.
ARIA allows for the addition of attributes that aid user interaction, describe how elements relate to one another, reflect the current state of objects, and help control the user's focus within an application. While it is best practice to use structured HTML markup as the primary method for providing web accessibility, the ARIA attributes greatly assist authors in expressing the intended use of content.
ARIA attributes focus on 3 main areas, namely: the role an element or widget is intended to play within the page/application, the state of properties that the element/widget is currently in, and aiding the focus and order of the objects within the page/application. While use of ARIA attributes aids in creating accessible web pages, they should be implemented as part of a thorough web accessibility effort, which should also follow the Web Content Accessibility Guidelines [WCAG] 2.0.
Within QTI, it may be beneficial to add ARIA attributes within the content to indicate the specific purpose of the authored content. In delivering QTI content, ARIA attributes can play a significant role in making the content (and the testing interface) accessible for Assistive Technology audiences. Code that regulates the user interaction will need to allow for a user’s interaction with the content/interface, and update the ARIA attributes as required.
The full documentation for [WAI-ARIA] can be found at the W3C website. Select ARIA 1.0 attributes are permitted for use in QTI 2.2. Future IMS documentation will include best practices for the use of WAI-ARIA attributes in an assessment context.
Sample item using ARIA
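The fragment below is an illustrative sketch of how ARIA role and state attributes might appear inside item content, assuming a custom choice-style widget; the roles, ids, and prompt text are hypothetical and not drawn from the specification's own sample item.

```xml
<!-- Hypothetical choice widget using ARIA roles and states -->
<div role="radiogroup" aria-labelledby="prompt1">
  <p id="prompt1">Which planet is closest to the Sun?</p>
  <!-- delivery-engine script would update aria-checked and tabindex
       as the candidate moves focus and makes a selection -->
  <div role="radio" aria-checked="false" tabindex="0">Mercury</div>
  <div role="radio" aria-checked="false" tabindex="-1">Venus</div>
  <div role="radio" aria-checked="false" tabindex="-1">Mars</div>
</div>
```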
There are three different methods for providing pronunciation information in default content that can be consumed by Text-to-Speech (TTS) software (and, by extension, Screen Reader software) in APIP. They are: CSS3 Speech, the Pronunciation Lexicon Specification (PLS), and the Speech Synthesis Markup Language (SSML).
The CSS3 Speech and PLS methods allow referencing external files that define pronunciation rules to be applied when reading the assessment content. These pronunciation files may be shared by multiple assessment items in a test package, or used only by the specific assessment item that references them. CSS3 Speech and PLS markup cannot be embedded directly in an assessment item file.
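As a sketch of the CSS3 Speech approach, an externally referenced stylesheet might contain rules like the following; the class names are hypothetical, and the properties shown are drawn from the CSS Speech Module rather than from a QTI-specific profile:

```css
/* Hypothetical rules in an external stylesheet referenced by an item */
.question-prompt {
  voice-family: female;   /* preferred synthetic voice */
  voice-rate: slow;       /* slow down the reading rate */
  pause-after: 500ms;     /* pause before the answer options */
}
.abbreviation {
  speak-as: spell-out;    /* read letter by letter, e.g. "D-N-A" */
}
```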
An example of the use of [PLS] is shown below.
The use of the PLS lexicon reference is shown in line 0018. In this example, PLS supplies the correct pronunciation of 'Drosophila Melanogaster'. The content of the corresponding PLS file for this example is:
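A minimal PLS lexicon for this case might look like the following; the IPA transcription is an approximation supplied here for illustration, not copied from the specification's sample file.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
    xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
    alphabet="ipa" xml:lang="en-US">
  <lexeme>
    <grapheme>Drosophila Melanogaster</grapheme>
    <!-- approximate IPA transcription; verify against a dictionary -->
    <phoneme>drəˈsɒfɪlə ˌmɛlənəˈɡæstər</phoneme>
  </lexeme>
</lexicon>
```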
This PLS file must be included within the example package and identified within the corresponding resource descriptions.
SSML markup is embedded directly in the default content markup, so the pronunciation applies to the exact location of the markup in the particular assessment item that employs it. SSML can be used to indicate specific pronunciations, the location and length of pauses, volume, pitch, rate, etc., across different synthesis-capable platforms.
Example [SSML] code:
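The following is a generic SSML sketch of the features just listed (pronunciation, pauses, prosody); the text and IPA values are hypothetical, and in a real item the SSML elements would be namespace-qualified within the item's content.

```xml
<speak version="1.0"
    xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  Read the passage, <break time="500ms"/> then answer
  <prosody rate="slow" volume="loud">all</prosody> of the questions.
  The genus name is
  <phoneme alphabet="ipa" ph="drəˈsɒfɪlə">Drosophila</phoneme>.
</speak>
```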
The QTI schema file imports externally defined auxiliary schemas, including the built-in XML namespace, as described in this specification. The schema imports these from their published locations on the web using absolute URLs; as a result, some XML validation tools may not be able to validate QTI documents when working offline.
There has been some confusion as to whether or not XML schemas that refer to components of the built-in XML namespace should be allowed to provide an associated namespace prefix declaration. The xml:lang attribute used by QTI is an example.
This point was unclear in the first edition of the XML specification and was not resolved until the errata to that edition [XML_ERRATA] were published. The errata have since been superseded by the second edition [XML], which makes it clear that the declaration may be included, provided it is bound to the reserved prefix xml, but that it is not required.
Therefore the QTI schema includes the declaration in the root of the schema. Some tools will still not validate documents against schemas that contain this prefix; in that case, a local copy of the QTI schema with the following attribute removed from the schema element may need to be used instead:
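Given the surrounding discussion of xml:lang, the attribute in question is presumably the reserved XML namespace declaration:

```xml
<!-- declaration bound to the reserved prefix "xml" on the schema root -->
xmlns:xml="http://www.w3.org/XML/1998/namespace"
```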
The namespace identifier of the QTI schema has changed for version 2.2 of this specification to http://www.imsglobal.org/xsd/imsqti_v2p2. Use of this namespace is required when using any of the new elements defined by this version. Documents with a namespace of http://www.imsglobal.org/xsd/imsqti_v2p1 must still be supported.
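A document root using the v2.2 namespace might be declared as follows; the schemaLocation path and the identifier/title values are assumptions for illustration.

```xml
<assessmentItem
    xmlns="http://www.imsglobal.org/xsd/imsqti_v2p2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/imsqti_v2p2
        imsqti_v2p2.xsd"
    identifier="example-item" title="Example Item"
    adaptive="false" timeDependent="false">
  <!-- item content -->
</assessmentItem>
```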
The official IMS online [validator] is freely available on the IMS website for testing content.
IMS Question & Test Interoperability Implementation Guide
Jérôme Bogaerts (OAT), Thomas Hoffmann (ETS), Rob Howard (NWEA), Wilbert Kraan (JISC/CETIS), Mark McKell (IMS), Colin Smythe (IMS)
1 September 2015
This document provides an overview of the QTI specification.
1 September 2015
This document has been approved by the IMS Technical Advisory Board and is made available for adoption and conformance.
To register any comments or questions about this specification please visit: http://www.imsglobal.org/forums/ims-glc-public-forums-and-resources/question-test-interoperability-public-forum
The following individuals contributed to the development of this document:
Data Recognition Corp
Joseph St. George
Data Recognition Corp
BPS Bildungsportal Sachsen GmbH
Base Document 2.1
14 October 2005
The first version of the QTI v2.1 specification.
Public Draft 2.1
9 January 2006
The Public Draft v2.1 of the QTI specification.
Public Draft 2.1 (revision 2)
8 June 2006
The Public Draft v2.1 (revision 2) of the QTI specification.
Final Release v2.1
31 August 2012
The Final Release v2.1 of the QTI specification. Includes updates, error corrections, and additional details.
Final Release v2.2
1 September 2015
The Final Release v2.2 introduces new features and functionality for assessment and accessibility.
IMS Global Learning Consortium, Inc. ("IMS Global") is publishing the information contained in this IMS Question and Test Interoperability Implementation Guide ("Specification") for purposes of scientific, experimental, and scholarly collaboration only.
IMS Global makes no warranty or representation regarding the accuracy or completeness of the Specification.
This material is provided on an "As Is" and "As Available" basis.
The Specification is at all times subject to change and revision without notice.
It is your sole responsibility to evaluate the usefulness, accuracy, and completeness of the Specification as it relates to you.
IMS Global would appreciate receiving your comments and suggestions.
Please contact IMS Global through our website at http://www.imsglobal.org
Please refer to Document Name: IMS Question and Test Interoperability Implementation Guide
Revision: 1 September 2015