
1EdTech Accessible Portable Item Protocol™ (APIP™): Best Practice and Implementation Guide

1EdTech Final Release

 


 

Final Specification
Version 1.0

 

Date Issued:            31 March 2014

Latest version:         http://www.imsglobal.org/apip/

IPR and Distribution Notices

Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the specification set forth in this document, and to provide supporting documentation.

1EdTech draws attention to the fact that it is claimed that compliance with this specification may involve the use of Measured Progress U.S. patent 8,303,309. 1EdTech takes no position concerning the evidence, validity or scope of such patent rights. Measured Progress has assured 1EdTech that it is willing to license patent rights it owns or controls which would necessarily be infringed by any implementation of this specification to those licensees (Members and non-Members alike) desiring to implement this specification. The statement of Measured Progress to such effect has been filed with 1EdTech. Information regarding a Reasonable and Non-Discriminatory - Zero cost (RAND-Z) license may be obtained from Measured Progress at http://www.measuredprogress.org/ipr-apip.

Attention is also drawn to the possibility that some of the elements of this specification may be the subject of patent rights other than those identified above. 1EdTech shall not be responsible for identifying any or all such patent rights. 1EdTech takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on 1EdTech’s procedures with respect to rights in 1EdTech specifications can be found at the 1EdTech Intellectual Property Rights web page: http://www.imsglobal.org/ipr/imsipr_policyFinal.pdf.

Copyright © 2012-14 1EdTech Consortium. All Rights Reserved.

Use of this specification to develop products or services is governed by the license with 1EdTech found on the 1EdTech website: http://www.imsglobal.org/speclicense.html.

Permission is granted to all parties to use excerpts from this document as needed in producing requests for proposals.

The limited permissions granted above are perpetual and will not be revoked by 1EdTech or its successors or assigns.

THIS SPECIFICATION IS BEING OFFERED WITHOUT ANY WARRANTY WHATSOEVER, AND IN PARTICULAR, ANY WARRANTY OF NONINFRINGEMENT IS EXPRESSLY DISCLAIMED. ANY USE OF THIS SPECIFICATION SHALL BE MADE ENTIRELY AT THE IMPLEMENTER’S OWN RISK, AND NEITHER THE CONSORTIUM, NOR ANY OF ITS MEMBERS OR SUBMITTERS, SHALL HAVE ANY LIABILITY WHATSOEVER TO ANY IMPLEMENTER OR THIRD PARTY FOR ANY DAMAGES OF ANY NATURE WHATSOEVER, DIRECTLY OR INDIRECTLY, ARISING FROM THE USE OF THIS SPECIFICATION.

 

© 2014 1EdTech Consortium, Inc.
All Rights Reserved.

The 1EdTech Logo, Accessible Portable Item Protocol (APIP), and Question and Test Interoperability (QTI) are trademarks of the 1EdTech Consortium, Inc. in the United States and/or other countries.
Document Name:  1EdTech APIP Best Practice and Implementation Guide – Revision: 31 March 2014


1         Introduction

1.1       Scope and Context

The Accessible Portable Item Protocol™ (APIP™) is an interoperability standard that enables the exchange of assessment content and a test taker’s accessibility needs by defining standard XML-based exchange formats. APIP also provides expectations of a computer-based assessment delivery system for the delivery of an assessment to a test taker. The assessment content, with associated accessibility information, can be efficiently exchanged between assessment applications and service providers without the loss of information or the need to “re-code” the content. APIP is intended to enable assessment file exchange that serves ALL students, not only students with accessibility needs. APIP focuses on the needs of students, rather than assuming a particular physical or cognitive diagnosis prescribes the solution. It enables educators to make decisions that support the specific needs of individual students.

In order to achieve this goal, APIP identifies three major components that must work together in harmony. These are illustrated in Figure 1.1 and discussed below.

Figure 1.1 The APIP Major Components Set.


Accessible Assessment Content

By leveraging the existing 1EdTech QTI™ v2.2 assessment interoperability specification, APIP, in simplest terms, extends the default QTI item and assessment specifications to include alternate (or supplemental) content representations, content presentation sequences (known as inclusion orders), and information about companion materials or tools.

Test Taker Personal Needs and Preferences Profile

By collaborating with the 1EdTech Access for All (AfA) initiative, APIP has identified the test taker’s needs profile, or Personal Needs and Preferences (PNP). This profile contains the accessibility tool preferences and assessment session settings to provide the test taker with full access to the assessment content. The PNP provides information needed by an APIP delivery system to enable the various accessibility elements in the assessment content. By mapping the PNP to the access elements, all test takers can access the right content, in the right format, and at the right time. The PNP also provides accessibility tool preferences and assessment session settings that are to be provided by the delivery system’s user interface, without information provided by the content file(s).

Assessment Delivery System

By combining the accessible assessment content with the test taker’s Personal Needs and Preferences profile, the assessment system can deliver the appropriate testing experience for all test takers. While APIP is not a delivery system specification, it does identify the necessary features and functions that must be provided in order to support user profile access needs while providing alternate content sequences and representations. An APIP certified delivery system will be able to receive APIP content and PNP information, and accurately present the proper settings and content according to the test taker’s PNP profile. APIP is primarily focused on, though not necessarily limited to, online test delivery platforms.

To achieve accessible content interoperability, APIP host item banking systems will be able to import and export APIP formatted XML. This should, as much as possible, be automated so that manual interventions (proctor/assistant help) are not required.

In addition, each online delivery platform may provide for additional features and functions to be configured by the test content, definition, or other assessment metadata. This may include such items as navigation controls, tool access (e.g., toolbars), error messages, logos, etc. that are not provided by the APIP (or QTI) content specification.

While the APIP file format is the expected exchange format, it is not a requirement of implementing delivery systems to use the native APIP file formats during actual operational delivery. Implementing delivery systems may transform the provided content to proprietary formats that may aid in the efficient delivery of content or user accessibility need information to test takers, provided the accessible content information supplied within the APIP access element data is available to test takers.

This document contains the practices for the use of the APIP standard. In this document, practices can be labeled as recommended, best, or enforced practice. These practices are defined as follows:

·         Recommended is a practice that will likely be the more widely used approach by the assessment community, and interoperability is much more likely using the recommended practice.

·         Best is a practice that is the intended technical approach, but may not be enforceable through the certification process (using 1EdTech validation rules). Interoperability may not be possible if best practices are not implemented in a vendor’s system.

·         Enforced is a practice that must be implemented in order to pass through the 1EdTech validator, and is required to achieve APIP certification.

The APIP content standard provides assessment programs and question item developers a data model for standardizing the file format of digital test items. When applied properly, APIP accomplishes two important goals. First, the APIP content standard allows digital test items to be ported across APIP compliant test item banks. Second, APIP provides a test delivery interface with all the information and resources required to make a test item accessible for test takers with a variety of accessibility needs.

1.1.1           Relationship to 1EdTech Standards

APIP builds on the 1EdTech Question and Test Interoperability (QTI) v2.2 [QTI, 2012a] and the 1EdTech Access For All Personal Needs & Preferences (AfA PNP) v2.0 [AfAPNP, 10] specifications. A number of extensions have been added to both of these specifications [APIP, 14c], [APIP, 14d].

1.1.2           Other Related Standards

APIP is expected to complement, not supersede or replace, other US-based or international accessibility standards. APIP is an assessment content interchange standard that provides for robust extensions to assessment content in support of accessibility options within the assessment delivery system. In contrast, APIP is not a specification for assessment delivery system accessibility functionality. Suppliers of assessment delivery systems will articulate how their platform meets the needs of their users, how the platform incorporates APIP content and functions, and in turn how it satisfies the Section 508 Amendment to the Rehabilitation Act of 1973 or other internationally accepted standards, such as the Web Content Accessibility Guidelines (WCAG) 2.0 (http://www.w3.org/TR/WCAG), that are applicable for a specific program.

Assessments, and more specifically high-stakes assessments, present unique accessibility challenges that are unlike those of most traditional web applications. For example, online assessments must ensure that certain system behaviors are consistent across platforms regardless of the software each system may be running. Comparability concerns, even for accessibility options, should be considered when designing assessment interfaces. Assessment interfaces often provide more control over turning features on or off, based on user profiles or the type of assessment being administered, than is typical in a traditional browser interface. In addition, accessibility features may need to undergo research or usability studies to determine their effectiveness and how they may or may not impact comparability.

An assessment delivery system that implements all of the APIP standard’s access features could meet, and in some cases exceed, other industry accessibility standards. However, 1EdTech and the APIP compliance and certification process allow each supplier to specify which accessibility features have been implemented in its delivery platform. While the specific content provided for each accessibility feature will be consistent, there will be some variation in the specific accessibility implementations, and therefore the user experiences, across different APIP delivery systems. As the APIP compliance and certification process describes, documentation of specific access feature support can be provided to 1EdTech and its members for review.

1.1.3           Interoperability in APIP

The APIP standard is intended to foster interoperability of the content packages and user accessibility needs (PNP) files. APIP delivery systems are expected to support the content and user profile information, but are not expected to directly communicate with other APIP delivery systems. That is, file exchange is the primary method of interoperability. APIP delivery systems are expected to support the information supplied in the APIP exchange files, though they are not required to use those exchange files at the moment of delivery; they can elect to use proprietary or other delivery-focused formats during content rendering.

Within the content package exchange, there may be interoperability issues surrounding specific file formats (e.g., mime types) for some supporting media files or custom interaction code. The APIP standard allows for the use of any media type at this time, so interoperability between vendors may require agreement on the provision of specific file formats. Likely areas where that would occur within the content are pre-recorded files for use within the spoken access feature, pre-recorded video of sign language, or provision of a media file (audio or video) as part of an item stimulus or prompt. Additionally, APIP allows for the use of a custom interaction (a specific QTI interaction) within an item, which may use specific kinds of code to govern the user’s response interaction. That governing code may not be interoperable between an APIP authoring system and a receiving APIP delivery system. To achieve maximum interoperability between organizations for a specific program, the program will need to decide on the specific file formats, supported QTI features, APIP access features, and accessibility business rules specific to that particular program.

1.1.4           APIP Access Features

The first and current version of APIP describes support for the following access features:

·         Spoken (The delivery system will read text and/or describe graphics to test takers. Spoken is often used for students with reading disabilities, second language learners, or students with visual impairments.)
  • Text Only (read text aloud but do not describe graphics)
  • Text and Graphics (read text aloud and describe graphics)
  • Non-Visual (spoken text and graphics information optimized for test takers who cannot see any of the displayed content)
  • Graphics Only (describe graphics but do not read text aloud)
  • Directions Only (only content designated as directions will be read aloud)
·         Screen Reader Preferences (for test takers using text-to-speech capabilities provided by the delivery system)
·         Braille (information for a refreshable Braille display, including user preferences)
·         References to tactile media (raised line drawings, manipulatives) (When a tactile manipulative is available for test takers, the test will notify the test taker of the availability and identifying characteristics of the tactile material. Usually provided to, but not limited to, students with visual impairments.)
·         Sign Language (Provides a video in sign language alongside the default representation. Sign Language should only be provided to students who understand the particular sign language available for the specific assessment.)
  • American Sign Language (ASL)
  • Signed English
·         Item Translation (The entire content of the question is translated into the test taker’s first/primary language.)
·         Keyword Translation (Provides language translations for particular key words and/or phrases in the test taker’s first/primary language.)
·         Simple Language (Translation of the entire content into another version of the item that uses simpler language.)
·         Alternate Representation (Provides an alternative representation of important data representations such as graphics or tables. Provided to students who have difficulty decoding the default representation.)
·         Magnification, and magnification amount preferences (Increases the magnification of the screen. Usually used for students with low vision and visual impairments.)
·         Reverse Contrast (Inverts the contrast between the text color and background color: dark becomes light, light becomes dark, orange becomes blue. Most often used by students with visual impairments or reading/learning difficulties.)
·         Alternate text and background colors (Often used by students with visual impairments or reading/learning difficulties.)
·         Color tint overlay over the content (Places a transparent layer of color over the assessment content. Usually used by students with visual impairments or reading/learning difficulties.)
·         Custom Masking (Provides the ability to mask certain parts of the test interface or question. Commonly used by students with Attention Deficit Disorder (ADD) or other learning difficulties, but is also a common strategy for assisting lower-performing students during assessments.)
·         Answer Masking (Provides the ability to show or hide answer options for multiple choice questions. Commonly used by students with ADD or other learning difficulties, but is also a common strategy for assisting lower-performing students during assessments.)
·         Auditory Calming (Music or background noise is provided to the student during testing. Many students with ADD or other attention difficulties find that the background noise helps them focus on the test. It can also be useful for calming students who suffer from test anxiety.)
·         Additional testing time (Adds extra time, up to an unlimited amount, for the student to complete the assessment. This is often provided to students who need more time to process the test content, and is a common provision for students with a wide range of difficulties.)
·         Breaks (Permits the student to take additional breaks during the testing session. Additional breaks can reduce fatigue and/or increase focus on the test materials.)
·         Keyword Emphasis (Some words or phrases may be emphasized to focus students’ attention on particular operative words. Could be useful for lower-performing students who have difficulty focusing on the intent of the question.)
·         Line-by-line reading tool (Tracks lines in reading selections. Regularly provided for students with dyslexia or other reading tracking difficulties.)
·         Language Learner Guidance (Provides additional information about some words or phrases for students who have not yet mastered the language used in the test. While the guidance is provided in the same language as the test, it attempts to explain in simpler terms the concepts and/or words used in the test question. Commonly used by English Language Learner students.)
·         Cognitive Guidance (Provides additional information about some parts of the test content. Commonly provided to students who are often confused by assessment structures.)
To provide support for the access features listed above, APIP combines XML structures available in the user profile and the item content. The APIP Tagging Map in Appendix A details how each access feature is addressed in the user profile and/or content structures. Also see the APIP Terms and Definitions document for definitions of each of the above access features.

For some access features, the delivery system will first need to know that the test taker requires the access feature, and will then use the access information supplied in the APIP content. For example, a user profile might indicate that the user needs spoken text & graphics support, meaning that all text-based content is read aloud and all graphics are described. The APIP content provides the information to be supplied to that test taker, and the APIP delivery system then provides the necessary sound files (or text-to-speech capability) to the test taker, in the order specified in the content.

In other cases, the delivery system provides the needed access feature without specific information provided by the content. A good example is a student whose reading comprehension is dramatically better when the text and background colors of the test content are changed. The delivery system gives the student the ability to alter the presentation of the testing environment, without the need for any special coding in the content itself.

The APIP standard specifies critical features both for assessment content and for personal needs (e.g., PNP) but intentionally offers limited guidance relative to assessment delivery systems. Accessible online delivery is still in its infancy, and developers are innovating new approaches that may be beneficial to users with specific access needs across different computer devices. It is important that developers of assessment delivery systems as well as organizations that are seeking suppliers of assessment delivery systems consider issues that extend beyond those that are addressed by APIP v1. Among the many possible issues when delivering accessible online assessments are the following:

  • Determining how best to render content to individuals whose access needs, as documented in their PNP files, are the same but whose specific access preferences are different. Consider an example of two people, both of whose PNP files indicate that they benefit from using the spoken text & graphics access feature. One is blind. The other has difficulty decoding graphic information. The individual with difficulty decoding graphic information may wish to minimize the system’s presentation of auditory navigational cues in the assessment, because she can see and understand the navigational controls. On the other hand, auditory navigational cues are likely essential for the person who is blind.
  • Determining which modes of user interaction to support. APIP v1 does not require support for any specific modes of user interaction with the delivery system. Such modes, with respect to input devices, might include, for example, keyboard, mouse, touch screen, tab/enter device, and a switch mechanism (enter command and automatic tabbing). Yet for a delivery system to be accessible for a particular person, there needs to be at least one mode or method that is usable by that person. For a delivery system to be accessible to diverse users, it will likely need to support multiple modes of user interaction. For example, one user may be comfortable interacting with the assessment in a mode that requires the ability to use both mouse and keyboard. On the other hand, a person who is blind may not be able to use a mouse, and instead needs a mode that provides access to all functionality of the assessment via keyboard alone, or by input from a refreshable Braille display.
  • Determining how to improve the usability of delivery systems for specific audiences and settings. This would include, for example, how to make refreshable Braille presentations maximally helpful in real assessment settings. In one setting, the student may want the refreshable Braille presentation to be accompanied by the spoken feature, and in another the student may not want the spoken feature. In a case where the student has residual sight, it may be important to have the default (i.e., visual) content highlight text in synchronization with the spoken access feature. The visual highlighting of default content, or even a separate visual text rendering of what is being presented via refreshable Braille, may also be important when a sighted assessment proctor needs to verify the information that the test taker is receiving.

 

1.1.5           Kinds of Packages

APIP (as part of QTI) provides methods to bundle content and additional assessment structures using content packages and manifests. Depending on the requirements for each specific partner exchange, content may be packaged in different ways. For example, if the partner exchange covers only the item content itself and does not include any test form or section specific information, the individual APIP item files with associated content files (e.g., audio/video files) would be sufficient. Alternatively, if the partner exchange covers both the items and the associated structures of the assessments (e.g., test forms), then additional section and content manifest information will be required.

APIP content packages include a package manifest, which details dependent resources for the test or item, the resources referenced, as well as other assessment meta-data. Section 6 describes how to package APIP content for transfer.

Test taker PNP files for APIP v1 do not impose a packaging structure for multiple user profiles. It is likely these user profiles will be exchanged and stored within larger student rostering systems. There is also the capability to provide multiple profiles in a single PNP document, as demonstrated in Section 5 of this document.

1.1.6           Future Scope

APIP v1 purposefully limits the scope of its specification to the online administration of accessible tests. It includes many, but not all, of the access features (or test accommodations) commonly administered during testing. It does not consider other test administration factors that may be stored in other student roster systems, such as specific physical environment settings. For example, it has no way of indicating that a student should take the test in a room by themselves, or that the user needs to be given a computer screen of 51 cm (20 inches) or larger. APIP v1 is primarily concerned with the presentation of accessible information during a live assessment session. Future versions may include guidance on alternate response capabilities (e.g., speech-to-text, Braille input, etc.). Additionally, APIP is expected to continue as a standard, and to incorporate support for additional accessibility needs as they are more fully researched and understood. Work on APIP conformance will be an ongoing effort directed by the APIP Accredited Profile Management Group (APMG), incorporating feedback from the appropriate APIP Task Force and APIP End-User Group.

Appendix B lists a set of features included in the APIP v1.0 XSD that may be used by vendors, but should be considered experimental in the current version. These features may be modified or replaced in future versions. They are not discussed in the body of this document. The features are not included in any of the example files, nor are they tested in compliance and certification procedures.

1.2       Structure of this Document

The structure of the rest of this document is:

2.     Constructing an APIP Solution

An overview of how the set of examples was created and recommendations for how these can be used as best practice references;

3.     Annotated Standard QTI Examples

Presentation and description of several annotated examples of the use of the standard QTI v2.1 for assessment Items;

4.     Annotated APIP Examples

Presentation and description of several annotated examples of APIP Items;

5.     Annotated APIP PNP Examples

Presentation and description of several annotated examples of Personal Needs and Preferences (PNP) profiles;

6.     Supporting the Use-cases

Presentation and description of several annotated examples to show how APIP access features support the set of use-cases.

1.3       Related Documents

[AfAPNP, 10]                       1EdTech Access For All Personal Needs & Preferences Information Model v2.0, R.Schwerdtfeger, M.Rothberg and C.Smythe, Final Release, 1EdTech Inc., April 2010.

[APIP, 14a]                           Accessible Portable Item Protocol (APIP) Overview v1.0 Final Specification, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, 1EdTech Inc., March 2014.

[APIP, 14b]                           Accessible Portable Item Protocol (APIP) Technical Specification v1.0 Final Specification, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, 1EdTech Inc., March 2014.

[APIP, 14c]                           Accessible Portable Item Protocol (APIP) Technical Specification of New Features to QTIv2.1 v1.0 Final Specification, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, 1EdTech Inc., March 2014.

[APIP, 14d]                           Accessible Portable Item Protocol (APIP) Technical Specification of New Features to AfA PNPv2.0 v1.0 Final Specification, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, 1EdTech Inc., March 2014.

[APIP, 14f]                            Accessible Portable Item Protocol (APIP) Use Cases v1.0 Final Specification, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, 1EdTech Inc., March 2014.

[APIP, 14g]                           Accessible Portable Item Protocol (APIP) Conformance and Certification v1.0 Final Specification, G.Driscoll, T.Hoffmann, W.Ostler, M.Russell and C.Smythe, 1EdTech Inc., March 2014.

[APIP, 14h]                           Accessible Portable Item Protocol (APIP) Terms and Definitions Specification v1.0 Final Specification, C.Smythe, T.Hoffmann and M.Russell, 1EdTech Inc., March 2014.

[PCI, 13]                                Portable Custom Interactions v1.0 Candidate Final Specification, M.Aumock, M.McKell, and P.Spruiell, 1EdTech, Inc., April 2013.

[QTI, 2012a]                        1EdTech Question & Test Interoperability Assessment Test, Section and Item Information Model v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

[QTI, 2012b]                        1EdTech Question & Test Interoperability XML Binding v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

[QTI, 2012c]                         1EdTech Question & Test Interoperability Implementation Guide v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

[QTI, 2012d]                        1EdTech Question & Test Interoperability Conformance Guide v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

[QTI, 2012e]                         1EdTech Question & Test Interoperability Metadata and Usage Data v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

[QTI, 2012f]                         1EdTech Question & Test Interoperability Migration Guide v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

[QTI, 2012g]                         1EdTech Question & Test Interoperability Overview v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

[QTI, 2012h]                        1EdTech Question & Test Interoperability Results Reporting v2.1 Final Specification, S.Lay and P.Gorissen, 1EdTech Inc., August 2012.

1.4       Acronyms

AfA PNP                Access for All Personal Needs & Preferences

APIP                       Accessible Portable Item Protocol

CP                           Content Packaging

IEEE                      Institute of Electrical and Electronics Engineers

1EdTech           1EdTech Consortium

LOM                      Learning Object Metadata

MC                         Multiple Choice

MC-MR                 Multiple Choice Multiple Response

MC-SR                  Multiple Choice Single Response

PNP                        Personal Needs & Preferences

PSM                       Platform Specific Model   

QTI                         Question & Test Interoperability

T/F                          True/False

XML                      Extensible Markup Language

XSD                        XML Schema Definition

 

2         Constructing an APIP Solution

Before you can tailor the delivery of accessible content to a test taker (also known as the online assessment user), the delivery system will need to be provided with two important pieces of information: what the user needs (their PNP profile) and the test/item content. Supplying only one of those pieces might result in not understanding the needs of the test taker, or not having the correct content to present to the test taker. However, there are cases where the delivery system is expected to provide access to the test taker without supplemental accessibility information provided within the XML content file. For example, a user profile might indicate that the user needs to have the content magnified; the delivery system would then provide the tools or means necessary for the test taker to magnify their content. In contrast, for a test taker who needs American Sign Language (ASL), the user profile would indicate the need, and the content would contain supplemental information providing the test taker with a video that signs the item’s content.

For an overview of each access feature, and how the PNP tags relate to the content tags, see Appendix A: APIP Tagging Map. Full definitions of each access feature are available in the APIP document 1EdTech Accessible Portable Item Protocol (APIP) Terms & Definitions Version 1.0  [APIP, 14h].

2.1       Authoring User Profiles

If a test taker needs a specific access feature during an assessment, that access feature would be listed in their PNP profile. For the majority of test takers today, no access features or session settings would be listed. It is acceptable to have a PNP with no features specified; this confirms to the delivery system that no additional access features or session settings need to be provided during the assessment session.

When an access feature is listed in a test taker’s PNP, the default assumption is that the access feature should be provided. Specifically, the assignedSupport tag is required for each access feature, and its default value is always true. Use of the false value for the assignedSupport tag is allowed, and confirms that the student should NOT be provided the access feature. This gives the student records system two ways to withhold an access feature from the test taker: remove the access feature tag entirely from the profile, or set the assignedSupport tag to false.

Most access features also have an activateByDefault tag, which usually defaults to true. activateByDefault indicates that when an assessment session begins for a test taker, the access feature is provided immediately, as opposed to merely being available as an option. For example, a test taker requiring magnification would start their assessment with the content and/or interface magnified. Not all access features lend themselves to activation when the assessment is initiated, such as masking (access feature C2). By default, masking is not activated when the assessment session begins, as it may confuse or disorient the test taker and interfere with knowing how to navigate through the test. The masking option is, however, available for the test taker to activate at a moment of their choosing.
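As a minimal, illustrative sketch of these two tags (the <apip:magnification> feature container shown here is hypothetical and namespaces are omitted; see Section 5 for complete, validated PNP examples):

<accessForAllUser>
  <!-- Hypothetical feature container; the exact element name and any required siblings come from the APIP PNP binding. -->
  <apip:magnification>
    <!-- The feature is assigned to the test taker... -->
    <apip:assignedSupport>true</apip:assignedSupport>
    <!-- ...and is activated as soon as the assessment session begins. -->
    <apip:activateByDefault>true</apip:activateByDefault>
  </apip:magnification>
</accessForAllUser>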

Some features do not have an activateByDefault tag, like breaks (access feature C5). This is because the feature isn’t something the test taker turns on or off during the assessment. It is a condition of the assessment session generally. The test taker can take breaks during the assessment, or is allowed more time to take the assessment. The feature would never go away during the session.

The spoken access feature contains an additional tag in the PNP called readAtStartPreference. By default, this is set to true, and means that when the spoken access feature is active in a delivery system, and the test taker first encounters assessment content, the content will be spoken aloud to them, from the beginning to the end of their default inclusion order (see Section 2.2.9). Some users prefer to request the content be spoken to them only at a time of their choosing, which could be stored as <apip:readAtStartPreference>false</apip:readAtStartPreference> in their PNP. The delivery system should then have a method for letting that user begin playback of the content.

When a profile indicates screenReader preference settings for a test taker, it should also indicate the related userSpokenPreference attributes, so the delivery system is clear as to which inclusion order to present to the test taker. The Access for All (AfA) screenReader vocabulary includes a usage tag with a vocabulary of required, preferred, optionally use, and prohibited, which is functionally not relevant in the assumed assessment context of APIP. The userSpokenPreference attributes will indicate whether or not the access feature should be assigned to the test taker, and the screenReader preference settings will indicate specific preferences. A delivery system could make use of the usage tags, but at this time there is no APIP recommended practice for that vocabulary.

The APIP PNP profile is designed to transfer test taker accessibility and assessment condition information specific to their assessment context. APIP PNP is not intended to store or transfer any other student information, though the bulk records file will require a student identifier for each record, and optionally allow for test assignment identifiers to be added to one or more records. While the APIP PNP profile could be used by an APIP delivery system, it is not the expectation that an APIP delivery system must use an APIP PNP XML during delivery. Delivery systems may use the test taker access and feature needs listed within the PNP file to make the necessary access features available to the test taker using its own proprietary formats or methods.

A test taker’s PNP represents the features which an online assessment will provide to a test taker during an assessment session. APIP is neutral about how or why certain features have been assigned to test takers. Specific assessment programs will likely have their own policies about which features are available to use for an assessment program, and may regulate which features are permitted to be used by test takers.

APIP PNP profiles can be provided in individual XML files (or fields) or within a bulk records file. In a bulk records file, individual records are separated by placing each record inside an accessForAllUserRecord node. Within the accessForAllUserRecord node, each record needs to include a personSourcedId node, as well as the accessForAllUser node, which contains the profile information. The record may optionally include one or more assignmentId nodes. If a test taker has different PNP settings for different assessment assignments (for example, different settings for a reading test and a math test), then you would provide a different record for each assignmentId. Within a bulk records file, there cannot be more than one PNP for a given personSourcedId and assignmentId combination; in other words, there can only be a single PNP for each test assignment. If the records do not include assignmentIds, then there can only be a single record for each unique personSourcedId within the file.
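The sketch below illustrates the bulk records structure described above. The outer wrapper element name and all namespaces are assumptions for illustration only; see Section 5 for complete, annotated PNP examples.

<accessForAllUserRecords>  <!-- assumed name for the bulk file wrapper -->
  <accessForAllUserRecord>
    <personSourcedId>STUDENT-0001</personSourcedId>  <!-- identifiers are illustrative -->
    <assignmentId>GRADE5-MATH</assignmentId>  <!-- optional; one or more may be listed -->
    <accessForAllUser>
      <!-- this test taker's PNP profile content -->
    </accessForAllUser>
  </accessForAllUserRecord>
  <!-- additional accessForAllUserRecord nodes, one per unique person/assignment combination -->
</accessForAllUserRecords>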

Annotated examples of PNP files can be found in Section 5: Annotated PNP Examples. Additional PNP examples can be found in the APIP examples folder.

 

2.2       Authoring Item Content

2.2.1           APIP Item Content Package Information

An APIP package is constructed according to the 1EdTech Content Packaging standard, and consists of a zip file containing (a) a manifest, (b) one or more assessmentItem XML files, and (c) item assets (the supplementary image and sound files).  If a package is meant to contain multiple items, then it will also contain either assessmentTest and/or assessmentSection XML files.  The package may optionally include XSD schema files for use in validating the package contents.

A package contains an item manifest, which holds information about the item’s resources and describes the relationships between the resources, including any dependencies. The package’s manifest also contains metadata about the content. Within the item package resources, there is at least one item XML file, which contains the default item content and its supplemental accessibility information. A more detailed description of package information is provided in Section 6.
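For orientation, the sketch below shows the general shape of a manifest for a single-item package. The resource type string, schema references, and file paths are placeholders only; Section 6 specifies the exact values and metadata required for APIP packages.

<manifest identifier="man001" xmlns="http://www.imsglobal.org/xsd/imscp_v1p1">
  <metadata>
    <!-- package and item metadata -->
  </metadata>
  <organizations/>
  <resources>
    <!-- the item resource lists the item XML file and every asset it depends on -->
    <resource identifier="item001" type="imsqti_apipitem_xmlv1p0" href="items/item001.xml">  <!-- type value is illustrative -->
      <file href="items/item001.xml"/>
      <file href="audio/item001_stem.mp3"/>
      <file href="images/item001_graphic.png"/>
    </resource>
  </resources>
</manifest>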

2.2.2           APIP Test Sections

APIP assessments are organized in the same manner as QTI 2.1 – by the use of assessmentTest and assessmentSection XML data well described in the standard QTI Information Model. assessmentSection elements are the structure for organizing assessmentItems into groups; they may contain multiple assessmentItems, other assessmentSections, and section-specific content in the form of rubricBlocks. assessmentTest elements may contain one or more assessmentSections, and provide additional navigation-controlling and test-scoring information. assessmentSections may be included in the package in one of two ways – in their own files or wrapped inside assessmentTest files. Specific construction of Test Sections is fully described in Section 6.
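As an illustration, a skeletal assessmentTest containing a single assessmentSection might look like the following. Identifiers, titles, and file paths are placeholders; note that in the QTI binding the sections sit inside a testPart element.

<assessmentTest xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1" identifier="test001" title="Sample Test">
  <testPart identifier="part1" navigationMode="nonlinear" submissionMode="individual">
    <assessmentSection identifier="section1" title="Section 1" visible="true">
      <!-- section-specific directions delivered as a rubricBlock -->
      <rubricBlock view="candidate">
        <p>Answer every question in this section.</p>
      </rubricBlock>
      <!-- each item is pulled in by reference to its own XML file -->
      <assessmentItemRef identifier="item001" href="items/item001.xml"/>
      <assessmentItemRef identifier="item002" href="items/item002.xml"/>
    </assessmentSection>
  </testPart>
</assessmentTest>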

2.2.3           APIP Item Variants

In an APIP content package, more than one variant of an item may exist. Some access features might require the delivery system to provide an entirely different representation of the default content in the original item. For example, an original item presented in English may have a translated Spanish version available to those students who need the test content delivered in Spanish. Both the English and Spanish versions of the item are known as variants. Each variant of an item has its own accessibility information coded within its item XML file. For example, to support spoken accessibility needs for both the English and Spanish test takers, the original English variant will have spoken access feature information in the apipAccessibilityInfo section of its XML file, and the Spanish variant (in a different XML file) will have its spoken access feature information within the apipAccessibilityInfo section of its own XML code. By separating the representations into different variants, there is less confusion about the accessibility information available for the presented representation.

In APIP v1, item packages would contain more than a single variant for an item only if one or more Item Translations or Simple Language versions were required. If these access features are not offered to test takers, an APIP item package is expected to contain only a single variant. A full description of packaging variants is provided in Section 6.

2.2.4           Parts of an APIP Item Content XML File

The APIP standard builds on the 1EdTech QTI v2.2 specifications. The ‘Q’ or question portion of the QTI specification is held within an APIP Item XML file. APIP, in simplest terms, extends the QTI item representation to include alternate content representations, content presentation sequences (known as inclusion orders), and information about companion materials. These content extensions will enable a test delivery engine to tailor the presentation of items to meet the specific access needs of each individual test taker, as provided in the test taker’s PNP. Figure 2.1 illustrates the key structures of an assessment item using QTI and the APIP extensions.

Figure 2.1  Key QTI Structures and APIP Extensions



The default QTI representation of the item provides the content to be presented to a test taker with no defined access needs. Traditionally, this default content defines the original form of the item developed for the general population of test takers, accessed by visually reading or viewing content (or in some cases viewing/listening to time-based media). Typically, the “default” content includes text, graphics, and/or tables that form the item that would be presented to a student who does not have any defined access needs. However, default content might also include other media elements such as sound files or movies intended to be presented as part of the item for the general population of test takers.

The default content, content displayed to all test takers regardless of their access needs, is included in the item’s XML file, using XHTML. It also contains code that supplies item identification, response information, and several other item characteristics, available through the QTI 2.2 standard. Section 3 supplies annotated examples of QTI 2.2 items. These are presented without the APIP accessibility extensions.

In APIP, the content file can provide alternate content representations of items, or supplemental content, for students with specific access needs. In general, when all of the original default content is replaced by content that is presented instead of the default content (not presented simultaneously), another full APIP representation (which includes its own accessibility information) of the item can be provided. QTI structures support bundling the multiple QTI item representations in the test section manifests, known as variants. As an example, a full translation of an item that replaces all text, graphics, or other content in another language likely results in another full APIP representation of the item as alternate content intended for specific audiences who will not receive the original, default representation.

In contrast, for many access needs, improved access does not solely require content to be replaced, but instead requires the presentation of information that supplements default content, or is presented simultaneously with the default content. As an example, text displayed as part of default content might be presented in spoken, Braille, or signed representations. When doing so, the default content will remain displayed to the test taker and the additional content (spoken, Braille, or sign) will be presented as a simultaneous, parallel representation. Similarly, changes intended to assist the test taker in identifying important aspects of default content present the default content along with supplementary information (e.g., emphasizing key words, translations or definitions for key words, guidance that points the test taker to key information, etc.).

Other QTI parts of an assessment item include: response declaration, outcome declaration, style sheets, and response processing. Brief descriptions are provided below; for complete descriptions of the QTI structures, see the QTI 2.1 documentation. A minimal item skeleton illustrating how these parts fit together follows the list.

  • Response variables are declared by response declarations and bound to interactions in the item body.
  • Outcome variables are declared by outcome declarations. Their value is set either from a default given in the declaration itself or by a response rule during response processing.
  • Style sheets can optionally be included, and help inform the intended formatting for the item’s default content.
  • Response processing provides a method for identifying how the given response should be handled. Rules and templates may be internally or externally defined.
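As referenced above, the following minimal choice-interaction item sketches how these parts fit together. The identifiers, content, and choices are placeholders, and the response processing uses the standard match_correct template.

<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1" identifier="sample001" title="Sample Item" adaptive="false" timeDependent="false">
  <!-- response declaration: declares the RESPONSE variable bound to the interaction below -->
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse>
      <value>ChoiceA</value>
    </correctResponse>
  </responseDeclaration>
  <!-- outcome declaration: SCORE is set during response processing -->
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
  <itemBody>
    <p id="p1">Sigmund Freud and Carl Jung both belong to which school of psychology?</p>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <simpleChoice identifier="ChoiceA">Psychoanalytic</simpleChoice>
      <simpleChoice identifier="ChoiceB">Behaviorist</simpleChoice>
      <simpleChoice identifier="ChoiceC">Humanistic</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <!-- response processing: the standard template scores the response against correctResponse -->
  <responseProcessing template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
</assessmentItem>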

APIP supports the use of all QTI interaction types. However, the APIP compliance and certification levels target specific interactions that systems must support in order to achieve certification.

 APIP Entry conformance level interaction types:

  • Choice Interaction (Multiple Choice single choice, Select All That Apply, True/False)
  • Extended Text Interaction
  • Text Entry Interaction (an inline interaction)

APIP Core conformance level interaction types:

  • All Entry level interaction types
  • Hotspot Interaction (a sub-type of Graphic Interaction)
  • Match Interaction

APIP supports the use of Composite Items, that is, items that use more than one interaction within the same item. 1EdTech recommends the use of Portable Custom Interactions [PCI, 13] for technology enhanced items or item types that aren’t currently defined in QTI or APIP. APIP also supports the use of MathML, Feedback, and Shared Material Objects.

For more information on QTI conformance requirements, see the APIP Conformance and Certification [APIP, 14g] document.
 

2.2.5           Connecting Default Content to Access Elements

When supplementary information is provided to meet a specific access need, this information must be placed in the APIP accessibility information (<apip:accessibilityInfo>) component for that item. By placing identifiers within the default XHTML itemBody, APIP extensions can refer to portions of the default XHTML item body content. The XHTML 'id=' attribute is placed within the default XHTML item body content. Accessibility (or supplementary) information is stored within an APIP accessElement. In conjunction with the APIP accessibility content link 'qtiLinkIdentifierRef=' attribute, the XHTML content and the accessibility information in an accessElement can be linked. The optional XHTML 'id=' attribute can be placed on almost all of the XHTML content tags as well as the XHTML features in QTI v2.1; therefore any APIP compliant application must support all possible uses of the 'id' attribute. Figure 2.2 illustrates this relationship.

Figure 2.2 Linking Access Elements to Default Content


It is also important to note that, depending on the specific accessibility needs of the different types of test takers, the default content may require multiple alternative representations. In this case, multiple access elements may point to the same QTI default content identifier ‘id=‘.

Each access element is also uniquely identified using the ‘identifier=‘ attribute tag within the <apip:accessElement> node. That identifier may be referenced in an inclusion order (see Section 2.2.9).

Within an accessElement, the reference to specific default content takes place within a contentLinkInfo node. The access feature information is in the relatedElementInfo node (see Section 2.2.6). You must provide at least one contentLinkInfo node in an accessElement, but you may provide as many contentLinkInfo nodes as needed. That is, you can refer to a single portion of any part of the default content (provided it is within an element that has an id attribute), or to multiple parts of the default content.
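To make the linking concrete, the default content side might carry identifiers as in the minimal itemBody fragment below; the identifier p1 is the one referenced by the accessElement examples that follow.

<itemBody>
  <!-- the id attribute makes this paragraph addressable from accessElements -->
  <p id="p1">Sigmund Freud and Carl Jung both belong to the psychoanalytic school of psychology.</p>
  <!-- interactions and other default content follow -->
</itemBody>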

Below is an example of referencing all of the text within a single, uniquely identified text element in the default content. There is no information in the relatedElementInfo tag, for simplicity’s sake.
 

<apip:accessElement identifier="ae001">
  <apip:contentLinkInfo qtiLinkIdentifierRef="p1">
    <apip:textLink>
      <apip:fullString/>
    </apip:textLink>
  </apip:contentLinkInfo>
  <apip:relatedElementInfo></apip:relatedElementInfo>
</apip:accessElement>

 

To reference more than one portion of the default content, you would use additional contentLinkInfo nodes, as shown below.

<apip:accessElement identifier="ae001">
  <apip:contentLinkInfo qtiLinkIdentifierRef="p1">
    <apip:textLink>
      <apip:fullString/>
    </apip:textLink>
  </apip:contentLinkInfo>
  <apip:contentLinkInfo qtiLinkIdentifierRef="p2">
    <apip:textLink>
      <apip:fullString/>
    </apip:textLink>
  </apip:contentLinkInfo>
  <apip:relatedElementInfo></apip:relatedElementInfo>
</apip:accessElement>

There are a number of different ways of referencing only a part of a text string, including: <apip:characterStringLink>, which provides a method for supplying a starting and ending character number; <apip:wordLink>, which permits specifying a word within the text string by word count (a number); and <apip:characterLink>, which provides the ability to select specific character(s) by index number. As an example of specifying a portion of text in more than one way, assume we have a paragraph identified as <p id="p1"> with multiple sentences, where each sentence has its own identified span tag, <span id="a"> and <span id="b">. To provide accessibility information for the first sentence of the paragraph, the access element could refer to the entire first sentence via the <span id="a"> tag, or the access element could link to the <p id="p1"> object and then specify a reference to the first 39 characters of the text string within the paragraph using the <apip:characterStringLink> tag.
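As an illustrative sketch only (the attribute names used here to express the character range are assumptions; consult the APIP accessibility XSD for the exact binding of characterStringLink), the link to the first 39 characters of the paragraph might take a form such as:

<apip:contentLinkInfo qtiLinkIdentifierRef="p1">
  <apip:textLink>
    <!-- assumed attribute names; the APIP XSD defines the actual binding -->
    <apip:characterStringLink startCharacter="1" stopCharacter="39"/>
  </apip:textLink>
</apip:contentLinkInfo>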

2.2.6           Access Element Access Feature Information

An APIP accessElement is a container that provides one or more pieces of accessibility information that can be provided to a test taker. While a single access element can contain only one instance of a particular access feature, it can contain more than one access feature. These access features share the single reference, the link, to the default content. The specific types of access features that can be included in an access element are:

  • Spoken
  • Braille
  • Tactile reference
  • Signing: ASL
  • Signing: Signed English
  • Keyword Translation
  • Keyword Emphasis
  • Alternative Representation
  • Language Learner Guidance
  • Cognitive Guidance

In the example code below, access feature information is provided for spoken AND Braille test takers within the same access element. Note that it would also be acceptable to have the spoken and Braille features in different access elements; putting them in the same access element is simply more efficient, because both features reuse the same reference to the default content.

 

<apip:accessElement identifier="ae001">
  <apip:contentLinkInfo qtiLinkIdentifierRef="p1">
    <apip:textLink>
      <apip:fullString/>
    </apip:textLink>
  </apip:contentLinkInfo>
  <apip:relatedElementInfo>
    <apip:spoken>
      <apip:spokenText contentLinkIdentifier="spokentext001">Sigmund Freud and Carl Jung both belong to the psychoanalytic school of psychology.</apip:spokenText>
      <apip:textToSpeechPronunciation contentLinkIdentifier="ttsp001">Sigmund Freud and Carl Young both belong to the psycho-analytic school of psychology.</apip:textToSpeechPronunciation>
    </apip:spoken>
    <apip:brailleText>
      <apip:brailleTextString contentLinkIdentifier="brailleText001">Sigmund Freud and Carl Jung both belong to the psychoanalytic school of psychology.</apip:brailleTextString>
    </apip:brailleText>
  </apip:relatedElementInfo>
</apip:accessElement>

Since you can use only a single instance of the spoken access feature in an access element, if you want different information to be presented to different spoken audiences (TextOnly, TextGraphics, NonVisual), you would need to create a separate access element to contain the spoken information specific to each targeted audience. The different access elements would then be listed in the different audiences’ inclusion orders, as sketched below.
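A rough sketch of that separation follows. The audience-specific order element names shown here (textOnlyDefaultOrder, textGraphicsDefaultOrder) and the elementOrder binding are illustrative; Section 2.2.9 defines the actual inclusion order structures.

<apip:inclusionOrder>
  <!-- each audience-specific order lists its own access elements in presentation order -->
  <apip:textOnlyDefaultOrder>
    <apip:elementOrder identifierRef="ae001">
      <apip:order>1</apip:order>
    </apip:elementOrder>
  </apip:textOnlyDefaultOrder>
  <apip:textGraphicsDefaultOrder>
    <apip:elementOrder identifierRef="ae002">
      <apip:order>1</apip:order>
    </apip:elementOrder>
  </apip:textGraphicsDefaultOrder>
</apip:inclusionOrder>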

For guidance on best practices for specific access feature tags and their use within an access element, see the information below.

2.2.6.1           Spoken Access Feature

Within the <apip:spoken> node, which supports test takers who need information to be spoken to them, there are four main access feature tags:

  • audioFileInfo (optional)
  • spokenText (required)
  • textToSpeechPronunciation (required)
  • [SSML] (optional)

If this element is intended to provide support for any test taker requiring spoken support, APIP requires the inclusion of the spokenText and textToSpeechPronunciation information within the element.

The <apip:spokenText> tag contains a string of text that clearly states how the text should be presented when spoken (read aloud). This tag is a mandatory inclusion to support spoken accessibility. It is used to remove any ambiguity in the text, meaning it is intended to clarify exactly how the information is to be presented when spoken. It might also be used as the script for a human reader when producing a recorded file. For instance, you might want to clarify the reading of certain numbers. This is particularly important for numbers like “1998”, where you may want to ensure it is read as a year (nineteen ninety-eight) and not a sequence of digits (one, nine, nine, eight) or a single number (one thousand, nine hundred, and ninety-eight).
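For example, using the spokenText tag described above, the year reading could be made explicit (the content and identifier are illustrative):

<apip:spokenText contentLinkIdentifier="spokentext002">In nineteen ninety-eight, the town's population doubled.</apip:spokenText>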

The <apip:textToSpeechPronunciation> tag is used to specify a pronunciation for a text-to-speech (TTS) engine. All TTS engines need some help indicating how to pronounce certain words, or need proper names spelled phonetically. For the purposes of pre-generating a synthetic sound file, or generating the speech at the time of testing, delivery systems should use this textToSpeechPronunciation tag. The textToSpeechPronunciation string will usually match the spokenText string, but it is repeated for ease of access by systems processing the information. As an additional benefit, in the event that any provided pre-recorded audio files are missing, corrupt, or of an unusable format for the delivery system, new audio files could be generated prior to delivery, or generated by a TTS engine at the time of delivery. It should also be noted that each TTS engine has its own quirks about pronunciation (particularly for English), and the textToSpeechPronunciation text string used for one vendor may not work the same for another vendor. If there are known differences between vendors, the receiving vendor may wish to redefine the textToSpeechPronunciation information by applying its own pronunciation rules, using the spokenText string as the original source.

[Reserved for SSML use]

The <apip:audioFileInfo> tag contains information about the audio file that could be played for the test taker as a representation of the text or object it refers to. While a vendor may supply a single recording, it may also provide multiple recordings for use by test takers with different spoken preferences. For each recorded file, the mimeType attribute is required on the referenced audio file (<apip:audioFile contentLinkIdentifier="af001" mimeType="audio/mpeg">). Within the audioFile node, use <apip:fileHref> to provide the location of the file (within the item package). Note that it is a best practice that relative URLs referenced within an item resolve to extant files in the content package; this will likely be an enforced practice in future versions of the APIP standard. You may include the same recording in multiple file types by changing the mimeType and referencing a different sound file. Each of the different audio files should be referenced in its own audioFile node. There is no limit to the number of audio files listed within the <apip:spoken> node, but note that only a single sound file is intended to be delivered to a test taker for a single access element. The differences between the provided audio files within an element are meant to address specific test taker requirements and/or preferences, like speed or voice type, and/or to address differences in delivery system requirements (file mime types). The audio file restrictions will likely be an enforced practice in future versions of APIP.

For each recorded file, you may optionally provide the following tags (a combined sketch follows this list):

  • <apip:startTime> provides, in milliseconds, the point within the referenced audio file at which playback for this element should begin. A vendor may choose to use a single recording for a whole paragraph, or a whole item, and reference specific portions within that audio file. If startTime is not provided, playback defaults to the beginning of the audio file.
  • <apip:duration> provides, in milliseconds, the length of time the audio file should play for this element. If duration is not provided, the default is to play to the end of the audio file.
  • <apip:voiceType> takes the vocabulary of either Synthetic (computer generated) or Human (an actual person recorded the audio file). If no voice type is provided, the default assumption is that the audio file is Synthetic. However, if both Human and Synthetic files are available and the user has stated no preference between them, the Human audio file should be provided to the test taker.
  • <apip:voiceSpeed> indicates the rate of reading speed the recording is intended for; the vocabulary is Slow, Standard, or Fast. If no voiceSpeed is provided, the default assumption is that the audio file has a Standard voice speed. If no speech rate is provided in the test taker's PNP, the Standard speed should be provided. If only a single speed is available, provide that speed to the test taker (something is better than nothing). To be clear, voiceSpeed does NOT provide technical timing information about the audio file; rather, it indicates whether the words are read slowly, at a standard (or regular) rate (about 180 words per minute), or fast. Language learners may want the text read slowly, while blind users may want the text read at very fast speeds. The three speeds are provided so that delivery systems can offer different voice speeds when they are not capable of modulating (without distortion) the playback speed of a single, standard-speed recording.
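
As a combined sketch, assuming the optional tags nest within each audioFile node as described above (the file name, identifier, and timing values are illustrative), an audioFileInfo entry might look like:

<apip:audioFileInfo>
   <apip:audioFile contentLinkIdentifier="af001" mimeType="audio/mpeg">
      <apip:fileHref>audio/item15_sentence1.mp3</apip:fileHref>
      <apip:startTime>0</apip:startTime>
      <apip:duration>4500</apip:duration>
      <apip:voiceType>Human</apip:voiceType>
      <apip:voiceSpeed>Standard</apip:voiceSpeed>
   </apip:audioFile>
</apip:audioFileInfo>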

To make the spoken access feature available to the test taker, the accessElement MUST be listed in the specific user type’s inclusion order in either their …DefaultOrder, or their …OnDemandOrder.

2.2.6.2           Braille Access Feature

The <apip:brailleText> node is used to specify the text that will be displayed using a refreshable Braille display. This tag contains a single node, the <apip:brailleTextString>, where the text intended for use with a refreshable Braille display should be provided. Braille text needs to be provided for any element that will be delivered to a Braille user, though you might create specific elements that contain only Braille text. For example, you may have a reading passage with multiple paragraphs that you reference in a single accessElement, and provide the Braille string for the entire passage in that element. This might be different from the way you create accessElements to provide access for your spoken users. In order for this information to be available to the Braille user, the element MUST be listed in the inclusion order within the brailleDefaultOrder list.
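
A minimal sketch of an access element carrying only Braille text (the identifiers and wording are illustrative) might look like:

<apip:accessElement identifier="ae020">
   <apip:contentLinkInfo qtiLinkIdentifierRef="passage1">
      <apip:textLink>
         <apip:fullString/>
      </apip:textLink>
   </apip:contentLinkInfo>
   <apip:relatedElementInfo>
      <apip:brailleText>
         <apip:brailleTextString contentLinkIdentifier="brailleText020">The full text of the passage, as it should be sent to a refreshable Braille display.</apip:brailleTextString>
      </apip:brailleText>
   </apip:relatedElementInfo>
</apip:accessElement>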

2.2.6.3           Tactile Access Feature

The <apip:tactileFile> node is used to identify the tactile supports that will be available to the test taker separate from the online delivery system. The expectation is that the tactile file is an external, physical tactile representation, completely separated from the test taker’s computer experience. The information provided in the access feature is intended to assist the test taker in locating the proper tactile representation to use when responding to the current item. These tags do not contain the actual content that appears on the separate tactile supports but only how the test taker is to locate them during testing. The ability to define the content for the tactile supports may be included in future versions of APIP.

There are three main tactile support tags:

  • tactileSpokenFile
  • tactileSpokenText
  • tactileBrailleText

Use the <apip:tactileSpokenText> tag to specify how you want to describe the location of the tactile representation. Example:

<apip:tactileSpokenText>Use tactile sheet B to answer question fifteen.</apip:tactileSpokenText>

Use the <apip:tactileAudioFile> tag to reference the audio file to be played (spoken/read aloud). Use the <apip:tactileBrailleText> tag to provide the text string to be sent to a refreshable Braille display. As a best practice, these tactile audio files and Braille text strings should not be referenced in the Spoken or Braille inclusion orders. Delivery systems should supply the audio or Braille content to users whose PNP indicates the need for both the tactile feature and the spoken/Braille features.
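
As an illustrative sketch only, using the tag names as described above (the exact nesting and the file name are assumptions, not schema guarantees), a tactile entry might look like:

<apip:tactileFile>
   <!-- the audio file location below is a hypothetical example -->
   <apip:tactileAudioFile>
      <apip:fileHref>audio/tactile_sheetB.mp3</apip:fileHref>
   </apip:tactileAudioFile>
   <apip:tactileSpokenText>Use tactile sheet B to answer question fifteen.</apip:tactileSpokenText>
   <apip:tactileBrailleText>Use tactile sheet B to answer question fifteen.</apip:tactileBrailleText>
</apip:tactileFile>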

2.2.6.4           Sign Language Access Features

There are two different nodes that specify sign language information within an access element: <apip:signFileASL> and <apip:signFileSignedEnglish>. The signFileASL node is used to support American Sign Language (ASL) test takers; the signFileSignedEnglish node is used to support Signed English test takers. Each of these nodes contains the same sub-nodes that supply specific pre-recorded video file information. In either signing container node, specify a video file using the videoFileInfo tag, and include a reference to the file location using the <apip:fileHref> tag. In the videoFileInfo node, specify a single mimeType attribute to indicate which type of video format is used. Example: <apip:signFileASL><apip:videoFileInfo mimeType="video/mpeg" /></apip:signFileASL>. Only a single video file can be provided for each signing sub-node. To specify a starting time (in milliseconds), use the <apip:startCue> tag; this allows playback of the video to begin at the indicated startCue time. If a startCue time is not specified, playback defaults to the beginning of the video file (0 milliseconds). To indicate when playback should stop, specify the time in milliseconds (greater than the startCue time) using the <apip:endCue> tag. If an endCue time is not provided, the default ending time is the end of the video file.
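
For example, a sketch of an ASL entry with start and end cues (the file name and cue values are illustrative, and the placement of the cue tags inside videoFileInfo is an assumption based on the description above):

<apip:signFileASL>
   <apip:videoFileInfo mimeType="video/mpeg">
      <apip:fileHref>video/item15_asl.mpg</apip:fileHref>
      <apip:startCue>0</apip:startCue>
      <apip:endCue>12000</apip:endCue>
   </apip:videoFileInfo>
</apip:signFileASL>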

To make the sign language access feature available to the test taker, the accessElement MUST be listed in the specific audience’s inclusion order in either their aslDefaultOrder/signedEnglishDefaultOrder, or their aslOnDemandOrder/signedEnglishOnDemandOrder.

2.2.6.5           Keyword Translation Access feature

Use the <apip:keyWordTranslation> node for test takers who may benefit from having certain words translated into their native language, or another more familiar language. For those test takers, you can indicate that certain text (a word or words) has a translation in a specific language. The node carries the translated text and the language the translation is provided in. A definitionID attribute is used to identify the translation (in case you are reusing translations across content). Use the textString tag to supply the translation, and the language tag to indicate the language, using the ISO 639-2 standard. The translation should be provided at the request of the test taker. To provide a spoken audio file for this access feature, use a different accessElement to provide the spoken access feature information (sometimes called access to an access feature, see Section 2.2.8).
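
An illustrative sketch only (the placement of the definitionID attribute and the form of the language code are assumptions based on the description above):

<apip:keyWordTranslation definitionID="kwt001">
   <!-- hypothetical translation of the word "expression" for Spanish-language test takers -->
   <apip:textString contentLinkIdentifier="kwt001text">expresión</apip:textString>
   <apip:language>spa</apip:language>
</apip:keyWordTranslation>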

It is the expectation that the delivery system will provide indicators for the words that have translations when a test taker's PNP specifies they should receive the keyword translation access feature, in the language provided in the test taker's PNP.

2.2.6.6           Alternative Representation Access Feature

Use the <apip:revealAlternativeRepresentation> node for test takers who have difficulty decoding certain modes of test content (pie charts, bar charts, diagrams, etc.). To provide an alternative representation of the original mode, use the <apip:textString> tag and supply a text string describing the information presented in the graphic or diagram. These text access features should be provided at the request of the test taker, if the test taker's PNP specifies they should receive the access feature.
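
A minimal sketch (the identifier and wording are illustrative):

<apip:revealAlternativeRepresentation>
   <apip:textString contentLinkIdentifier="altrep001">A pie chart showing that 40 percent of students chose soccer, 35 percent chose basketball, and 25 percent chose tennis.</apip:textString>
</apip:revealAlternativeRepresentation>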

2.2.6.7           Keyword Emphasis Access Feature

Use the <apip:keyWordEmphasis> node for test takers who benefit from having certain words emphasized (bold, italic, colored background, or another highlight style) above and beyond the emphasis of the default content. For those users, you can indicate that certain text could have emphasis. Use the <apip:keyWordEmphasis/> tag. This is a closed tag; no further information need be provided. It is the expectation that the delivery system will identify which words need visual emphasis when a test taker's PNP specifies they should receive the keyword emphasis access feature.
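
For example, a sketch of an access element whose only feature is keyword emphasis, linked to a span holding the word(s) to emphasize (the identifiers are illustrative):

<apip:accessElement identifier="ae030">
   <apip:contentLinkInfo qtiLinkIdentifierRef="span3">
      <apip:textLink>
         <apip:fullString/>
      </apip:textLink>
   </apip:contentLinkInfo>
   <apip:relatedElementInfo>
      <apip:keyWordEmphasis/>
   </apip:relatedElementInfo>
</apip:accessElement>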

2.2.6.8           Guidance Access Features

Use the <apip:guidance> node for the Language Learner Guidance and Cognitive Guidance Access Features.

The <apip:languageLearnerSupport> node assists test takers who may need additional support with the test content because they are learning the language of the test. The support text is expected to be delivered in the same language as the default content; if the question is in English, the support text should also be in English. The text provided is specified in the <apip:textString> node. A test item may have one or more Language Learner Supports. To specify the order in which the supports should be revealed to the test taker (at the request of the test taker), use the <apip:supportOrder> tag and supply an integer. This access feature is intended only for test takers who have specified they should receive it in their PNP.
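
A sketch of a language learner support entry, assuming it nests inside the guidance node in the same way as the cognitive guidance example shown in Section 2.2.8 (the identifier and wording are illustrative):

<apip:guidance>
   <apip:languageLearnerSupport>
      <apip:supportOrder>1</apip:supportOrder>
      <apip:textString contentLinkIdentifier="lls001">In this question, "estimate" means to make a careful guess.</apip:textString>
   </apip:languageLearnerSupport>
</apip:guidance>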

The <apip:cognitiveGuidance> node assists test takers who may need additional support interpreting or understanding the test content, or for any other cognitive support need. The guidance text provided is specified in the <apip:textString> node. A test item may have one or more Cognitive Guidance Supports. To specify the order in which the supports should be revealed to the test taker (at the request of the test taker), use the <apip:supportOrder> tag, and supply an integer. This access feature is intended only for test-takers who have specified they should receive the access feature in their PNP.

2.2.7           Access Features the Delivery System Should Provide Without the Use of Access Element Information

Many of the access features available in APIP do not require any modification or clarification to the default content. An APIP delivery system is expected to provide an access feature or assessment session setting if the test-taker’s PNP requires it and the delivery system has been requested to provide the access need (see the Compliance and Certification document for required conformance features). Those access features include:

  • Magnification
  • Reverse Contrast
  • Alternate Text & Background Colors
  • Color Overlay
  • Masking
  • Auditory Calming
  • Additional Testing Time
  • Breaks
  • Line Reader

 

2.2.8           Accessibility for Access Features using the apipLinkIdentifierRef Attribute

The APIP accessibility information also supports access elements that provide accessibility for content supplied within one of the access feature nodes of another access element (also known as access for access features). As an example, an access element might include a cognitive guidance feature, which provides supplemental content for the test taker. This new content might also need accessibility information associated with it, like spoken or Braille support. To avoid confusion about which accessibility applies to the original default content and which applies to the new content supplied in the access element, accessibility information for the new content is moved to its own access element. A feature within an access element should never refer to another node within the SAME access element.

Because each access element can have multiple access features, each feature must also be uniquely identified with a contentLinkIdentifier attribute within the access element. An access element that provides accessibility information for another access feature then identifies the specific feature within that other access element by referencing the contentLinkIdentifier of the access feature. In an access element providing access for an access feature within another access element, the qtiLinkIdentifierRef attribute within the contentLinkInfo tag is replaced with an apipLinkIdentifierRef attribute. The apipLinkIdentifierRef points to the specific contentLinkIdentifier of the access feature in the other access element. Figure 2.3 illustrates this relationship.

Figure 2.3 Accessibility for Access Features

In the access element, use the unique contentLinkIdentifier value of the specific feature within the other accessElement to identify exactly which part of that element needs additional support. For example, a text string supplied for Cognitive Guidance might have a spoken access feature (an audio file that reads the string of text). The original access element that had the Cognitive Guidance would look like:

<apip:accessElement identifier="ae001">
   <apip:contentLinkInfo qtiLinkIdentifierRef="p1">
      <apip:textLink>
         <apip:fullString/>
      </apip:textLink>
   </apip:contentLinkInfo>
   <apip:relatedElementInfo>
      <apip:guidance>
         <apip:cognitiveGuidanceSupport>
            <apip:supportOrder>1</apip:supportOrder>
            <apip:textString contentLinkIdentifier="cgts001">Use the information in the table to help answer this question.</apip:textString>
         </apip:cognitiveGuidanceSupport>
      </apip:guidance>
   </apip:relatedElementInfo>
</apip:accessElement>

The access element that provides accessibility for the text string for a cognitive guidance support within the above access element would look like:

<apip:accessElement identifier="ae002">
   <apip:contentLinkInfo apipLinkIdentifierRef="cgts001">
      <apip:textLink>
         <apip:fullString/>
      </apip:textLink>
   </apip:contentLinkInfo>
   <apip:relatedElementInfo>
      <apip:spoken>
         <apip:spokenText contentLinkIdentifier="spokentext002">Use the information in the table to help answer this question.</apip:spokenText>
         <apip:textToSpeechPronunciation contentLinkIdentifier="ttsp001">Use the information in the tay-bul to help answer this question</apip:textToSpeechPronunciation>
      </apip:spoken>
   </apip:relatedElementInfo>
</apip:accessElement>

Note that the apipLinkIdentifierRef refers to the textString identifier (cgts001), NOT the accessElement identifier (ae001). Accessibility for access features should only go one level down from an access element that supports the default text. You should not have an access element that supports an access element that supports another access element.

By placing the spoken accessibility for the cognitive support text string in a different access element, we clarify that the spoken information relates to the text string inside the access element, and not the default content, which may be text that has its own spoken features.

Because the specific intended user is not explicit when creating accessibility for access feature content, as a best practice access elements that use the apipLinkIdentifierRef attribute in the contentLinkInfo node should NOT be included in a specific inclusion order. For example, though the sample code above provides the spoken access feature, the access element should not be put in a spoken inclusion order, because that spoken user may not also be a cognitive guidance user, and listing it could cause confusion for the delivery system during delivery. However, the delivery system should assume that if the user needs cognitive guidance support AND IS ALSO a spoken user (of any kind), the spoken support should be made available to that user. When you provide the text to the cognitive guidance user, the spoken information supporting that content should be made available to them. If the cognitive guidance user does NOT also have a spoken need, the spoken support should NOT be provided to that user.

It is also considered best practice that accessibility support for access feature content (not the default item content) not be broken into multiple access features. For example, if a cognitive guidance text string was 5 sentences long, the entire spoken support for that string should be provided in a single access element which references the complete 5 sentence string. Because the access element providing the spoken support for that string will not be listed in any inclusion order, a delivery system would not have specific instructions for the presentation order if the spoken support was split into different access elements. This may be an enforced practice in future versions of APIP.

2.2.9           Inclusion Orders

Inclusion order allows you to specify the order in which item content is presented to test takers with particular access feature needs. Order differences may occur for any number of reasons, including: an agreement among experts that users with specific needs perform better with a particular order; a preference of the authoring organization for how some content should be presented to certain audiences; or a reorganization of the content when presented in its accessible form (like an ASL paragraph that may change the order to follow a time sequence). No inclusion order is specified for the default content test taker, as it is expected they will choose how to decipher/decode the displayed information using visual cues only. Table 2.1 below lists the audiences for whom APIP provides an explicit inclusion order.
 


Table 2.1: Test Takers with Explicit Inclusion Orders

| Test Taker Access Feature Need | Delivery Expected Behavior | PNP 'user' tag | Corresponding Inclusion Order Tags |
|---|---|---|---|
| Spoken, Text Only | Text-based content will be read out loud to the test taker. Graphics will NOT be described to this audience. | userSpokenPreference: textOnly | textOnlyDefaultOrder, textOnlyOnDemandOrder |
| Spoken, Text & Graphics | Text-based content and text descriptions of graphics (including images, graphics, and tabular information) will be read out loud to the test taker. Graphic labels are available on demand. | userSpokenPreference: textGraphics | textGraphicsDefaultOrder, textGraphicsOnDemandOrder |
| Spoken, Non-Visual | All content will be read out loud to the test taker. It is not expected that the test taker would be able to see any of the content. There is no on-demand order for this audience. | userSpokenPreference: nonVisual | nonVisualDefaultOrder |
| Spoken, Graphics Only | Only descriptions of graphics are read out loud to the test taker. These descriptions are only available on demand, so there is no default order for this audience. | userSpokenPreference: graphicsOnly | graphicsOnlyOnDemandOrder |
| Braille user | All content will be provided by the Braille text string. It is not expected that the test taker would be able to see any of the content. There is no on-demand order for this audience. | braille | brailleDefaultOrder |
| American Sign Language | Text-based content will be signed to the test taker in ASL. | signingType: ASL | aslDefaultOrder, aslOnDemandOrder |
| Signed English | Text-based content will be signed to the test taker in Signed English. | signingType: SignedEnglish | signedEnglishDefaultOrder, signedEnglishOnDemandOrder |


The test takers listed in the above table can be thought of as specific 'audiences' for the presentation of their content. Spoken audiences (userSpokenPreference) are mutually exclusive: a test taker can only belong to one spoken audience at a time during an assessment session.

A test taker could be assigned a spoken access feature that is restricted to the directions of an assessment, rather than the questions (or reading passages) within the assessment. In such cases the test taker should have an <apip:directionsOnly>true</apip:directionsOnly> entry in their PNP. Those test takers should also have a userSpokenPreference supplied; if a userSpokenPreference is not supplied, the default preference is considered TextOnly. In version 1.0 of APIP, there is ambiguity about the order of the directions/instructions if there is more than one access element that refers to content designated as instructions in a rubricBlock (see Section 4.3). To reduce confusion, as a recommended practice, all instructional content for a specific audience should be contained within a single access element.

Inclusion orders are available for the access features listed in Table 2.1. As a best practice, only elements specifically intended for a test taker type (meaning the access feature is provided within that access element for that specific audience) should be listed in that audience's inclusion order. Future versions of APIP will require that access elements listed in an audience's inclusion order(s) include the access feature tags necessary to support the access need of that particular audience. For example, if an access element is listed in a spoken text only audience's default order, the referenced access element must contain spoken text and text-to-speech pronunciation data.

Elements tagged or created for a different need (for example, keyword translation or cognitive guidance) should NOT be listed in an inclusion order. For example, if you created a spoken access feature for a cognitive guidance text string, the element supplying the spoken access feature would not be listed in ANY spoken inclusion order. The vendor supplying the delivery system will decide how and in what order that supplementary access feature would be accessed by the test takers requiring spoken access features. The access element that supplied the spoken support for the guidance support text (accessibility for access features) would NOT be listed in any spoken inclusion order because not all spoken users are meant to receive the cognitive guidance support. You cannot include the spoken access feature for the cognitive guidance text in the same access element, because the information provided in an access element is intended to support the content referenced in the contentLinkInfo container, not any other content within the same access element. See section 2.2.8 for additional information on Accessibility for Access Features using the apipLinkIdentifierRef Attribute.

Within an inclusion order, you refer to the access element you want a test taker to encounter using the elementOrder tag’s identifierRef attribute. Contained within the elementOrder is the explicit order tag, as shown below.

<apip:inclusionOrder>
   <apip:textOnlyDefaultOrder>
      <apip:elementOrder identifierRef="ae001">
         <apip:order>1</apip:order>
      </apip:elementOrder>
      <apip:elementOrder identifierRef="ae002">
         <apip:order>2</apip:order>
      </apip:elementOrder>
      <apip:elementOrder identifierRef="ae003">
         <apip:order>3</apip:order>
      </apip:elementOrder>
   </apip:textOnlyDefaultOrder>
</apip:inclusionOrder>

The identifierRef attribute should contain the name (the unique identifier within an accessElement’s identifier attribute) of an access element within the same XML file. It should never refer to access elements outside of the same document.

The order in which the content is presented should follow these order numbers. As a best practice, order numbers should align with the order in which the elementOrder entries appear in the code, and there should not be gaps in the sequencing of the order numbers. If the entries appear out of order, use the explicit order numbers to present the content. If gaps exist in the order numbering, proceed to the next order number higher than the preceding one. As a best practice, order numbers within a list should be unique. This will be an enforced practice in future versions of APIP.

In many, if not most cases, inclusion orders will be identical between the various audiences. For example, in a multiple choice item that had text as the prompt, and text in the responses, most audiences will be presented with the content in the exact same order, possibly using the same access elements.

Most of the audiences that use inclusion orders have a ‘default’ inclusion order. Only the spoken, graphics only audience does not have a default order. A default order is the order of content intended to be delivered when the item content is presented in total from beginning to end, without requiring a user to reinitiate the presentation (unless the user has requested that the presentation be stopped or paused). It should include all the information necessary to understand the context of the question, what response is expected from the test taker, and any answer options/choices.

Most audiences also have 'on demand' inclusion orders. Audiences expected to support blind or very low vision user types do not include on-demand orders. On-demand content is content that authors consider important to responding to the question, but that might best be presented at the request of the test taker. Examples include identifying labels in a scientific diagram, or specific data within a table of information. This information may be better understood when the test taker takes the time to consider its context, rather than when grasping the main points of the item. The reason blind test takers do NOT have these on-demand orders is that on-demand requests are normally made in the context of the content's location, which is not available to blind users. Information about a diagram's or a table's content would instead be supplied in the description of that content for these audiences.

Regardless of which inclusion order contains content for a given audience, access elements listed in either the default or on-demand orders are expected to be available (individually) to users at their request, and a test taker should be able to request that content as many times as they wish. As a best practice, an access element should be referenced either in the default inclusion order OR the on-demand order for a single user type, but NOT in both. In general, as a best practice, repeating a reference to an access element in any inclusion order should be avoided. A single reference to an access element for any specific audience within their inclusion order(s) may be an enforced practice in future versions of APIP.

The inclusion order serves an additional purpose, namely providing a tabbing order for some test takers who may be navigating through test content using an input device (large buttons, sip & puff, keyboard) that sends tab and enter commands to the computer. Use the inclusion orders to set the tabbing order for these users. The default orders should precede any on-demand orders, but both orders (if applicable) should be part of the tabbing order. As a best practice, delivery systems should also ensure users have the ability to respond to items using tab and enter navigation.

Because spoken and Braille access features are Core (compliance level) access features for APIP, if an item could be used by any of those audiences, it should include inclusion orders for those audiences. If an item is not considered appropriate for an audience, omit the inclusion orders for that audience from the item. For example, if an item requires the user to recognize something visually (like a photograph of a person), you would completely remove the brailleDefaultOrder and nonVisualDefaultOrder tags. Delivery systems should interpret this to mean that the item is not intended for that audience. Omit the inclusion orders for the following audiences if you wish the item to be considered inaccessible to them:

  • Spoken: Text Only
  • Spoken: Text & Graphics
  • Spoken: Non-Visual
  • Braille
  • ASL
  • Signed English

The exception to this rule is the Spoken: Graphics Only user. Many items do not contain graphics, and therefore do not need descriptions. Items without a graphicsOnlyOnDemandOrder should still be provided to this audience.

Inclusion order lists that contain no references to access elements (an ‘empty’ inclusion order) are not allowed. For example, a simple multiple choice question of only text would likely not have any on-demand elements listed, so the textOnlyOnDemandOrder or textGraphicsOnDemandOrder would be omitted from the code.

See the annotated examples in Section 4 for specific examples of best practice coding for inclusion orders.

2.2.10       Using Multiple Access Elements that Refer to the Same Default Content

In some rare cases, you might need to break an access feature into several pieces that together support a single piece of default content. An example might be a lengthy pre-recorded audio file that describes a complex diagram. In that example, assuming the spoken access feature is broken into two audio files referenced in two different access elements that refer to the same default content (the complex diagram), you supply the order of the audio files by listing the access elements, in order, in the inclusion order list for the intended test taker type. The access elements might look like:

 

<apip:accessElement identifier="ae001">
   <apip:contentLinkInfo qtiLinkIdentifierRef="complexDiagram1">
      <apip:objectLink/>
   </apip:contentLinkInfo>
   <apip:relatedElementInfo>
      <apip:spoken>
         <apip:spokenText contentLinkIdentifier="spokentext001">First part of the complex diagram description here.</apip:spokenText>
         <apip:textToSpeechPronunciation contentLinkIdentifier="ttsp001">First part of the complex diagram description here.</apip:textToSpeechPronunciation>
      </apip:spoken>
      <apip:brailleText>
         <apip:brailleTextString contentLinkIdentifier="brailleText001">First part of the complex diagram description here.</apip:brailleTextString>
      </apip:brailleText>
   </apip:relatedElementInfo>
</apip:accessElement>

<apip:accessElement identifier="ae002">
   <apip:contentLinkInfo qtiLinkIdentifierRef="complexDiagram1">
      <apip:objectLink/>
   </apip:contentLinkInfo>
   <apip:relatedElementInfo>
      <apip:spoken>
         <apip:spokenText contentLinkIdentifier="spokentext002">The second part of the complex diagram description goes after.</apip:spokenText>
         <apip:textToSpeechPronunciation contentLinkIdentifier="ttsp002">The second part of the complex diagram description goes after.</apip:textToSpeechPronunciation>
      </apip:spoken>
      <apip:brailleText>
         <apip:brailleTextString contentLinkIdentifier="brailleText002">The second part of the complex diagram description goes after.</apip:brailleTextString>
      </apip:brailleText>
   </apip:relatedElementInfo>
</apip:accessElement>

Note that both access elements reference the same default content through the same qtiLinkIdentifierRef in their contentLinkInfo nodes. The inclusion order might then just refer to those elements.

<apip:inclusionOrder>
   <apip:nonVisualDefaultOrder>
      <apip:elementOrder identifierRef="ae001">
         <apip:order>1</apip:order>
      </apip:elementOrder>
      <apip:elementOrder identifierRef="ae002">
         <apip:order>2</apip:order>
      </apip:elementOrder>
   </apip:nonVisualDefaultOrder>
</apip:inclusionOrder>

2.2.11       Using CSS to Position Labels for Graphics

Many graphics have labels or text areas that overlap, or are displayed alongside, the graphics. To avoid making the text part of the graphic file (pixel-based text), the recommended practice is to use CSS styles to position text elements over or near the graphic. Text elements can then contain id attributes that can be individually supported by access element information. If the text is embedded within the image file, use the method described in Section 2.2.12 below. The advantage of having text separate from the graphic file is that it can be modified without changing the graphic file, or modified using stylesheet changes. The text is also capable of being modified by the test delivery interface: a user may want to alter the text and background colors, and by making the labels text elements, the text can be changed without altering how the graphic is rendered. This is not always possible, though, and the placement of text in some graphics requires a high level of positioning and styling (for example, labeling a river on a map). Below is the graphic used in the example for this section.

Figure 2.4 Graphic with labels that require accessible information

For this example, assume the graphic is only the grid with shading, and the graphic does NOT include the A-B-C-D text as part of the graphic file. The text for the labels will be written in their own tags, and those tags will be positioned using CSS. Below is the default content sample markup within an assessmentItem itemBody node:

<p id="p1">Below is a graphic with the labels placed by CSS.</p>
<div id="div1">
    <img id="graphic1" src="mm1003justGrid.png" width="209" height="160" />
    <span id="labelA">A</span>
    <span id="labelB">B</span>
    <span id="labelC">C</span>
    <span id="labelD">D</span>
</div>

An access element can now refer to the ids of the entire div (div1), the graphic (graphic1), and/or any one of the labels (labelA – labelD).

CSS code that places the labels and graphics, as written in a separate stylesheet file:

#div1 { position:relative; height:170px; }

#labelA {    
     font-size: 16pt;
     font-family:Verdana, Geneva, sans-serif;
     font-style:italic;
     position:absolute;
     left:20px;
     top:140px;
     z-index:2;
}

#labelB {    
     font-size: 16pt;
     font-family:Verdana, Geneva, sans-serif;
     font-style:italic;
     position:absolute;
     left:20px;
     z-index:3;
}

#labelC {
     font-size: 16pt;
     font-family:Verdana, Geneva, sans-serif;
     font-style:italic;
     position:absolute;
     left:255px;
     z-index:4;
}

#labelD {
     font-size: 16pt;
     font-family:Verdana, Geneva, sans-serif;
     font-style:italic;
     position:absolute;
     left:255px;
     top:140px;
     z-index:5;
}

#graphic1 { position:absolute; left:40px; z-index:1; }

 

2.2.12       Referring to a Portion of an Image

Below is the recommended practice for referring to a portion of an image for the purpose of supplying accessibility information to that portion. The basic idea is that within the XHTML default content, a containing <div> element is created. The graphic is referenced in an image tag. Boxes (rectangles) will lay over text within the graphic, listed as <div>s. The CSS style sheet used for the assessmentItem would then describe the size and location of the boxes. The example below also uses the graphic shown in Figure 2.4, except in the example below, the text labels are included as pixel based information in the graphic file.

Code in the assessmentItem itemBody node:

<p>Below is a graphic that has the text embedded in the graphic file:</p>

<div id="div2">
     <img id="graphic2" src="mm1003withLetters.png" width="263" height="169" />
     <div id="areaA"></div>
     <div id="areaB"></div>
     <div id="areaC"></div>
     <div id="areaD"></div>
</div>

CSS code that places the areas around the text within the graphic, as written in a separate stylesheet file:

#div2 { position:relative; height:170px; }

#areaA {
       position:absolute;
       height:24px;
       width:20px;
       left:25px;
       top:140px;
       z-index:2;
 }

#areaB {
       position:absolute;
       height:24px;
       width:20px;
       left:25px;
       z-index:2;
 }

#areaC {
       position:absolute;
       height:24px;
       width:20px;
       left:260px;
       z-index:2;
 }

#areaD {
       position:absolute;
       height:24px;
       width:20px;
       left:260px;
       top:140px;
       z-index:2;
 }

#graphic2 { position:absolute; left:20px; z-index:1; }

Since the label 'areas' are now identifiable elements in the XHTML default content, you can refer to those elements from an access element's contentLinkInfo node.
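
For example, a spoken access element could target one of the overlay areas (the identifiers and wording are illustrative):

<apip:accessElement identifier="ae040">
   <apip:contentLinkInfo qtiLinkIdentifierRef="areaA">
      <apip:objectLink/>
   </apip:contentLinkInfo>
   <apip:relatedElementInfo>
      <apip:spoken>
         <apip:spokenText contentLinkIdentifier="spokentext040">Label A, in the lower left corner of the grid.</apip:spokenText>
         <apip:textToSpeechPronunciation contentLinkIdentifier="ttsp040">Label A, in the lower left corner of the grid.</apip:textToSpeechPronunciation>
      </apip:spoken>
   </apip:relatedElementInfo>
</apip:accessElement>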

2.2.13       Companion Materials

Companion materials are described within the <apip:apipAccessibility> container, and include assessment materials that are required to be available to test takers while answering a specific item. Materials may include interactive tools, like calculators or rulers, or content that is also used in responding to the item, like a table of information or a map. The specific materials tags, and their best practice usage, are described below.

2.2.13.1       CALCULATORS

All calculators are assumed to include all number digits, a decimal key, an equals button, and a Clear button. Spoken (read aloud) capability should be configurable so that it can be allowed or disallowed during testing. Additionally, some programs allow reading the numbers or functions as they are used, but do not allow reading the resulting number as a whole; this is usually for math-related content. The four possible calculators that can be specified are Basic, Standard, Scientific, and Graphing. Descriptions of each are included below.

Basic Calculator: In the <apip:calculatorType> tag, use the Basic vocabulary. The best practice assumed functions: Add, Subtract, Multiply, Divide.

Example usage:

<apip:companionMaterialsInfo>
   <apip:calculator>
      <apip:calculatorType>Basic</apip:calculatorType>
      <apip:description>4 function calculator</apip:description>
   </apip:calculator>
</apip:companionMaterialsInfo>

Standard Calculator: In the <apip:calculatorType> tag, use the Standard vocabulary. The best practice assumed functions: all Basic calculator functions, Square root (√), Percentage (%), Plus/Minus (a.k.a. Sign Change), and Memory Functions.

Scientific Calculator: In the <apip:calculatorType> tag, use the Scientific vocabulary. The functions may include, but are not limited to: ALL Standard calculator functions; a π key; square (x²), cube (x³), and x to the power y (xʸ) keys; cube root and xth root keys; logarithm keys (log, ln, base 10, base e); trigonometry function keys (sin, cos, tan, hsin (hyperbolic sin), hcos (hyperbolic cos), htan (hyperbolic tan)) with an INVERSE key for the inverse functions; DEG/RAD/GRAD conversion, with the capacity to work in both degree and radian mode; a reciprocal key (1/x) to calculate the inverse of the displayed value; permutation and combination keys (nPr, nCr); a factorial key (x!); parentheses keys; and metric conversion.

Graphing Calculator: In the <apip:calculatorType> tag, use the Graphing vocabulary. A Graphing calculator includes many of the same functions of a scientific calculator, plus the ability to display equations graphically.

2.2.13.2       RULE

Allows for the presentation of an on-screen measuring device for use with the supplied content. Use the description tag to provide a human-readable description of the functionality/capability of the rule. Provide the system of measurement using the ruleSystem tag (which allows choosing between the metric (SI) and US measurement systems), then set the minimum length of the rule, the minor increment of the rule, and the major increment of the rule using the unit type (related to the rule measurement system). An example of specifying a rule is shown below.

<apip:companionMaterialsInfo>
   <apip:rule>
      <apip:description>A metric ruler with increments on one side of the rule.</apip:description>
      <apip:ruleSystemSI>
         <apip:minimumLength>10</apip:minimumLength>
         <apip:minorIncrement unit="meter">0.5</apip:minorIncrement>
         <apip:majorIncrement unit="meter">1.0</apip:majorIncrement>
      </apip:ruleSystemSI>
   </apip:rule>
</apip:companionMaterialsInfo>

2.2.13.3       PROTRACTOR

The test taker will be supplied with an on-screen protractor while responding to the item. A human-readable description can be included in the description tag. Provide the measurement system using the increment tag, which lets you provide values for either the metric (radians) or US (degrees) system of angular measurement. An example is shown below.

<apip:companionMaterialsInfo>
   <apip:protractor>
      <apip:description>A floating, transparent protractor that can be moved over the angles in the item.</apip:description>
      <apip:incrementUS>
         <apip:minorIncrement>5.0</apip:minorIncrement>
         <apip:majorIncrement>30.0</apip:majorIncrement>
      </apip:incrementUS>
   </apip:protractor>
</apip:companionMaterialsInfo>

2.2.13.4       READING PASSAGE

This allows a reference to a reading passage that needs to be provided to the test taker while responding to this item. Use the <apip:readingPassage> tag and provide a link to the material using the fileHref tag.

<apip:companionMaterialsInfo>
   <apip:readingPassage>
      <apip:fileHref>someAPIPcontent.zip</apip:fileHref>
   </apip:readingPassage>
</apip:companionMaterialsInfo>

Note that the relationship between the reading passage and the item may conflict between the item's <apip:readingPassage> reference and the test package section manifest. Systems should seek to resolve such conflicts, and/or allow the section manifest to override the item reference.

2.2.13.5       DIGITAL MATERIALS

These are content or reference materials that relate to the item content. Examples could be a map, a table of information, a sheet of math formulas, an interactive periodic table of elements, or even graphic creation tools. Use the <apip:digitalMaterial> tag and provide a link to the material by use of the fileHref tag.

<apip:companionMaterialsInfo>
   <apip:digitalMaterial>
      <apip:fileHref>directory001/someDigitalFile.exe</apip:fileHref>
   </apip:digitalMaterial>
</apip:companionMaterialsInfo>

2.2.13.6       PHYSICAL MATERIALS

These are external materials that the test taker needs to work with, or respond with, when responding to the item. Use the <apip:physicalMaterial> tag, then describe the materials using text. Example:

<apip:companionMaterialsInfo>
   <apip:physicalMaterial>Supply scissors and 2 sheets of 8.5 x 11 inch white paper.</apip:physicalMaterial>
</apip:companionMaterialsInfo>

2.2.14       Use of CSS in a Style Sheet

QTI 2.1 allows for the inclusion of a CSS file reference within the item. APIP uses CSS as the layout specification, and best practice is to limit the use of CSS 2.1 properties to the list below. The full CSS 2.1 specification can be found at http://www.w3.org/TR/CSS2/.

Colors should always be represented as hexadecimal values.
The recommended unit type for fonts is points. Only letter spacing uses ems.
The recommended unit type for all other length measurements is pixels.
[DIR] represents directions around the element (top, right, bottom, left).

  • color (represented as a hexadecimal value only)
  • font-family (not enforceable as interoperable due to copyright law for font usage)
  • font-size (expressed as points -- pts)
  • font-style (normal, italic)
  • font-weight (normal, bold)
  • letter-spacing (normal, or ems as negative or positive, where 0 is normal)
  • line-height (normal, or length as expressed in points -- pts)
  • text-align (left [default], right, center, justify, inherit)
  • text-decoration (none, underline)
  • text-indent (length as expressed in pts, inherit)
  • vertical-align (for inline elements: baseline, sub, super, length as expressed in points – pts, inherit; for table cells: top, middle, bottom)
  • white-space (normal, pre)
  • list-style-image
  • border-color (as expressed as a hexadecimal value)
  • border-style (none, solid, dashed, dotted, double, inherit)
  • border-width (length as expressed in pixels – px)
  • border-[DIR]-color (as expressed as a hexadecimal value)
  • border-[DIR]-style (none, solid, dashed, dotted, double, inherit)
  • border-[DIR]-width (length as expressed in pixels – px)
  • padding (length as expressed in pixels – provided to top, right, bottom, left, or as pairs relating to top & bottom, left & right – no negative values permitted)
  • padding-[DIR] (length as expressed in pts, no negative values permitted)
  • margin (length as expressed in pixels – provided to top, right, bottom, left -- no negative values permitted)
  • margin-[DIR] (length as expressed in pts -- no negative values permitted)
  • background-color (expressed as a hexadecimal value)
  • float (left, right, none, inherit) – used for flowing text around images
  • height (as expressed in pixels – px)
  • width (as expressed in pixels – px)
  • position (static, relative, absolute, inherit)
  • overflow (visible, hidden, scroll)
  • [table] caption-side (top, bottom)

3         Standard QTI Examples

Below is the set of examples used to demonstrate the presentation of standard QTI v2.1 instances of content. These examples do NOT use the APIP accessibility extensions of QTI v2.1. The initial set of standard example instances is:

  1. True/false;
  2. Multiple-choice with single choice.

 

Example code may be found in the APIP examples folder that is distributed with the specification document set, and is not presented within this section.

3.1       Standard True/False AssessmentItem

The standard QTI XML representation for a true/false assessment item is shown in Example 3.1 (file qti_example_3d1), with the corresponding visualization shown in Figure 3.1. The logical structure of the content package and the corresponding “imsmanifest.xml” for the assessmentItem are shown in Figure 3.2 and Example 3.1 respectively. The files for this example are available in qti_example_3d1.zip (see the examples distributed with the specification document set).

Figure 3.1 Visualization of the T/F assessmentItem example.

This is a simple True/False question in which the user selects one of the two available options.

The key features in the example shown in Example 3.1 are:

1. Lines (0008–0010) define the identifier for the correct answer choice;

2. Lines (0007–0022) define the set of variables that are used to support the response processing and the scores assigned for this question;

3. Lines (0023 and 0034) define the question presented to the candidate;

4. Lines (0026–0032) define the presentation of the full true/false question, for which only a single choice may be selected (maxChoices="1" in line 0025);

5. Lines (0035–0047) denote the response processing that is used to assign the correct response score ('1') and to trigger the feedback for the correct answer. If the answer is incorrect, the default value of '0' is assigned automatically.

The code for the corresponding ‘imsmanifest.xml’ when the assessmentItem is packaged is listed in file qti_example_3d1. 
 

The key features of the corresponding manifest shown in Example 3.1 are:

6. Lines (0008–0012) contain the manifest metadata (at present there is only the schema identification information – this shows that this content package is used to contain an APIP Package);

7. Lines (0010–0013) show that the only resource in the content package is an APIP Item (this conforms to a QTI v2.1 and v2.2 assessmentItem);

8. Lines (0015–0028) contain the resource metadata (defined in terms of LOM) that is used to identify the associated APIP assessmentItem, i.e. its GUID, title and human-readable description;

9. Lines (0031–0033) contain the reference to the XML instance file that contains the actual QTI XML (as listed in Code 3.1) and is found in the content package itself.

A schematic representation of the content package for the manifest listed in Example 3.1 is shown in Figure 3.2. This shows that a single resource is defined with the associated XML instance file contained in the package as a whole.

Figure 3.2 Visualization ‘imsmanifest.xml’ for the packaging of the T/F assessmentItem.

 

3.2        Standard Multiple Choice AssessmentItem

The standard QTI XML representation for a multiple choice assessmentItem is shown in Example 3.2 (file qti_example_3d2), with the corresponding visualization shown in Figure 3.3. The files for this example are available in qti_example_3d2.zip (see the examples distributed with the specification document set).

Figure 3.3 Visualization of the choice interaction assessmentItem example.

This is a simple multiple choice (choice interaction) question in which the user selects one of the four available options.

The key features in the example shown in Example 3.2 are:

1. Lines (0009–0011) define the identifier for the correct answer choice;

2. Lines (0013–0022) define the set of variables that are used to support the response processing and the scores assigned for this question;

3. Lines (0023 and 0060) define the question presented to the candidate;

4. Lines (0046–0063) define the presentation of the full multiple choice question, for which only a single choice may be selected (maxChoices="1" in line 0043);

5. Lines (0061–0073) denote the response processing that is used to assign the correct response score ('100'); an incorrect answer receives the default value of '0'.

The code for the corresponding ‘imsmanifest.xml’ when the assessmentItem is packaged is found in the file apip_example3d2.

The key features of the corresponding manifest shown in Example 3.2 are:

6. Lines (0011–0014) contain the manifest metadata (at present there is only the schema identification information – this shows that this content package is used to contain APIP data);

7. Line (0032) shows that the only resource in the content package is an APIP assessmentItem;

8. Lines (0009–0028) contain the resource metadata (defined in terms of LOM) that is used to identify the associated QTI assessmentItem, i.e. its GUID, title and human-readable description;

9. Lines (0032–0034) contain the reference to the XML instance file that contains the actual QTI XML (as listed in Code 3.1) and is found in the content package itself.

A schematic representation of the content package for the manifest listed in Example 3.2 is shown in Figure 3.4.  This shows that a single resource is defined with the associated XML instance file contained in the package as a whole.

Figure 3.4 Visualization ‘imsmanifest.xml’ for the packaging of the choice interaction assessmentItem.

 

4         Annotated APIP Examples

In the annotated examples within this section, only the specific lines of the example code are presented. To see these annotated examples as complete files, refer to the files starting with apip_examples4d#.

Section 4.1 annotates an example item with spoken, Braille, keyword emphasis, keyword translation, and cognitive guidance access features.

Section 4.2 annotates an example item with more complex access features for spoken, Braille, ASL, and language learner guidance (with spoken and Braille access features for the guidance text).

Section 4.3 annotates an APIP Section, where a single reading passage is related to several items.

4.1       Spoken Multiple Choice Alternative Rendering

An example of audio-based alternative rendering of a multiple choice-single response assessmentItem is shown in apip_example_4d1 with the corresponding visualization shown in Figure 4.1 (this is based on the original example shown in Sub-section 3.2). A visualization of the content package for this item is shown in Figure 4.2. More detailed information on Content Packaging is found in Section 6. The files for this example are available in the apip_example_4d1.zip.

Figure 4.1 Visual (Default) representation of an APIP Multiple Choice item

This is a simple Multiple Choice (choice interaction) question in which the user selects one of the four available options. The basic item structure has added five access elements that are coordinated in various inclusion orders. The key features of the XML code are:

1.                   The XHTML code is within the <itembody> tags. It is the expectation that the majority of users would encounter the item using this visual presentation only, with no supplemental accessibility information required;

2.                   While the visual content is presented in a top to bottom, left to right presentation – that may not necessarily be the order we want all users to interact with the content. APIP provides a method of supplying additional information about the basic content, and a method for other types of users to encounter the content;

3.                   The APIP related code for the example item is included in lines 62–304. The accessibility information is listed within the <apip:accessibilityInfo> tags in lines 140–304. The inclusion orders for the different audiences is found within the <apip:inclusionOrder> tags in lines 69–139. Accessibility information and inclusion order are discussed in further detail below;

4.                   This example shows the different ways you can refer to the default content from an accessElement using the contentLinkInfo node. Lines 142–146 show linking to content using a textLink and a fullString reference. Lines 158–160 show linking to content using an objectLink (mathML ). Lines 191–196 link content to the access element using a textLink and a characterStringLink reference.  Lines 271–273 link to content using a textLink and a wordLink references. Any of the linking methods can be used for text, though the characterStringLink and wordLink are to be used when referencing a subset of the text within a text object. Note that with a textLink, you also need to include either the fullString, characterStringLink, or wordLink reference. The object link can be used to refer to an entire text object, like a paragraph or span, as well as any other content object, like an image or interaction object.

5.                   This document does include a reference to keyword emphasis on line 265 inside accessElement “ae015”.  The intent here is to mark certain words that could have emphasis for someone reading the content, independent of the base XHTML code that describes the content. You could then highlight/emphasis some words for some users, and not for others. How that word or those words are emphasized is entirely up to the delivery vendor or their clients, though likely implementations include bolding and/or italicizing the text, or color highlighting the text;

6.                   Keyword Translation is used in this item, with accessElement “ae016” (lines 269–282) referring to the word “expression” in the content. Line 277 provides the essential language tag of “es” (Spanish), to indicate the text string should be provided to users with a profile indicating they should receive keywords translated into Spanish. Element “ae017” provides additional support for users who may also need that translation read out loud (spoken) to them, or who need Braille support for those words. Line 285 uses the apipLinkIdentifier to refer to the text string on line 278;

7.                   This item shows a formula for which the implementing project has decided to provide tactile sheets for blind users. Element “ae002” (lines 157–174) provides tactile information in lines 166–172. Line 170 indicates the text that should be spoken (if the spoken access feature is provided). Lines 167–169 indicate the sound file that might be played to read that text aloud. And finally, line 171 provides the text string that might be provided to Braille users.
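
The following sketch (not part of the specification) illustrates how a delivery engine might resolve the linking methods described in point 4 above. It assumes the contentLinkInfo has already been parsed; the word_index parameter for wordLink is a simplified stand-in for the attributes defined in the APIP information model, and characterStringLink is omitted because its attributes are not detailed in this guide.

import xml.etree.ElementTree as ET

def resolve_text_link(item_body, qti_id, link_type, word_index=None):
    """Return the default content referenced by an accessElement.

    item_body: the parsed <itemBody> element; qti_id: the value of
    qtiLinkIdentifierRef; link_type: "fullString", "wordLink" or "objectLink".
    """
    # Locate the default-content element whose id matches qtiLinkIdentifierRef.
    target = item_body.find(f".//*[@id='{qti_id}']")
    if target is None:
        raise KeyError(f"No default content with id '{qti_id}'")
    full_text = "".join(target.itertext()).strip()
    if link_type == "fullString":      # the entire text of the element
        return full_text
    if link_type == "wordLink":        # a single word within the text (index assumed)
        return full_text.split()[word_index]
    if link_type == "objectLink":      # the element itself (image, MathML, interaction, ...)
        return target
    raise ValueError(f"Unsupported link type: {link_type}")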


Figure 4.2 shows a visualization of how the XML code in the ‘imsmanifest.xml’ is defined for the packaging of the assessmentItem.

Figure 4.2 Visualization ‘imsmanifest.xml’ for the packaging of the assessmentItem.


 

4.2       Spoken, Braille, ASL Supported Multiple Choice Rendering

Another example of alternative rendering of a multiple choice-single response assessmentItem is shown in apip_example_4d2 with the corresponding visualization shown in Figure 4.3.  The files for this example are available in the apip_example_4d2.zip (see the set of examples distributed with the specification documents).

Figure 4.3 Visualization of the APIP question apip_example_4d2.


The example demonstrates how different users have been assigned different orders for the item content. Those users include:

  • Visual only (this is likely most users);
  • Spoken Text Only (PNP userSpokenPreference:textOnly);
  • Spoken Text & Graphics (PNP userSpokenPreference:textGraphics);
  • Spoken Non-Visual (PNP userSpokenPreference:nonVisual);
  • Braille users (PNP braille);
  • American Sign Language users (PNP signingType:ASL);
  • Users who need supplemental (PNP cognitiveGuidance) content;
  • Users who would benefit from supplemental word emphasis (PNP keywordEmphasis).

It should be noted that the above list of users might not be mutually exclusive. While you can only be one of the three listed spoken audiences, you might be a spoken, text only user AND need the supplemental cognitive guidance information (as just one of the many combinations possible).

An APIPv1.0 core instance is required to include an inclusion order for Spoken and Braille users, provided that the content is appropriate for those users. The example item presented adds other optional accessibility information, but does not include ALL possible optional accessibility information. It demonstrates that it is the content creator’s prerogative to include or exclude supplemental accessibility information for their content.

The layout of the XHTML code (given in the apip_example_4d2.xml file) is found within the <itemBody> tags. It is the expectation that the majority of test takers would encounter the item using this visual presentation only, with no supplemental accessibility information required. The default content code is shown below.

 

 

<itemBody id=“theWholeItem”>
        <p id=“p1”><span id=“a”>Ms. Smith’s class contains 24 students. </span>
<span id=“b”>Each student voted for his or her favorite color. </span>
<span id=“c”>The result of the class vote is shown </span>
<span id=“z”>in the table below.</span></p>
        <table id=“table001”>
            <caption id=“d”>Results of the Class Vote</caption>
            <tbody>
                <tr id=“columnheadings”>
                    <th id=“th001”><span id=“e”>Color</span></th>
                    <th id=“th002”><span id=“f”>Number of Students</span></th>
                </tr>
                <tr id=“u”>
                    <td id=“td001”><span id=“g”>Red</span></td>
                    <td id=“td002”><span id=“h”>12</span></td>
                </tr>
                <tr id=“v”>
                    <td id=“td003”><span id=“i”>Blue</span></td>
                    <td id=“td004”><span id=“j”>6</span></td>
                </tr>
                <tr id=“w”>
                    <td id=“td005”><span id=“k”>Green</span></td>
                    <td id=“td006”><span id=“l”>4</span></td>
                </tr>
                <tr id=“x”>
                    <td id=“td007”><span id=“m”>Yellow</span></td>
                    <td id=“td008”><span id=“n”>2</span></td>
                </tr>
            </tbody>
        </table>
        <choiceInteraction responseIdentifier=“RESPONSE” shuffle=“false”
maxChoices=“5”>
            <prompt id=“o”>Indicate which of the following statements are
accurate.</prompt>
            <simpleChoice identifier=“choice1” fixed=“true”>
                <p id=“p”>The majority of students voted for Red.</p>
            </simpleChoice>
            <simpleChoice identifier=“choice2” fixed=“true”>
                <p id=“q”>Twice as many students voted for Red as voted for Blue.</p>
            </simpleChoice>
            <simpleChoice identifier=“choice3” fixed=“true”>
                <p id=“r”>Two percent of students voted for Yellow.</p>
            </simpleChoice>
            <simpleChoice identifier=“choice4” fixed=“true”>
                <p id=“s”>Red received more votes than any other color.</p>
            </simpleChoice>
            <simpleChoice identifier=“choice5” fixed=“true”>
                <p id=“t”>Twenty-five percent of students voted for Green.</p>
            </simpleChoice>
        </choiceInteraction>
    </itemBody>

 

 

While the visual content is presented in a top to bottom, left to right layout, that may not necessarily be the order in which all users interact with the content. APIP provides a method of supplying additional information about the basic content, and a method for other types of users to be presented the content. The APIP related code for the example item is included within the <apip:apipAccessibility> tags. The inclusion orders for the different audiences are found within the <apip:inclusionOrder> tags. The accessibility information is listed within the <apip:accessibilityInfo> tags. Accessibility information and inclusion orders are discussed in further detail below.

As a reminder, any XHTML tag within the default content can have a unique identifying name using the id attribute. You add accessibility information to content using the accessibility element, or <apip:accessElement> tags, which reference the content id names.

Within the accessElement tag, you need to state which piece or pieces of content the accessibility information is associated with. On line 416, the contentLinkInfo says that the qtiLinkIdentifierRef=“a”, which means the accessibility information will be related to the QTI XHTML element whose id is “a”. Lines 417 to 419 state that the information should be associated with the entire length of the text string. 

APIP allows you to assign one or more named elements to an accessibility element. Lines 1079–1084 demonstrate that the accessibility element is related to parts “a” and “b” (all of the 1st sentence in the first paragraph). You can also refer to different subparts in the same object, like referring to 2 non-consecutive words in the same sentence.

Figure 4.4 shows the accessibility elements, and the part of the content with which they are associated. The names shown in the figure refer to the unique identifier of the accessElement, NOT the id of the XHTML element. The accessElement names are overlapped with the content to represent the concept of linking content to accessElement information. Note that some content can have more than one accessibility element referring to the same content, like the table title, which has two accessElements (ae005 and ae022). Different accessibility information is associated with that same part of the content for different audiences. The accessElements that refer to the rows of the table are ae025–ae028, while the individual table cells are ae008–ae015.

Figure 4.4 Accessibility elements referencing the content used by ‘Spoken’ and ‘Braille’ audiences.



4.2.1           Accessibility Information within an Access Element

The accessibility information supplied for the accessibility elements shown in Figure 4.4 is:

1.                   Lines 415–437 describe the accessibility information for accessibility element “ae001”, which is linked to the content identified as “a”, and refers to the entire string of the text within that content piece;

2.                   Line 422 begins the related accessibility content for access element ae001;

3.                   Line 423 states that the accessibility information listed is related to the spoken access feature for access element ae001;

4.                   Line 424 states that there is an audio file associated with the spoken access feature. That audio file is also given a unique (within the document) identifying name, and the mime type must be provided;

5.                   Line 425 supplies the address of the above audio file, including the file name;

6.                   Line 426 states that the above audio file is a human recording. If no voiceType is specified, we would assume that the audio file is generated by a computer (Synthetic). The voiceType tag is useful for a number of reasons. If a testing program requested that both human and computer recordings be provided, the user could indicate in their APIP user profile which recording they prefer, and the voiceType tag would allow the testing application to provide either voice type to the user (a sketch of this selection follows this list).

7.                   Line 428 describes the text as it should be read out loud for access element ae001 (the  <apip:spokenText> information).

8.                   Line 430 has a tag that is used to specify a pronunciation for a text-to-speech engine (an enforced practice). In this case, the text-to-speech engine should use the text supplied in line 418, because it needed specific spellings of the word “Ms.” to get the pronunciation correct, and needed a hyphen between the numbers to get the words read smoothly (this is just a hypothetical case – actual exceptions vary between text-to-speech engines).

9.                   With the audio file and spoken text information, a test delivery application could now supply the spoken content to the Text Only audience either as the recorded human voice or via a text-to-speech application.

10.                Line 433 begins information related to Braille users for access element ae001, and is not necessarily related to the Spoken user experience, though a Braille user may also use the spoken:nonVisual support. Those two different accessibility features are not necessarily expected to work simultaneously with each other, as the rate of reading the Braille is unlikely to match that of listening to the spoken representation.
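
As a simple illustration of the voiceType behavior described in point 6, the sketch below (an assumption-laden example, not normative) decides whether to present the recorded human audio file or to send the spokenText to a text-to-speech engine, based on the test taker’s PNP spokenSourcePreference (see Section 5.1.1). The audio_file dictionary is a simplified stand-in for the parsed audio file information.

def choose_spoken_rendering(audio_file, spoken_text, spoken_source_preference="Synthetic"):
    """Return ("audio", href) or ("tts", text), describing what to present."""
    # Prefer the human recording only when the profile asks for one and one exists.
    if (spoken_source_preference == "Human" and audio_file
            and audio_file.get("voiceType") == "Human"):
        return ("audio", audio_file["href"])
    if spoken_text:
        return ("tts", spoken_text)      # otherwise fall back to text-to-speech
    return (None, None)                  # nothing spoken is available for this element

# e.g. choose_spoken_rendering({"href": "audio/ae001.mp3", "voiceType": "Human"},
#                              "Ms. Smith's class contains twenty four students.", "Human")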

4.2.2           Inclusion Orders

4.2.2.1           Spoken Text Only Orders

Many testing programs allow very specific access to the content of a test question. They will make special business rules about which kinds of text can be read, and how specific types of information should be described. In the case of the example item, the testing program has made decisions about how they want the table of information spoken to the Text Only audience. They indicated that tables should have their titles read by default, during the default reading of the item, but the table content should only be accessed by the test taker on demand.

1.                   Lines 82–117 describe those elements that should be read by default, and are also available on demand.

2.                   Lines 118–149 describe those elements that should only be available on demand.

Taken together, they represent all the spoken elements available to the Spoken Text Only audience. They also represent the combined tabbing order (for users navigating by tabbing), with the default order going before the on-demand order.
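
A minimal sketch of that tabbing rule is shown below; both arguments are simply lists of accessElement identifiers taken from the default and on-demand inclusion orders (the identifiers shown are illustrative).

def combined_tab_order(default_order, on_demand_order):
    """Tabbing order = the default-order elements followed by the on-demand ones."""
    return list(default_order) + list(on_demand_order)

# e.g. combined_tab_order(["ae001", "ae003", "ae005"], ["ae008", "ae009"])
# -> ["ae001", "ae003", "ae005", "ae008", "ae009"]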

4.2.2.2           Spoken Text & Graphics Orders

When preparing information for the Text & Graphics audience (userSpokenPreference:textGraphics), content writers need to consider that the test taker may have difficulty seeing the materials, or may have difficulty making sense of the graphic representation. Sometimes orienting information can also be useful for this audience. For example, you might describe a table as having 3 columns and 5 rows. That information could help the Text & Graphic audience have a sense of the whole table. This description has been done in the example item, so while the Text Only audience uses the accessibility information provided in accessibility element “ae005” (lines 492–507), the Text & Graphics audience uses accessibility element “ae022” (lines 932–949). This element has a fuller explanation of the whole table. Both accessibility elements reference the same piece of text content, namely id=“d” (line 29).

In this example, let’s assume the assessment program also decided that for Text & Graphics users, the table information would be read by rows, restating the column information as it was read. Access element “ae025” describes the entire row in line 988.

We will also assume the assessment program made the choice to read the table as part of the default content. The default order for the Text & Graphics audience is listed in lines 151–207. Note that the main difference between this audience and the Text Only audience is the inclusion of the table in the middle of the default reading order, and that it refers to different accessibility information, namely a fuller description of the table and reading the data by row. The assessment program also wanted to make the individual text available for test takers on demand, so those are listed in lines 208–239 within the textGraphicOnDemand order. Together, the default order and on-demand order represent the full spoken content available to the Text & Graphics audience. Both lists also represent the tabbing order, with the default order preceding the on-demand order.

For the Spoken Text & Graphics audience, you can make new, supplemental spoken information available as well as giving them access to information already specified for Text Only users. That supplemental information can better orient or guide the test taker.

4.2.2.3           Spoken Non-Visual Order

It is assumed for the Spoken Non-Visual audience that they may not be able to see any of the content of the item. They cannot experience the content in the same way that the Visual Only audience does. Careful consideration needs to be taken when providing content for these users, and any random reading of text without context should be avoided. For example, if you had a graphic that had two labels pointing to parts of the graphic, and you included spoken information where the labels are spoken, that information could easily be useless, or at least confusing. For that reason, there is no on demand order for the Non-Visual audience. It is expected that all the information needed to understand the content, or respond to a question, will be supplied by default to the test taker.

In the case of this example item, no new information (additional access elements with access feature information) was created for Non-Visual users. Instead, it reused the accessibility information supplied for the Text Only and Text & Graphics users. The inclusion order for Non-Visual users for this example is found on lines 240–295. The assessment program has made the decision that for the Non-Visual audience, tabular data should be presented AFTER the response options are read, so that data is presented after they know the context for which the data is supplied.  There is no on-demand order for Non-Visual users, so the tabbing order is taken only from the default inclusion order (nonVisualDefaultOrder).
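
The sketch below summarizes how a delivery engine might map the PNP userSpokenPreference value onto the inclusion orders discussed in this section. Only nonVisualDefaultOrder and the textGraphic on-demand order are named in this example item; the other order names used here are assumptions for illustration and should be checked against the APIP binding.

SPOKEN_INCLUSION_ORDERS = {
    # PNP value       (default order,               on-demand order or None)
    "TextOnly":     ("textOnlyDefaultOrder",      "textOnlyOnDemandOrder"),      # assumed names
    "TextGraphics": ("textGraphicsDefaultOrder",  "textGraphicsOnDemandOrder"),  # assumed names
    "NonVisual":    ("nonVisualDefaultOrder",     None),  # no on-demand order (see above)
}

def spoken_orders_for(user_spoken_preference):
    """Return (default_order_name, on_demand_order_name) for a spoken audience."""
    return SPOKEN_INCLUSION_ORDERS[user_spoken_preference]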

4.2.2.4           Braille Order

Braille users will have content provided in the order specified in the <brailleDefaultOrder> tags. For the elements listed in the brailleDefaultOrder, it is a best practice to provide the brailleTextString. That text will be sent to a refreshable Braille display device during the assessment session. This will be an enforced practice in future versions of APIP. You do not need to specify the actual Braille characters (ASCII); rather, you write the letter characters you want the Braille display to convert for the Braille audience. For this reason, all APIP Core certified content that is considered accessible to Braille test takers should include at least the brailleDefaultOrder. Future versions of APIP may include methods for including actual Braille character encoding.

In the example, accessibility element “ae001” is included in the brailleDefaultOrder, so it includes the text string for the Braille device, shown on lines 434–435. The brailleTextString here matches the content string exactly, which is often the case. Note that the Braille string uses the number “24” while the spokenText string on line 428 uses “twenty four”. The number “24” takes up far fewer characters on the refreshable display, and the context in which the numbers are used is not so vague as to be confusing.

Accessibility elements “ae005” and “ae022” demonstrate the difference between an element that is NOT included in the brailleDefaultOrder (ae005) and one that is included (ae022). No Braille text is supplied for ae005, but it is supplied for ae022.

It should be noted that in the majority of cases, the text string supplied will likely match the content string. It is declared within the accessibility element to ensure the information is available to the refreshable Braille display, and to allow for modifications by people knowledgeable of Braille use in an assessment context.
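
A brief sketch of that practice: walking the brailleDefaultOrder and collecting the brailleTextString for each listed access element, ready to send to the refreshable Braille display. Both arguments are simplified stand-ins for the parsed XML.

def braille_stream(braille_default_order, braille_strings):
    """braille_default_order: list of accessElement identifiers, in order.
    braille_strings: {accessElement identifier: brailleTextString}."""
    missing = [ae for ae in braille_default_order if ae not in braille_strings]
    if missing:
        # Best practice (above): every element in the brailleDefaultOrder
        # should supply a brailleTextString.
        raise ValueError(f"No brailleTextString supplied for: {missing}")
    return [braille_strings[ae] for ae in braille_default_order]

# e.g. braille_stream(["ae001", "ae022"],
#                     {"ae001": "Ms. Smith's class contains 24 students.",
#                      "ae022": "Results of the Class Vote"})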

4.2.2.5           American Sign Language (ASL) Orders

While you can often use the same accessibility elements for the ASL audience as for the Spoken, Text Only audience, that isn’t always the case. When English is translated to ASL, some concepts or major elements are moved to different parts of a sentence, or even different parts of the paragraph. Therefore, the chunks of content you are referring to with ASL are often larger than for the Spoken Text Only audience. This example has new elements that were created specifically for the ASL audience. The other elements already existed for other kinds of audiences:

1.                   For accessibility element “ae030” in lines 1078–1096, the only accessibility information is for the ASL user. Note that lines 1079–1084 specify that it links to two named parts of the content for this accessibility information;

2.                   Line 1087 states that the signing information provided is <apip:signFileASL>, intended for test takers with a PNP assignment of signingType: ASL. This is done to ensure that the user is given the sign language they understand. If this item also had video information for Signed English sign language (or some future APIP version’s other kind of sign language), you’d want to be sure the ASL user got only the sign file that is ASL, because there are different signs and grammar in ASL than in Signed English;

3.                   Line 1088 identifies the video element, and assigns the mime type of the file;

4.                   Line 1089 specifies the location of the video file;

5.                   Line 1090 specifies the location within the video file you should begin playing for this piece of information;

6.                   Line 1091 specifies the ending location within the video file. In other words, the start and end cues tell you where within the video file the first two sentences of the question are located. If there had been no starting or ending cue information, you would play the entire video file. If you only specified a start cue, you start playing at that point, then play to the end of the file. With these options, you could have one or more video files associated with the content, and you can pick and choose which parts of the video[s] you want to use (a sketch of this cue logic follows this list);

7.                   Lines 353–378 specify the default order for the ASL audience, and lines 379–410 specify the on demand order. For this example, the assessment program has decided that the table content will not be automatically presented to the ASL audience, rather, they can specifically request to have the table information signed as needed.
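
The start/end cue behavior described in point 6 can be summarized in a small sketch (an illustration only; cue values are assumed to be time offsets into the video file):

def video_segment(duration, start_cue=None, end_cue=None):
    """Return the (start, end) positions to play within the video file."""
    if start_cue is None and end_cue is None:
        return (0, duration)           # no cues: play the entire file
    if end_cue is None:
        return (start_cue, duration)   # start cue only: play from there to the end
    return (start_cue if start_cue is not None else 0, end_cue)

# e.g. video_segment(120.0, start_cue=14.5, end_cue=29.0) -> (14.5, 29.0)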

4.2.3           A User Who Might Need Supplemental Information

If a user profile indicated that the test taker benefited from or required the use of additional information about the content, any part of the content could have supplemental information provided to that test taker. This example item has an accessibility element related to the word “accurate” (line 54) in the content.

Accessibility element “ae029” (lines 1061–1096) has two pieces of related element information: keyWordEmphasis (line 1068) and language learner guidance (lines 1069–1074). The keyWordEmphasis tag means that if a test taker would benefit from certain text having emphasis (bold, italic, underline, etc.), this word (or words) should have that emphasis. It is up to the assessment program (or the application developers) to determine what kind of emphasis should be placed on the text. The language learner tag indicates there is additional information about that word that may assist language learners, provided on line 1072. It is up to the application developer (or the assessment program) to determine the best way to present this information to the appropriate test takers.

But what if the test taker needed Spoken or Braille support for the text provided for the language learner (line 1072)? APIP has a method that allows you to link accessibility information to that first level of accessibility information. Accessibility element “ae032” (lines 1117–1134) provides spoken information for the text provided in element “ae029”. The important difference for this accessibility element is made on line 1118, where the attribute in the contentLinkInfo tag is now “apipLinkIdentifier” instead of “qtiLinkIdentifier”. This means the element is linking to an APIP accessibility element tag, not the default content within the XHTML. For more information, see Section 2.2.8, Accessibility for Access Features using the apipLinkIdentifierRef Attribute.
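
The sketch below contrasts the two kinds of links. It is illustrative only; the attribute spelling apipLinkIdentifierRef is assumed from the description above (the example item shows the qtiLinkIdentifierRef form).

def resolve_link_target(content_link_info, default_content_by_id, access_elements_by_id):
    """content_link_info: dict of the contentLinkInfo attributes.
    default_content_by_id: {XHTML id: element}; access_elements_by_id: {identifier: accessElement}."""
    qti_ref = content_link_info.get("qtiLinkIdentifierRef")
    if qti_ref is not None:
        return ("default-content", default_content_by_id[qti_ref])
    apip_ref = content_link_info.get("apipLinkIdentifierRef")   # assumed attribute spelling
    if apip_ref is not None:
        # Second-level accessibility: e.g. spoken or Braille support for the
        # language learner guidance text supplied by another access element.
        return ("access-element", access_elements_by_id[apip_ref])
    raise ValueError("contentLinkInfo carries neither link attribute")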

4.3       Supporting a RubricBlock

4.3.1           RubricBlock in an Assessment Section

In an assessment, there are situations where there is associated content that applies to multiple items. Examples include directions associated with a section of items, or a reading passage associated with a cluster of items. In an assessmentSection, one or more rubricBlock elements may be used to provide such content. A rubric block’s purpose can be defined using the use attribute, using the vocabulary of scoringguidance, instruction, sharedstimulus.

scoringguidance describes information used in the evaluation of the responses.

instruction describes content to be viewed by test takers related to a test or section of a test.

sharedstimulus describes content meant to be delivered simultaneously with test questions. A reading passage could be provided alongside several questions related to the passage. A diagram or graphic could also be presented with one or more questions associated with the diagram.

As in QTI, the target viewing audience for a rubricBlock is specified with the view attribute.  The value candidate should be used in the rubricBlock’s view attribute for content relevant to test takers.  This is the most common view value when the purpose of the rubricBlock, as indicated by the use attribute, is either instruction (directions) or sharedstimulus (materials to be presented alongside an item, or group of items).

Other view attribute value options include author, proctor, scorer, testConstructor, and tutor.  These view options are more likely to be selected when the rubricBlock is being used in the traditional academic sense, to provide scoringguidance.

A rubric block can also contain APIP extensions contained within the <apip:apipAccessibility> node. Only a single instance of the apipAccessibility node should be provided for each rubric block.  An example of a rubricBlock containing instructions relevant to a set of questions contained within an assessmentSection is shown below.  Note the presence of an <apip:apipAccessibility/> element following the main content of the rubric block.

 

 

<assessmentSection identifier=“AssessmentSection1” visible=“true”
title=“AssessmentSection1 Title”
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p2”
xmlns:apip=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_qtiv1p0”>
    <rubricBlock view=“candidate” use=“instruction”>
        <p>Pay <span id=“content1”>extra</span> attention to portions of text
that have been <em>visually emphasized</em>.  These are key phrases relevant to
the questions.</p>
        <apip:apipAccessibility>
            <apip:accessibilityInfo>
                <apip:accessElement identifier=“ae1”>
                    <apip:contentLinkInfo qtiLinkIdentifierRef=“content1”>
                        <apip:textLink>
                            <apip:fullString/>
                        </apip:textLink>
                    </apip:contentLinkInfo>
                    <apip:relatedElementInfo>
                        <apip:keyWordEmphasis/>
                    </apip:relatedElementInfo>
                </apip:accessElement>
            </apip:accessibilityInfo>
        </apip:apipAccessibility>
    </rubricBlock>
    <assessmentItemRef identifier=“AssessmentItem1” href=“AssessmentItem1.xml”/>
    <assessmentItemRef identifier=“AssessmentItem2” href=“AssessmentItem2.xml”/>
    <assessmentItemRef identifier=“AssessmentItem3” href=“AssessmentItem3.xml”/>
</assessmentSection>
 

 

Note that an assessment section’s use of the apipAccessibility node allows for the full use of all APIP accessibility tags, including inclusion orders.

4.3.2           RubricBlock in an Assessment Item

There might be instances where it is important to include rubricBlock information within a single assessment item.

In the example below, the rubricBlock is included within the assessment item to designate a portion of the content as instructions (directions). This could be useful for assessment programs that want to provide some supports to the directions only, either for their entire assessment program or for particular test takers.

 

 

<itemBody id=“content5”>
    <rubricBlock view=“candidate” use=“instruction”>
      <p id=“content2”>Answer the following question.</p>
    </rubricBlock>
    <choiceInteraction maxChoices=“1” minChoices=“0” id=“content1”
shuffle=“false” responseIdentifier=“RESPONSE”>
      <prompt id=“content6”>Sigmund Freud and Carl Jung both belong to the
psychoanalytic school of psychology.</prompt>
      <simpleChoice id=“content8” fixed=“true” showHide=“show”
identifier=“true”>
        <p id=“content7”>True</p>
      </simpleChoice>
      <simpleChoice id=“content4” fixed=“true” showHide=“show”
identifier=“false”>
        <p id=“content3”>False</p>
      </simpleChoice>
    </choiceInteraction>
  </itemBody>
 

 

 

4.4       Additional APIP Exemplars

Listed below are the features within each exemplar content file accompanying this Best Practices document.

apip_exemplar01.zip – Multiple Choice (choice interaction) item with MathML for basic fractions. Content contains an SVG image which uses CSS for placement of labels for the graphic. Inclusion orders and access element support for Spoken (Text Only, Text & Graphics, Non-Visual, Graphics Only) and Braille. Includes video file for ASL representation.

apip_exemplar02.zip – MC item that has English and Spanish Variants. Each Variant has inclusion orders and access element support for Spoken (Text Only, Text & Graphics, Non-Visual) and Braille.

apip_exemplar03.zip – True/false item that is content targeted for the minimum requirements for an Entry item. The only APIP access feature supported is Spoken: Text & Graphics.

apip_exemplars_section01.zip – Packages a section file with 3 items. The section file apip_exemplarSection01 has APIP features for the content (a passage of text with PNG graphics). All item content provides inclusion orders and access element supports for Spoken (Text Only, Text & Graphics, Non-Visual) and Braille. Items within the section:

·         apip_exemplar04 is an order interaction item.

·         apip_exemplar05 is a choice interaction item.

·         apip_exemplar06 is an extended text item.

 

5         Annotated APIP PNP Examples

5.1       Reading APIP Preferences

5.1.1           User Profile Example 1

An example of an APIP AfA PNP instance is shown in Code 5.1 (the original file containing this code is available in the examples distributed with the specification documents, see file BP_PNP_Code5d1.xml).

Code 5.1 Example of an APIP AfA PNP instance.

 

0001 <?xml version=“1.0” encoding=“UTF-8”?>
0002 <accessForAllUser
0003     xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0”
0004     xmlns:xsi=“http://www.w3.org/2001/XMLSchema-instance”
0005     xsi:schemaLocation=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0
0006     http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpextv1p0_v1p0.xsd
0007     http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0
0008     http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpv2p0_v1p0.xsd”
0009     xmlns:apip=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
0010         <content>
0011             <apip:apipContent>
0012                 <apip:spoken>
0013                     <apip:assignedSupport>true</apip:assignedSupport>
0014                     <apip:activateByDefault>true</apip:activateByDefault>
0015                     <apip:spokenSourcePreference>Human</apip:spokenSourcePreference>
0016                     <apip:readAtStartPreference>true</apip:readAtStartPreference>
0017                     <apip:userSpokenPreference>TextOnly</apip:userSpokenPreference>
0018                 </apip:spoken>
0019                 <apip:languageLearner>
0020                     <apip:assignedSupport>true</apip:assignedSupport>
0021                     <apip:activateByDefault>false</apip:activateByDefault>
0022                 </apip:languageLearner>
0023             </apip:apipContent>
0024         </content>
0025         <control>
0026             <apip:apipControl>
0027                 <apip:additionalTestingTime>
0028                     <apip:assignedSupport>true</apip:assignedSupport>
0029                     <apip:timeMultiplier>unlimited</apip:timeMultiplier>
0030                 </apip:additionalTestingTime>
0031             </apip:apipControl>
0032         </control>
0033 </accessForAllUser>

 

 


Lines 0002–0009 supply the necessary name space declarations.

Lines 0010–0024 supply the APIP Content extension.

The spoken access feature is declared in lines 0012–0018, where the test taker is assigned the spoken support (line 0013) and will have the spoken feature active for them when they begin their assessments (line 0014). This profile includes optional preference information about the spoken source, indicating they prefer a human voice (line 0015). Line 0016 states the test taker likes to have the item read to them (the default inclusion order’s list of access elements) when they first encounter the item. Line 0017 provides the type of spoken audience, which in this example is Text Only.

Lines 0019–0022 also indicate that this test taker should be provided with language learner guidance supports, when available.

Finally, lines 0027–0030 state the test taker should be given additional time for their assessment sessions. For this example, no specific amount is indicated; the timeMultiplier is set to unlimited.
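
For implementers, the following sketch (not part of the specification) shows one way to read the spoken settings from the profile in Code 5.1 using Python’s standard library; the namespace URIs are those declared on lines 0003 and 0009.

import xml.etree.ElementTree as ET

AFA = "{http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0}"
APIP = "{http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0}"

def read_spoken_settings(pnp_path):
    """Return the spoken access feature settings from an AfA PNP file, or None."""
    root = ET.parse(pnp_path).getroot()
    spoken = root.find(f"{AFA}content/{APIP}apipContent/{APIP}spoken")
    if spoken is None:
        return None
    return {
        "assigned":    spoken.findtext(f"{APIP}assignedSupport") == "true",
        "activate":    spoken.findtext(f"{APIP}activateByDefault") == "true",
        "source":      spoken.findtext(f"{APIP}spokenSourcePreference"),  # e.g. "Human"
        "readAtStart": spoken.findtext(f"{APIP}readAtStartPreference") == "true",
        "audience":    spoken.findtext(f"{APIP}userSpokenPreference"),    # e.g. "TextOnly"
    }

# e.g. read_spoken_settings("BP_PNP_Code5d1.xml")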

5.1.2           User Profile Example 2

Another example of an APIP AfA PNP instance is shown in Code 5.2 (the original file containing this code is available in the examples distributed with the specification documents, see file BP_PNP_Code5d2.xml).

Code 5.2 Example of an APIP AfA PNP instance.

 

0001 <?xml version=“1.0” encoding=“UTF-8”?>
0002 <accessForAllUser xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0”
0003     xmlns:xsi=“http://www.w3.org/2001/XMLSchema-instance”
0004     xsi:schemaLocation=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0
0005     http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpextv1p0_v1p0.xsd
0006     http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0
0007     http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpv2p0_v1p0.xsd”
0008     xmlns:apip=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
0009     <control>
0010         <apip:apipControl>
0011             <apip:additionalTestingTime>
0012                 <apip:assignedSupport>true</apip:assignedSupport>
0013                 <apip:timeMultiplier>1.5</apip:timeMultiplier>
0014             </apip:additionalTestingTime>
0015             <apip:breaks>
0016                 <apip:assignedSupport>true</apip:assignedSupport>
0017             </apip:breaks>
0018         </apip:apipControl>
0019     </control>
0020     <display>
0021         <screenEnhancement>
0022             <magnification>3</magnification>
0023             <apip:apipScreenEnhancement>
0024                 <apip:magnification>
0025                     <apip:assignedSupport>true</apip:assignedSupport>
0026                     <apip:activateByDefault>true</apip:activateByDefault>
0027                 </apip:magnification>
0028                 <apip:backgroundColour>
0029                     <apip:assignedSupport>true</apip:assignedSupport>
0030                     <apip:activateByDefault>true</apip:activateByDefault>
0031                     <apip:colour>0000CC</apip:colour>
0032                 </apip:backgroundColour>
0033                 <apip:foregroundColour>
0034                     <apip:assignedSupport>true</apip:assignedSupport>
0035                     <apip:activateByDefault>true</apip:activateByDefault>
0036                     <apip:colour>FFFF33</apip:colour>
0037                 </apip:foregroundColour>
0038             </apip:apipScreenEnhancement>
0039         </screenEnhancement>
0040     </display>
0041 </accessForAllUser>

 

Lines 0011–0014 indicate the test taker requires additional testing time. Line 0013 states the specific amount of time. If an assessment is regularly scheduled for an hour, this test taker should be permitted one and a half hours of time.

Line 0022 describes the amount of magnification the test taker prefers when they start using their magnification tool within the testing interface.

Lines 0028–0037 indicate the test taker should have the ability to change the text (foreground) and background colors for the content (directions, passages, items). For this support, BOTH the foreground and background nodes need to be included. In this example, lines 0031 and 0036 describe the specific colors the test taker prefers. This is an optional parameter for the Text and Background color support. Colors are indicated with hexadecimal notation. This profile states that the test taker wishes to have the colors changed when the test session initiates (the activateByDefault values on lines 0030 and 0035). That is, they want to begin their tests with their preferred text and background choices already showing, without having to explicitly activate the support.
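
As a small illustration of how a delivery interface might act on these preferences, the sketch below builds a CSS style string from the colour and magnification values in Code 5.2. The treatment of the magnification value as a simple scale factor is an assumption for illustration only.

def display_style(foreground_hex, background_hex, magnification=None):
    """Build a CSS style string from APIP PNP display preferences."""
    style = f"color: #{foreground_hex}; background-color: #{background_hex};"
    if magnification:
        style += f" font-size: {int(float(magnification) * 100)}%;"
    return style

# e.g. the profile in Code 5.2: yellow text on a blue background, 3x magnification
# display_style("FFFF33", "0000CC", 3)
# -> "color: #FFFF33; background-color: #0000CC; font-size: 300%;"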

Additional PNP examples can be found in the PNP examples folder of the APIP example files.

5.1.3           Bulk PNP Files

APIP PNP profiles can also be provided in a bulk records file, where more than one test taker’s profile, or more than one assessment assignment, can be supplied. Use of the bulk records file may be more efficient and accurate than individual PNP files, which do not carry a test taker identifier within the XML of the file.

Below is an example of how to provide multiple user profiles within a single file for the convenience of transferring multiple profiles in bulk. The original file containing this code is available in the examples distributed with the specification documents, see file BP_PNP_Code5d3.xml.

 

Code 5.3 Example of a Bulk APIP AfA PNP File

 

<?xml version=“1.0” encoding=“UTF-8”?>
<accessForAllUserRecords
 xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnprecordsv1p0”
    xmlns:afa=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0”
    xmlns:apip=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”
    xmlns:xsi=“http://www.w3.org/2001/XMLSchema-instance”
xsi:schemaLocation=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnprecordsv1p0
http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpv2p0records_v1p0.xsd
    http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0
http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpv2p0_v1p0.xsd
    http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0
http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpextv1p0_v1p0.xsd”>
    <accessForAllUserRecord>
        <personSourcedId sourceSystem=“STATE_REGSYS”>11111111</personSourcedId>
        <afa:accessForAllUser>
            <afa:display>
                <afa:screenEnhancement>
                    <apip:apipScreenEnhancement>
                        <apip:foregroundColour>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>false</apip:activateByDefault>
                            <apip:colour></apip:colour>
                        </apip:foregroundColour>
                        <apip:backgroundColour>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>false</apip:activateByDefault>
                            <apip:colour></apip:colour>
                        </apip:backgroundColour>
                        <apip:magnification>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>true</apip:activateByDefault>
                        </apip:magnification>
                    </apip:apipScreenEnhancement>
                </afa:screenEnhancement>
            </afa:display>
        </afa:accessForAllUser>
    </accessForAllUserRecord>
    <accessForAllUserRecord>
        <personSourcedId sourceSystem=“STATE_REGSYS”>22222222</personSourcedId>
        <afa:accessForAllUser>
            <afa:language>en-US</afa:language>
            <afa:display>
                <apipDisplay
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
                    <masking>
                        <assignedSupport>true</assignedSupport>
                        <activateByDefault>false</activateByDefault>
                        <maskingType>AnswerMask</maskingType>
                    </masking>
                    <auditoryBackground>
                        <assignedSupport>true</assignedSupport>
                        <activateByDefault>false</activateByDefault>
                    </auditoryBackground>
                </apipDisplay>
            </afa:display>
            <afa:control>
                <apipControl
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
                    <breaks>
                        <assignedSupport>false</assignedSupport>
                    </breaks>
                    <additionalTestingTime>
                        <assignedSupport>true</assignedSupport>
                        <timeMultiplier>1.5</timeMultiplier>
                    </additionalTestingTime>
                </apipControl>
            </afa:control>
        </afa:accessForAllUser>
    </accessForAllUserRecord>
</accessForAllUserRecords>
 

 

 

Code 5.3 shows an example of a bulk records file where two student profiles are provided in the same file. There is no actual limit on the number of records that can be supplied within a bulk records file. Individual PNP profiles are separated by including each profile within its own accessForAllUserRecord node.

Within each accessForAllUserRecord node, you MUST include a personSourcedId node, where the personSourcedId is unique to each test taker. An appointmentId could be included to specify the exact test assignment. If the record does not include an appointmentId, the profile is considered generic, or universal, for that test taker, meaning that the profile should always apply for this test taker unless a specific profile is assigned to a specific appointmentId. In the code above, both profiles are assigned to a universal assignment. The example below (Code 5.4) includes examples of using specific appointmentIds for profiles.

Also included in each accessForAllUserRecord node is the accessForAllUser node, which contains the profile data, and has characteristics common to all PNP profiles. In the above example, test taker 11111111 is assigned to Text & Background Colour and Magnification tools. The test taker 22222222 is assigned to Answer Masking, Background Music/Sounds, and additional testing time. Test taker 22222222 is also explicitly NOT assigned to Breaks. False assignments are allowed in PNP profiles. Some systems prefer to store the affirmative and negative assignment of needs and preferences, though it is not a requirement of PNP systems to retain negative assignments.

Below is an example of how to provide information about test taker profiles assigned to specific assessment assignments. The original file containing this code is available in the examples distributed with the specification documents, see file BP_PNP_Code5d4.xml.

Code 5.4 Example of a Bulk APIP AfA PNP File with appointmentIds

 

 

<accessForAllUserRecords
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnprecordsv1p0”
    xmlns:afa=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0”
    xmlns:apip=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”
    xmlns:xsi=“http://www.w3.org/2001/XMLSchema-instance”
   xsi:schemaLocation=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnprecordsv1p0
http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpv2p0records_v1p0.xsd
    http://www.imsglobal.org/xsd/apip/apipv1p0/imsafa_pnpv2p0
http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpv2p0_v1p0.xsd
    http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0
http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_afapnpextv1p0_v1p0.xsd”>
    <accessForAllUserRecord>
        <personSourcedId sourceSystem=“STATE_REGSYS”>100001</personSourcedId>
        <appointmentId>APIP PNP Test 1</appointmentId>
        <afa:accessForAllUser>
            <afa:display>
                <afa:screenEnhancement>
                    <apip:apipScreenEnhancement>
                        <apip:foregroundColour>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>false</apip:activateByDefault>
                            <apip:colour></apip:colour>
                        </apip:foregroundColour>
                        <apip:backgroundColour>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>false</apip:activateByDefault>
                            <apip:colour></apip:colour>
                        </apip:backgroundColour>
                        <apip:magnification>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>true</apip:activateByDefault>
                        </apip:magnification>
                    </apip:apipScreenEnhancement>
                </afa:screenEnhancement>
            </afa:display>
        </afa:accessForAllUser>
    </accessForAllUserRecord>
    <accessForAllUserRecord>
        <personSourcedId sourceSystem=“STATE_REGSYS”>100001</personSourcedId>
        <appointmentId>APIP PNP Test 2</appointmentId>
        <afa:accessForAllUser>
            <afa:display>
                <afa:screenEnhancement>
                    <apip:apipScreenEnhancement>
                        <apip:foregroundColour>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>false</apip:activateByDefault>
                            <apip:colour></apip:colour>
                        </apip:foregroundColour>
                        <apip:backgroundColour>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>false</apip:activateByDefault>
                            <apip:colour></apip:colour>
                        </apip:backgroundColour>
                        <apip:magnification>
                            <apip:assignedSupport>true</apip:assignedSupport>
                            <apip:activateByDefault>true</apip:activateByDefault>
                        </apip:magnification>
                    </apip:apipScreenEnhancement>
                </afa:screenEnhancement>
            </afa:display>
            <afa:control>
                <apipControl
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
                    <additionalTestingTime>
                        <assignedSupport>true</assignedSupport>
                        <timeMultiplier>unlimited</timeMultiplier>
                    </additionalTestingTime>
                </apipControl>
            </afa:control>
        </afa:accessForAllUser>
    </accessForAllUserRecord>
    <accessForAllUserRecord>
        <personSourcedId sourceSystem=“STATE_REGSYS”>200002</personSourcedId>
        <appointmentId>APIP PNP Test 1</appointmentId>
        <appointmentId>APIP PNP Test 2</appointmentId>
        <afa:accessForAllUser>
            <afa:language>en-US</afa:language>
            <afa:display>
                <apipDisplay
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
                    <masking>
                        <assignedSupport>true</assignedSupport>
                        <activateByDefault>false</activateByDefault>
                        <maskingType>AnswerMask</maskingType>
                    </masking>
                </apipDisplay>
            </afa:display>
            <afa:control>
                <apipControl
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
                    <additionalTestingTime>
                        <assignedSupport>true</assignedSupport>
                        <timeMultiplier>unlimited</timeMultiplier>
                    </additionalTestingTime>
                </apipControl>
            </afa:control>
        </afa:accessForAllUser>
    </accessForAllUserRecord>
    <accessForAllUserRecord>
        <personSourcedId sourceSystem=“STATE_REGSYS”>123456</personSourcedId>
        <appointmentId>APIP PNP Test 1</appointmentId>
        <afa:accessForAllUser>
            <afa:language>en-US</afa:language>
            <afa:control>
                <apipControl
xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_pnpv1p0”>
                    <additionalTestingTime>
                        <assignedSupport>true</assignedSupport>
                        <timeMultiplier>unlimited</timeMultiplier>
                    </additionalTestingTime>
                </apipControl>
            </afa:control>
        </afa:accessForAllUser>     
    </accessForAllUserRecord>
</accessForAllUserRecords>
 

 

 

In Code 5.4 (above), there are three test takers referenced in the code: test takers 100001, 200002, and 123456.

Test taker 100001 has two different profiles, where the profile associated with appointmentId “APIP PNP Test 2” has an extra feature, namely additional testing time added to the profile. Because there is a different set of needs for a different assessment context, the test taker has two different profiles. There could be any number of reasons for the differences, including assessment subject matter, assessment program policy, or local assessment conditions, to name a few.

Test taker 200002 has a single profile, but is assigned to multiple appointmentIds. In this case, the test taker’s profile should apply to both assessment assignments.

Test taker 123456 has a single profile associated with a single assessment.

Systems that import and export APIP PNP bulk records files may electively support the appointmentId node, and can also electively support assignments to different assessments for individual test takers. However, if a system does not support the assignment of different assessments for individual test takers, the system should make decisions about how to handle the multiplicity of profiles for individuals during the import process.

A bulk records file should never contain multiple profiles for a test taker for the SAME appointmentId. For the purposes of preventing duplication or overwriting of data, if a record does not contain an appointmentId, the profile is considered a universal profile, and there can only be a single universal profile for an individual test taker within the bulk records file. It is permissible to provide a universal profile as well as other profiles for specific appointmentIds, provided the PNP system handling the bulk records file has a place to store universal and appointment-specific profiles.
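
The selection rule above can be sketched as follows (a simplified illustration; records is a stand-in for the parsed bulk file, with each record reduced to its personSourcedId, its list of appointmentIds, and its profile data):

def profile_for(records, person_id, appointment_id):
    """Return the profile to apply for a test taker and appointment, or None."""
    universal = None
    for record in records:
        if record["personSourcedId"] != person_id:
            continue
        if appointment_id in record["appointmentIds"]:
            return record["profile"]        # an appointment-specific profile wins
        if not record["appointmentIds"]:
            universal = record["profile"]   # remember the universal profile
    return universal                        # fall back to the universal profile, if any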

5.2       Access for All Hazard Tags

The APIP v1.0 XSD includes four tags that can be used to indicate specific hazardous conditions for a test taker. These tags could be used by delivery systems to remove certain stimuli for test takers. These conditions are not part of APIP 1.0 Conformance and Certification.

hazard

motion simulation

flashing

olfactory

sound

5.3       Access for All Support Tool Tags

There is a vocabulary of support tools that specific test takers may require during an assessment session. In the context of an APIP PNP, the specific indication of the need for a tool would override test or item specific Companion Materials. For example, if a PNP indicated a test taker needs the use of a calculator, you would provide a calculator to that test taker for any assessment requiring any mathematical work, regardless of whether that assessment permits the use of calculators for the whole or portions of the assessment. Conversely, if an assessment indicated that calculators were permitted during the assessment for all test takers, the support tool does NOT need to be indicated on all test takers’ user profiles.

The availability of these support tools is subject to the implementation of delivery systems and/or specific regulating rules established by specific contracts. They are not part of the current APIP conformance and certification process.

supportTool

abacus

calculator

dictionary

homophone checker

mind mapping software

note taking

outline tool

peer interaction

spell checker

thesaurus

6         Supporting the Use Cases

6.1       Importing/Exporting APIP Items

Importing and Exporting APIP Items is achieved through the use of a content package. An APIP package is constructed according to the 1EdTech Content Packaging standard, consisting of a zip file containing a manifest, assessmentItem XML files, and the supplementary image and sound files necessary for those assessmentItems. If a package is meant to contain multiple items, then it will also contain assessmentTest or assessmentSection XML files. The package may optionally include XSD schema files for use in validating the package contents.

The manifest is an XML file named “imsmanifest.xml” that serves to identify and catalogue all of the other files in the package. Each file contained within the package must be listed in the manifest, along with information about its purpose. The manifest’s “resources” element should contain a “resource” element for every file. The manifest also includes metadata about the package, contained within the aptly-named “metadata” element.

Below is a minimal manifest describing an item package containing a single assessment item resource file.

 

Code 6.1 Example of an APIP Content Package manifest.

 

<manifest
       identifier=“apipManifestExample”
       xmlns=“http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1”
       xmlns:xsi=“http://www.w3.org/2001/XMLSchema-instance”
       xsi:schemaLocation=“http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1
       http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_imscpv1p2_v1p0.xsd”
       xmlns:lomm=“http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest”>
       <metadata>
              <schema>APIP Package</schema>
              <schemaversion>1.0.0</schemaversion>
              <lomm:lom>
                     <lomm:educational>
                           <lomm:learningResourceType>
                                  <lomm:source>APIPv1.0</lomm:source>
                                  <lomm:value>APIP Package</lomm:value>
                            </lomm:learningResourceType>
                     </lomm:educational>
                     <lomm:general>
                           <lomm:identifier/>
                           <lomm:title/>
                     </lomm:general>
                     <lomm:lifeCycle>
                           <lomm:contribute/>
                           <lomm:version/>
                     </lomm:lifeCycle>
                     <lomm:rights>
                           <lomm:copyrightAndOtherRestrictions/>
                           <lomm:description/>
                     </lomm:rights>
              </lomm:lom>
       </metadata>
       <organizations />
       <resources>
              <resource identifier=“Item_01” type=“imsqti_apipitem_xmlv2p2”>
                     <file href=“items/item01.xml” />
              </resource>
       </resources>
</manifest>
 

 

 

The “file” element within the resource node must include an “href” attribute that gives the URI path to the location of the relevant file in the package.

Note that the purpose of a given resource is identified by the value of the resource element’s “type” attribute.  

Use the imsqti_apipitem_xmlv2p2 resource type attribute value for APIP assessmentItem XML files. XML assessmentItem files contain the core content and accessibility metadata used to display and score assessment items.

 If the package contains a whole test, the resource element of the XML file with the assessmentTest data will have a type attribute value of imsqti_apiptest_xmlv2p2. The assessmentTest provides organization to the included assessmentItems by splitting up the assessmentItems into various assessmentSections and dictating the order of user navigation between the items.

imsqti_apipsection_xmlv2p2 is the type attribute value used to describe an XML file containing an assessmentSection.

The controlfile/apip_xmlv1p0 resource type value is for schema XSD documents used for validation testing.

The associatedcontent/apip_xmlv1p0/learning-application-resource resource type is for all other supplemental resource files such as style sheets, audio and video content, and images.
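By way of illustration only, a resources element that used each of these type values might look like the following sketch. The identifiers and file paths here are invented, and the dependency relationships between the resources (discussed next) are omitted for brevity:

<resources>
       <resource identifier="Test_01" type="imsqti_apiptest_xmlv2p2">
              <file href="tests/test01.xml" />
       </resource>
       <resource identifier="Section_01" type="imsqti_apipsection_xmlv2p2">
              <file href="sections/section01.xml" />
       </resource>
       <resource identifier="Item_01" type="imsqti_apipitem_xmlv2p2">
              <file href="items/item01.xml" />
       </resource>
       <resource identifier="Picture_01"
              type="associatedcontent/apip_xmlv1p0/learning-application-resource">
              <file href="resources/picture01.png" />
       </resource>
       <resource identifier="Schema_01" type="controlfile/apip_xmlv1p0">
              <file href="controlxsds/apipv1p0_imscpv1p2_v1p0.xsd" />
       </resource>
</resources>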

In addition to listing the resource files, the manifest describes the relationships between the resources. For example, if an assessmentItem resource requires an image file for its contents, then this dependency must be incorporated in the manifest by adding a “dependency” element child to the resource element of the assessmentItem. The dependency element must have an identifierref attribute with a value equal to the identifier of the resource element that defines the given image file.

Code 6.2 Example of a manifest demonstrating dependencies.

 

 

 

<manifest
       identifier="apipManifestExample"
       xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1
       http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_imscpv1p2_v1p0.xsd"
       xmlns:lomm="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest">
       <metadata>
              <schema>APIP Package</schema>
              <schemaversion>1.0.0</schemaversion>
              <lomm:lom>
                     <lomm:educational>
                            <lomm:learningResourceType>
                                   <lomm:source>APIPv1.0</lomm:source>
                                   <lomm:value>APIP Package</lomm:value>
                            </lomm:learningResourceType>
                     </lomm:educational>
                     <lomm:general>
                            <lomm:identifier/>
                            <lomm:title/>
                     </lomm:general>
                     <lomm:lifeCycle>
                            <lomm:contribute/>
                            <lomm:version/>
                     </lomm:lifeCycle>
                     <lomm:rights>
                            <lomm:copyrightAndOtherRestrictions/>
                            <lomm:description/>
                     </lomm:rights>
              </lomm:lom>
       </metadata>
       <organizations />
       <resources>
              <resource identifier="Item_01" type="imsqti_apipitem_xmlv2p2">
                     <file href="items/item01.xml" />
                     <dependency identifierref="Picture_01" />
              </resource>
              <resource identifier="Picture_01"
                     type="associatedcontent/apip_xmlv1p0/learning-application-resource">
                     <file href="resources/picture01.png" />
              </resource>
       </resources>
</manifest>
 

 

 

Similarly, if the content of an assessmentSection file references an assessmentItem, the resource node must list the appropriate assessmentItem as a dependency. By that same token, if the content of an assessmentTest file references an assessmentSection file, the resource node of the assessmentTest should list the assessmentSection as a dependency.

In order to prevent ambiguity in the relationships between resources, all identifier attribute values must be unique within the manifest file.

For metadata, note the presence of the <lomm:lom></lomm:lom> element within the previous manifest examples. This element is the container for IEEE 1484.12.1 Learning Object Metadata structures, in which an author may optionally provide detailed information about the title, description, life cycle, usage rights, and so forth for the package. While the presence of the lom element and the minimal substructures demonstrated above is mandated, populating them with data is optional. Similar metadata elements may also be optionally inserted within individual resource elements in order to more thoroughly characterize assessment items and test structures.

More detailed explanation of the purpose and content of the Learning Object Metadata structures may be found in the IEEE standard itself, as well as in the inline documentation for the LOM XML binding schema files included within the APIP schema set.

6.2       Item Package with a Single APIP Item

The corresponding partial content package manifest for the APIP Item discussed in Sub-section 4.1 is shown in Code 6.3.

Code 6.3 Example of the manifest with a single APIP assessmentItem.

 

 

 

<?xml version="1.0" encoding="UTF-8"?>
<manifest
    identifier="apipmanifest0001"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1
    http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_imscpv1p2_v1p0.xsd"
    xmlns:lomm="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest"
    xmlns:lomr="http://ltsc.ieee.org/xsd/apipv1p0/LOM/resource"
    xmlns:qti="http://www.imsglobal.org/xsd/apip/apipv1p0/qtimetadata/imsqti_v2p2">
    <metadata>
       <schema>APIP</schema>
       <schemaversion>1.0.0</schemaversion>
       <lomm:lom>
           <lomm:educational>
              <lomm:learningResourceType>
                 <lomm:source>APIPv1.0</lomm:source>
                 <lomm:value>APIP Package</lomm:value>
              </lomm:learningResourceType>
           </lomm:educational>
           <lomm:general>
              <lomm:identifier/>
              <lomm:title/>
           </lomm:general>
           <lomm:lifeCycle>
              <lomm:contribute/>
              <lomm:version/>
           </lomm:lifeCycle>
           <lomm:rights>
              <lomm:copyrightAndOtherRestrictions/>
              <lomm:description/>
           </lomm:rights>
       </lomm:lom>
    </metadata>
    <organizations/>
    <resources>
       <resource identifier="I_00001_QR"
           type="imsqti_apipitem_xmlv2p2">
           <metadata>
              <lomr:lom>
                 <lomr:educational/>
                 <lomr:general>
                     <lomr:identifier/>
                 </lomr:general>
                 <lomr:lifeCycle>
                     <lomr:version/>
                 </lomr:lifeCycle>
                 <qti:qtiMetadata>
                     <qti:composite>false</qti:composite>
                     <qti:interactionType>choiceInteraction</qti:interactionType>
                     <qti:toolName>Insert Tool Name Here</qti:toolName>
                     <qti:toolVersion>Insert Tool Version Here</qti:toolVersion>
                     <qti:toolVendor>Insert Tool Vendor Here</qti:toolVendor>
                 </qti:qtiMetadata>
              </lomr:lom>
           </metadata>
           <file href="apipitem01/apip_example_item01.xml"/>
           <dependency identifierref="I_00001_R"/>
           <dependency identifierref="I_00002_R"/>
       </resource>
       <resource identifier="I_00001_R"
           type="associatedcontent/apip_xmlv1p0/...">
           <file href="item01resources/file1.mp3"/>
       </resource>
       <resource identifier="I_00002_R"
           type="associatedcontent/apip_xmlv1p0/...">
           <file href="item01resources/tactilefile138765.mp3"/>
       </resource>
       <resource identifier="I_00001_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/apipv1p0_imscpv1p2_v1p0.xsd"/>
       </resource>
       <resource identifier="I_00002_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/apipv1p0_cpextv1p2_v1p0.xsd"/>
       </resource>
       <resource identifier="I_00003_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/apipv1p0_lommanifestv1p0_v1p0.xsd"/>
       </resource>
       <resource identifier="I_00004_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/apipv1p0_lomresourcev1p0_v1p0.xsd"/>
       </resource>
       <resource identifier="I_00005_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/apipv1p0_qtiextv2p2_v1p0.xsd"/>
       </resource>
       <resource identifier="I_00006_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/apipv1p0_qtiitemv2p2_v1p0.xsd"/>
       </resource>
       <resource identifier="I_00007_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/apipv1p0_qtimetadatav2p2_v1p0.xsd"/>
       </resource>
       <resource identifier="I_00008_CF"
           type="controlfile/apip_xmlv1p0">
           <file href="controlxsds/xml.xsd"/>
       </resource>
    </resources>
</manifest>
 

 

The key features in the example shown in Code 6.3 are:

1.       The LOM metadata for the manifest as a whole, carried in the <lomm:lom> element within the manifest-level <metadata> element;

2.       The LOM (lomr) and QTI (qti:qtiMetadata) metadata for the resource assigned to the APIP Item QTI, supplied in the <metadata> element of that resource;

3.       The <resource> element (identifier “I_00001_QR”, type “imsqti_apipitem_xmlv2p2”) assigned to the APIP Assessment Item QTI;

4.       The <file> element that identifies the APIP Item QTI XML instance, together with the two <dependency> elements referencing the asset files (these have their own resource descriptions);

5.       The resource descriptions for the two asset files (I_00001_R and I_00002_R);

6.       The resource descriptions assigned to the XSDs also contained in the package (I_00001_CF through I_00008_CF). It is not a requirement to provide these XSDs in the package, but if they are supplied then resource descriptions must be supplied in the manifest.

6.3       APIP Package Additional Variants

Within an APIP package, variants are specified by declaring the relationships in the manifest XML file. In the ‘resource’ element for a given item, add a ‘variant’ element with an ‘identifierref’ attribute value set to the identifier of the variant item resource. The variant element may contain a metadata node describing in more detail the nature and purpose of the variant relationship, by use of the AfA DRD v2.0 specification. The inclusion of this metadata is recommended but not mandated by APIP version 1.0. For example, if an item resource with an identifier of ‘B’ is the Spanish-language variant of item resource ‘A’, the manifest resources XML should contain the following:


Code 6.4 Example of manifest fragment demonstrating variants.

 

 

 

<resource identifier="A" type="imsqti_apipitem_xmlv2p2">
 <file href="itemA.xml"/>
 <variant identifierref="B" identifier="variantRelationshipAB"
      xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_extensionv1p2">
   <metadata>
     <accessForAllResource
       xmlns="http://www.imsglobal.org/xsd/accessibility/accdrdv2p0/imsaccdrd_v2p0">
       <adaptationStatement>
         <originalAccessMode>textual</originalAccessMode>
         <language>es</language>
       </adaptationStatement>
     </accessForAllResource>
   </metadata>
 </variant>
</resource>
<resource identifier="B" type="imsqti_apipitem_xmlv2p2">
 <file href="itemB.xml"/>
</resource>
 

 

6.4       APIP Package with Sections

APIP assessments are organized in the same manner as QTI 2.1 and QTI 2.2 – by the use of assessmentTest and assessmentSection XML structures, which are described in the standard QTI Information Model.  assessmentSection elements are the atomic structure for organizing assessmentItems into groups: they may contain multiple assessmentItems, references to other assessmentSections, and section-specific content in the form of rubricBlocks.  assessmentTest elements may contain references to one or more assessmentSections, and provide additional navigation-controlling and test-scoring information.  Each assessmentSection element is contained within its own XML file.  assessmentTests and assessmentSections may contain other assessmentSections by use of the assessmentSectionRef element, which uses identifier and href attributes to point to the appropriate resource.
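For example, a parent assessmentSection that pulls in both a child assessmentSection and an assessmentItem might contain references of the following form; the identifiers and file names are illustrative only, and each referenced section or item must also appear as a resource (and as a dependency) in the manifest:

<assessmentSection
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p2"
    identifier="Parent_Section" title="Parent Section" visible="true">
       <assessmentSectionRef identifier="Child_Section" href="childSection.xml"/>
       <assessmentItemRef identifier="Item_01" href="../items/item01.xml"/>
</assessmentSection>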

Here is an example of a package manifest that incorporates an assessmentSection that is comprised of two assessmentItems.

 Code 6.5 Example of manifest with an assessmentSection and assessmentItems.

 

 

 

<manifest
       identifier="apipManifestExample"
       xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1
       http://www.imsglobal.org/profile/apip/apipv1p0/apipv1p0_imscpv1p2_v1p0.xsd"
       xmlns:lomm="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest">
       <metadata>
              <schema>APIP Package</schema>
              <schemaversion>1.0.0</schemaversion>
              <lomm:lom>
                     <lomm:educational>
                            <lomm:learningResourceType>
                                   <lomm:source>APIPv1.0</lomm:source>
                                   <lomm:value>APIP Package</lomm:value>
                            </lomm:learningResourceType>
                     </lomm:educational>
                     <lomm:general>
                            <lomm:identifier/>
                            <lomm:title/>
                     </lomm:general>
                     <lomm:lifeCycle>
                            <lomm:contribute/>
                            <lomm:version/>
                     </lomm:lifeCycle>
                     <lomm:rights>
                            <lomm:copyrightAndOtherRestrictions/>
                            <lomm:description/>
                     </lomm:rights>
              </lomm:lom>
       </metadata>
       <organizations />
       <resources>
              <resource identifier="Section_01" type="imsqti_apipsection_xmlv2p2">
                     <file href="sections/section01.xml" />
                     <dependency identifierref="Item_01" />
                     <dependency identifierref="Item_02" />
              </resource>
              <resource identifier="Item_01" type="imsqti_apipitem_xmlv2p2">
                     <file href="items/item01.xml" />
              </resource>
              <resource identifier="Item_02" type="imsqti_apipitem_xmlv2p2">
                     <file href="items/item02.xml" />
              </resource>
       </resources>
</manifest>
 

 


Furthermore, here is an example of what the contents of that section’s file (found in the “section01.xml” file in the “sections” directory) would look like.

Code 6.6 Example of assessmentSection Referencing assessmentItems.

 

 

 

<assessmentSection
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p2
    ../../controlxsds/apipv1p0_qtisectionv2p2_v1p0.xsd"
    xmlns:apip="http://www.imsglobal.org/xsd/apip/apipv1p0/imsapip_qtiv1p0"
    identifier="Section_01" title="Example Section" visible="true">
       <rubricBlock view="candidate" use="instruction">
              <p>Please answer the questions using these directions...</p>
              <apip:apipAccessibility>
              </apip:apipAccessibility>
       </rubricBlock>
       <assessmentItemRef identifier="Item_01" href="../items/item01.xml"/>
       <assessmentItemRef identifier="Item_02" href="../items/item02.xml"/>
</assessmentSection>
 

 

 

Note the use of “assessmentItemRef” elements to identify which assessmentItems are to be included in a given section.  The identifier attribute values for these elements should match the identifier values of the resource nodes in the manifest file associated with these items.

Also introduced in this sample assessmentSection is the “rubricBlock” element.  “rubricBlock” nodes are used to convey content that is associated with all of the items contained within an assessmentSection.  In this case, the contents provide brief instructions to the student (“candidate”) taking the assessment.  The rubricBlock element may also be used to present reading passage content, or whatever other material is appropriate to associate with an entire section rather than a given individual item.  The “view” attribute of the rubricBlock is set to “candidate,” indicating that the test-taker should be shown this content.  Other possible values correspond to other target audiences, such as “author”, “proctor”, “scorer”, “testConstructor”, and “tutor”.

An “apipAccessibility” element may be found following the content of the rubricBlock.  The “apipAccessibility” element provides accessibility metadata for the rubricBlock’s content in the same manner and with the same structure as the comparable “apipAccessibility” element found within assessmentItems.
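As a sketch of how the empty apipAccessibility element shown in Code 6.6 might be populated, a single spoken alternative for the rubricBlock text could be supplied along the following lines. The identifiers are invented, the nesting of accessElement, contentLinkInfo and relatedElementInfo is assumed to mirror the structure used for assessmentItems, and any required children of contentLinkInfo are omitted here; the APIP QTI binding should be consulted for the normative structure:

<apip:apipAccessibility>
       <apip:accessibilityInfo>
              <apip:accessElement identifier="ae001">
                     <!-- qtiLinkIdentifierRef is assumed to reference the identifier of the rubricBlock content being described -->
                     <apip:contentLinkInfo qtiLinkIdentifierRef="rubricText001"/>
                     <apip:relatedElementInfo>
                            <apip:spoken>
                                   <apip:spokenText>Please answer the questions using these directions.</apip:spokenText>
                            </apip:spoken>
                     </apip:relatedElementInfo>
              </apip:accessElement>
       </apip:accessibilityInfo>
</apip:apipAccessibility>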

The “assessmentSection” and “assessmentTest” structures may contain additional optional data specifying complex ordering, scoring, and presentation rules, though these are beyond the scope of the current discussion.  A detailed explanation of these options may be found in the documentation for QTI version 2.1 and QTI v2.2.

6.5       APIP Package with Test

The assessmentTest XML structure is used to specify an entire testing experience. It is comprised of references to one or more assessmentSection XML files, which in turn reference the assessmentItem XML files that typically represent individual test questions.  Presented below is the code of a manifest file for an APIP test package where the assessmentTest contains references to two assessmentSections, the code contents for which are also shown.

Code 6.7 Manifest of example test package with two sections containing 5 items.

 

 

 

<manifest identifier="PackageManifest"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/imscp_v1p1">
  <metadata>
    <schema>APIP Package</schema>
    <schemaversion>1.0.0</schemaversion>
    <lom xmlns="http://ltsc.ieee.org/xsd/apipv1p0/LOM/manifest">
      <educational>
        <learningResourceType>
          <source>APIPv1.0</source>
          <value>APIP Package</value>
        </learningResourceType>
      </educational>
      <general>
        <identifier/>
        <title/>
      </general>
      <lifeCycle>
        <contribute/>
        <version/>
      </lifeCycle>
      <rights>
        <copyrightAndOtherRestrictions/>
        <description/>
      </rights>
    </lom>
  </metadata>
  <organizations/>
  <resources>
    <resource type="imsqti_apiptest_xmlv2p2" identifier="AssessmentTest1"
        href="AssessmentTest1.xml">
      <file href="AssessmentTest1.xml"/>
      <dependency identifierref="AssessmentSection1"/>
      <dependency identifierref="AssessmentSection2"/>
    </resource>
    <resource type="imsqti_apipsection_xmlv2p2" identifier="AssessmentSection1"
        href="AssessmentSection1.xml">
      <file href="AssessmentSection1.xml"/>
      <dependency identifierref="AssessmentItem1"/>
      <dependency identifierref="AssessmentItem2"/>
    </resource>
    <resource type="imsqti_apipsection_xmlv2p2" identifier="AssessmentSection2"
        href="AssessmentSection2.xml">
      <file href="AssessmentSection2.xml"/>
      <dependency identifierref="AssessmentItem3"/>
      <dependency identifierref="AssessmentItem4"/>
      <dependency identifierref="AssessmentItem5"/>
    </resource>
    <resource type="imsqti_apipitem_xmlv2p2" identifier="AssessmentItem1"
        href="AssessmentItem1.xml">
      <file href="AssessmentItem1.xml"/>
    </resource>
    <resource type="imsqti_apipitem_xmlv2p2" identifier="AssessmentItem2"
        href="AssessmentItem2.xml">
      <file href="AssessmentItem2.xml"/>
    </resource>
    <resource type="imsqti_apipitem_xmlv2p2" identifier="AssessmentItem3"
        href="AssessmentItem3.xml">
      <file href="AssessmentItem3.xml"/>
    </resource>
    <resource type="imsqti_apipitem_xmlv2p2" identifier="AssessmentItem4"
        href="AssessmentItem4.xml">
      <file href="AssessmentItem4.xml"/>
    </resource>
    <resource type="imsqti_apipitem_xmlv2p2" identifier="AssessmentItem5"
        href="AssessmentItem5.xml">
      <file href="AssessmentItem5.xml"/>
    </resource>
  </resources>
</manifest>
 

 

 

Note how each resource node includes only the direct dependencies.  So, the assessmentTest AssessmentTest1 is dependent on the assessmentSections identified as AssessmentSection1 and AssessmentSection2.  AssessmentSection1 is in turn dependent on AssessmentItem1 and AssessmentItem2, while AssessmentSection2 is dependent on AssessmentItem3, AssessmentItem4, and AssessmentItem5.  It is not necessary to include indirect dependencies. So, for example, it is not necessary for AssessmentTest1 to list all of the assessmentItem resources as dependencies.
 

Code 6.8 An assessmentTest example corresponding to “AssessmentTest1.xml”
in the manifest from Code 6.7.

 

 

 

<assessmentTest identifier="AssessmentTest1"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtitest/imsqti_v2p2">
  <testPart navigationMode="nonlinear" submissionMode="individual" identifier="TestPart1">
    <assessmentSectionRef href="AssessmentSection1.xml" identifier="AssessmentSection1"/>
    <assessmentSectionRef href="AssessmentSection2.xml" identifier="AssessmentSection2"/>
  </testPart>
</assessmentTest>
 

 

 

The above assessmentTest XML shows us that the assessmentTest contains a single testPart node, which wraps references to the two assessmentSections using assessmentSectionRef elements. The identifier attribute of an assessmentSectionRef element should match one of the resource identifiers in the manifest pointing to an assessmentSection.  The href attribute should point to the file location of the relevant assessmentSection, relative to the file position of the current assessmentTest file.
 

Code 6.9 The assessmentSection corresponding to “AssessmentSection1.xml” in the manifest from Code 6.7.

 

 

 

<assessmentSection visible="true" identifier="AssessmentSection1"
    title="AssessmentSection1 Title"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p2">
  <rubricBlock view="candidate" use="sharedstimulus">
    <p>Consider the following poem by Alan Lightman:</p>
    <p>In the magnets of computers will be stored
Blend of sunset over wheat fields.
Low thunder of gazelle.
Light, sweet wind on high ground.
Vacuum stillness spreading from
A thick snowfall.

Men will sit in rooms
Upon the smooth, scrubbed earth
Or stand in tunnels on the moon
And instruct themselves in how it was.
Nothing will be lost.
Nothing will be lost.
    </p>
  </rubricBlock>
  <assessmentItemRef href="AssessmentItem1.xml" identifier="AssessmentItem1"/>
  <assessmentItemRef href="AssessmentItem2.xml" identifier="AssessmentItem2"/>
</assessmentSection>
 

 

 

Note that the XML namespace URI of the assessmentSection above, found in the xmlns attribute of the assessmentSection element, is distinct from the namespace URI used for the assessmentTest XML.  AssessmentSection1 contains a reading passage within the rubricBlock element, which indicates its purpose with the use attribute value of “sharedstimulus”.  AssessmentSection1 also includes two assessmentItemRef elements, which point to assessmentItem XML resources, as previously described.  It is assumed that an assessment delivery system will present the contents of a “sharedstimulus” rubricBlock with all the assessmentItems within that same assessmentSection.  The manner of content presentation is left as a design decision for delivery system implementers.

 

The contents of a rubricBlock are only relevant for assessmentItems within the same assessmentSection.  Thus, a reading passage should not be presented for items in the following section, AssessmentSection2, which lacks a rubricBlock.
 

Code 6.10 The assessmentSection corresponding to “AssessmentSection2.xml” in the manifest from Code 6.7.

 

 

 

<assessmentSection visible="true" identifier="AssessmentSection2"
    title="AssessmentSection2 Title"
    xmlns="http://www.imsglobal.org/xsd/apip/apipv1p0/qtisection/imsqti_v2p2">
  <assessmentItemRef href="AssessmentItem3.xml" identifier="AssessmentItem3"/>
  <assessmentItemRef href="AssessmentItem4.xml" identifier="AssessmentItem4"/>
  <assessmentItemRef href="AssessmentItem5.xml" identifier="AssessmentItem5"/>
</assessmentSection>
 

 

 

6.6       Loading APIP Items

The loading of an APIP Item uses the same approach as the importing of an APIP package.

 


Appendix A – Content & User Profile Tagging Map

This reference connects a number of different APIP concepts and specifications. It relates each accessibility need to the corresponding PNP markup tags and content tags (if applicable). Each need is briefly described, and the applicable APIP compliance categories are noted below its description.
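For example, the “Spoken (Read Aloud)” row below lists content tags that, assembled into XML following the hierarchy shown in the table, might look like the following sketch. The values, the mp3 path, and the exact nesting are illustrative assumptions; the tags themselves appear within an item’s apipAccessibility markup as described in the body of this guide:

<apip:spoken>
       <apip:audioFileInfo mimeType="audio/mpeg">
              <apip:fileHref>item01resources/file1.mp3</apip:fileHref>
              <apip:voiceType>human</apip:voiceType>
       </apip:audioFileInfo>
       <apip:spokenText>The text to be read aloud for this element.</apip:spokenText>
</apip:spoken>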

 

A. Accessibility through Alternate Representations

Access Feature

Notes

Content Tags [apip:accessibility]

AfA PNP Mapping [accessForAllUser]

Spoken (Read Aloud)

Text presented to the user is spoken aloud. Graphics (tables/diagrams/pictures) would have alternate text that could be spoken aloud.

directionsOnly is a boolean value (there is no inclusion order for this designation), and it is combined with the user’s other userSpokenPreference value to determine the inclusion order for that user. So a user could be both a directionsOnly user and a textGraphics user.

Compliance Categories A1, A2, A3, A4, A5, A6, A7

Affects the item-writing process.

Unless it would violate the construct being measured, it is expected that read-aloud information would be provided for all content.

If no other inclusion order is included, the default reading order should be taken from the nonVisual user.

Optionally, you can provide audioFileInfo, which refers to a pre-recorded audio file.

 

spoken

     audioFileInfo (optional)

           [attribute: mimeType]

           fileHref

           startTime (opt)

           duration(opt)

           voiceType(opt: synthetic, human)
                def=synthetic

           voiceSpeed (opt, standard, fast, slow)
                def=standard

     spokenText (string)

     textToSpeechPronunciation(string)
     [SSML]

 

InclusionOrder

     (for textOnly user):

     textOnlyDefaultOrder

     textOnlyOnDemandOrder

     (for textGraphics user):

     textGraphicsDefaultOrder

     textGraphicsOnDemandOrder

     (for nonVisual user):

     nonVisualDefaultOrder

     (for graphicsOnly user):

     graphicsOnlyOnDemandOrder

content  → apipContent
     spoken

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

           spokenSourcePreference (Human, Synthetic)
                def=Human

           readAtStartPreference=true/false, def=true

           userSpokenPreference (TextOnly, TextGraphics,
                NonVisual, GraphicsOnly) def=TextOnly

           directionsOnly (opt: true/false) def=false

 

display
     screenReader (opt)

           usage (required, preferred, optionally use,
                prohibited)
           speechRate
           pitch
           volume
           linkIndication (speak link, different voice,
                sound effect, none)

 

Braille Text

Content would have specific text strings to be used in a refreshable Braille display device.

Compliance Categories A8, A9

Affects the item-writing process.

The Item Information would need to state whether or not the item is accessible to nonVisual (blind) users. If it is intended to be used by Braille users, the item MUST include a brailleDefaultOrder.

brailleText

     brailleTextString

 

InclusionOrder

     (for Braille user):

     brailleDefaultOrder

 

display

     braille

           brailleGrade (opt)

           numberOfBrailleDots (opt)

           numberOfBrailleCells (opt)

           brailleDotPressure (opt)

           brailleStatusCell (opt)

           → assignedSupport (true/false) def=true

           → activateByDefault (true/false) def=true

 

Tactile

A tactile representation of the graphic information is made available outside of the computer testing system. The tags should include descriptions of how to locate the specific tactile sheet needed to answer the question.

Compliance Category A10

Affects the item-writing process.

 

tactileFile

     tactileAudioFile (audioFileInfo)

     tactileSpokenText

     tactileBrailleText

display

     tactile

           → assignedSupport (true/false) def=true

           → activateByDefault (true/false) def=true

Sign Language

Animated or live-action video recordings are provided to the user, giving either an American Sign Language translation or a Signed English version of the item.

Compliance Categories A11, A12

Affects the item-writing process.

The Item information specifies whether the content is accessible to either user group  (ASL or Signed English) by using signFileASL or signFileSignedEnglish.

signFileASL OR signFileSignedEnglish

     [attribute: mimeType]

     videoFileInfo

           fileHref

           startCue

           endCue

signFileSignedEnglish

     [attribute: mimeType]

     videoFileInfo

           fileHref

           startCue

           endCue

InclusionOrder

     (for ASL user):

     aslDefaultOrder

     aslOnDemandOrder

     (for Signed English user):

     signedEnglishDefaultOrder

     signedEnglishOnDemandOrder

content → apipContent

     signing

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

           signingType (ASL, SignedEnglish)

Item Translation

A variant of the item is created, and the user is presented with the alternate-language version. The Item information would indicate which specific language it provides.

Compliance Category A13

Affects the item-writing process.

Establishes a different variant within the item package.

 

content → apipContent

     itemTranslationDisplay [language]

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

Keyword Translation

Certain specific words would have translations available to users who need some assistance with difficult or important words in the content. The user profile would specify the language requested, and the content would supply the translations for the program-required languages.

Compliance Category A14

Affects the item-writing process.

 

keyWordTranslation (definitionID)

     textString

     language

content → apipContent

     keyWordTranslations [language]

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

Simplified Language

An entirely different variant of the question is given to the user, using simpler language.

Compliance Category A15

Affects the item-writing process.

Establishes a different variant within the item package.

 

content → apipContent

     simplifiedLanguageMode

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

Alternate Representation

Specifies an alternate way of displaying content to facilitate the user’s access to the content. As an example, a text-based description of a figure displaying a life cycle might be provided, or an animation that represents a series of events described in text might be provided.

Compliance Category A16

Affects the item-writing process.

Only Text is available for v1. Future versions may support Audio, Video, Graphic, and Interactive.

revealAlternativeRepresentation

     textString

content → apipContent

     alternativeRepresentations

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

           alternativeRepresentationType (Text)

 

 

 

B. Accessibility through Adapted Presentations

Access Feature

Notes

Content Tags [apip:accessibility]

AfA PNP Mapping [accessForAllUser]

Magnification

All content is magnified by the amount specified by the user. Optional magnification amount can be sent in the user profile.

Compliance Categories B1, B2

 

 

display

     screenEnhancement

           magnification (magnification amount)

display → apipDisplay

     apipScreenEnhancement

           magnification

                assignedSupport (true/false) def=true

                activateByDefault (true/false) def=true

Reverse Contrast

All colors are reversed in the user interface.

Compliance Category B3

 

 

display → apipDisplay

     apipScreenEnhancement

           invertColourChoice

                assignedSupport (true/false) def=true

                activateByDefault (true/false) def=true

Alternate Text and Background Colors

User has the ability to choose a text and background color combination other than the default presentation. Optional color settings could be sent in the user profile.

Compliance Categories B4, B5

 

 

display → apipDisplay
     apipScreenEnhancement

           foregroundColour

                assignedSupport (true/false) def=true

                activateByDefault (true/false) def=false

                colour (hexadecimal)

display → apipDisplay
     apipScreenEnhancement

           backgroundColour

                assignedSupport (true/false) def=true

                activateByDefault (true/false) def=false

                colour (hexadecimal)

Color Overlay

A color tint is laid over the content (directions and questions) to aid in reading the text. This emulates the classroom use of colored acetate sheets over paper. Optional color settings could be sent in the user profile.

Compliance Categories B6, B7

 

 

display → apipDisplay
     apipScreenEnhancement

           colourOverlay

                 assignedSupport (true/false) def=true

                 activateByDefault (true/false) def=false

                 colour (hexadecimal)

 

 

C. Accessibility through Adapted Interactions

Access Feature

Notes

Content Tags [apip:accessibility]

AfA PNP Mapping [accessForAllUser]

Masking

For Answer Masking, by default, the answer choices for a multiple choice item are covered when the item is first presented. The user has the ability to remove the masks at any time.

For Custom Masking, User is able to create their own masks to cover portions of the question until needed.

Compliance Categories C1, C2

 

 

display → apipDisplay
     masking
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=false

           maskingType (CustomMask, AnswerMask)

Auditory Calming

Users can listen to music or sounds in the background as they take the test.

Compliance Category C3

 

 

display → apipDisplay

     auditoryBackground

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=false

Additional Testing Time

If a test has a time limit, the user will be allowed additional time to complete the test.

Compliance Category C4


 

control → apipControl
     additionalTestingTime

           assignedSupport (true/false) def=true

           timeMultiplier (time/unlimited)

Breaks

User is allowed to take breaks, at their request, during the testing session, and return to their testing session when ready.

Compliance Category C5

 

 

control→ apipControl
     breaks

           assignedSupport (true/false) def=true

Keyword Emphasis

Certain words are designated in the content as keywords for emphasis (beyond the default emphasis that may exist for all users). Programs would designate how they are to be emphasized (bold, italic, colored background, etc.).

Compliance Category C6

Affects the item-writing process.

 

keyWordEmphasis

content→ apipContent

     keywordEmphasis

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

Line Reader

The user has a tool available (a line highlighter or underscore) that can be moved down the content line by line to assist in reading.

Compliance Categories C7, C8

 

 

control → apipControl

     lineReader

           assignedSupport(true/false) def=true

           activateByDefault (true/false) def=false

            colour (hexadecimal)

Language Learner Guidance

Additional information about words or phrases is provided in the test language, intended to assist a Language Learner in processing that information.

Compliance Category C9

Affects the item-writing process.

guidance

     languageLearnerSupport

           supportOrder,
           textString

content → apipContent

     languageLearner

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

Cognitive Guidance

Additional information is provided to some users to assist in processing or understanding all or parts of the content.

Compliance Category C10

Affects the item-writing process.

 

guidance

     cognitiveGuidanceSupport

           supportOrder,
           textString

content → apipContent

     cognitiveGuidance

           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

 

 

 


Appendix B – Experimental Features

The APIP v1.0 XSDs include accessibility features that are not discussed in the body of this Best Practice and Implementation Guide. These features should be considered experimental by the APIP implementing community. Many of them are still being researched to establish their effectiveness in an assessment context, or are so new that no vendor has yet implemented them in any of their systems.

These experimental features are subject to change or removal in future versions of APIP. Implementers attempting to provide features of a similar nature are encouraged to use these specific tags. Modifications to these tags will be considered by the APIP Working Group based on the feedback received from the implementing community and the industry at large.

Any content created with the use of these specific experimental tags will pass the 1EdTech APIP 1.0 validation.

Feature Descriptions

Increased White Space: An APIP PNP can include the desire to increase the use of white space for a test taker. It may also include specific preferences for which white space characteristics should be increased. This also requires that a delivery system be able to present assessment content with the increased white space characteristics.

AfA PNP Mapping for Increased White Space:

display

screenEnhancement  → apipScreenEnhancement
     increasedWhiteSpacing
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

           increasedWhiteSpacingType

                Line (true/false)

                Word (true/false)

                Character (true/false)

Structured Masking: The identification of parts of the content that should be masked off (hidden) with the ability to reveal parts of the content by indicating the order in which the hidden parts should be revealed.

APIP Content Tags for Structured Masking:

structuredMask
     revealOrder

     answerOption

 

Encouraging Prompts: An APIP PNP can include the desire to have encouraging prompts provided to the test taker during an assessment session. It includes the ability to provide text strings and sound files that could be presented to the test taker.

AfA PNP Mapping for Encouraging Prompts:

display  → apipDisplay
     encouragement
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

           soundFileHref

           textMessageString

 

Scaffolding: The inclusion of extra information that may help certain test takers. The tags associated with scaffolding should indicate the order in which they should be presented. The scaffoldBehavior tag provides the content to be presented.

APIP Content Tags for Scaffolding:

scaffold
     revealOrder

     scaffoldBehavior

           audioFileInfo

           spokenText

           textString

AfA PNP Mapping for Scaffolding:

content  → apipContent
     scaffolding
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

 

Chunking: The partitioning of the content to be presented into ‘chunks’ of information; in other words, the content is not presented all at once.

APIP Content Tags for Chunking:

chunk

AfA PNP Mapping for Chunking:

content  → apipContent
     chunking
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

 

Reduced Answer Choices: Allowing the student to remove one or more choices before answering the question. Content creators can choose which answers to remove and in which order. An illustrative sketch follows the tag listings below.

APIP Content Tags for Reduced Answer Choices:

answerReduction

     removeTagGroup

           removeTagGroupOrder

           removeTagIdRef

AfA PNP Mapping for Reduced Answer Choices:

content  → apipContent
     reducedAnswers
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true
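By way of illustration, the Reduced Answer Choices content tags listed above might be assembled as in the following sketch. The nesting simply follows the hierarchy shown in the listing, and the identifier value is invented; it is assumed here that removeTagIdRef points at the identifier of the answer choice to be removed:

<apip:answerReduction>
       <apip:removeTagGroup>
              <apip:removeTagGroupOrder>1</apip:removeTagGroupOrder>
              <apip:removeTagIdRef>ChoiceC</apip:removeTagIdRef>
       </apip:removeTagGroup>
</apip:answerReduction>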

 

Negatives Removed: Remove any items in an assessment where the question is asked with negative logic. For example, “Which of the following choices is NOT a play by William Shakespeare?”

AfA PNP Mapping for NegativesRemoved:

content  → apipContent
     negativesRemoved
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

 

Alternate Representations: There are several other alternate representations that could be presented to test takers.

APIP Content Tags for Alternate Representations:

revealAlternateRepresentation

     audioFileInfo

     executableFileInfo

     flashFileInfo

     graphicFileInfo

     markupFileEmbedded

     markupFileInfo

     textFileInfo

AfA PNP Mapping for Alternate Representations:

content  → apipContent
     alternativeRepresentationType
           assignedSupport (true/false) def=true

           activateByDefault (true/false) def=true

 

Sign Language, Bone Animation File: Allows for the inclusion of a file that contains bone animation instructions to be used by an avatar.

APIP Content Tags for Bone Animation:

boneAnimationVideoFile

 

 

About This Document

Title:                                                      1EdTech Accessible Portable Item Protocol (APIP) Best Practice and Implementation Guide

Editors:                                                 Colin Smythe (1EdTech), Mark McKell (1EdTech) and Thomas Hoffmann (Measured Progress)

Co-chairs:                                            Gary Driscoll (ETS), Thomas Hoffmann (Measured Progress) and Wayne Ostler (Pearson)

Version:                                                1.0

Version Date:                                      31 March 2014

Status:                                                   Final Specification

Summary:                                          The aim of the APIP Project is to use well established e-learning interoperability standards to enable the exchange of accessible assessment content between computer-based assessment systems, tools and applications.  Users of systems, tools and applications that adopt the APIP are able to use their accessible assessment content on a wide range of systems.  This document contains the best practice and implementation guidance for using the specification.

Purpose:                                               This document is made available for adoption by the public community at large.

Document Location:                          http://www.imsglobal.org/apip/

 

List of Contributors

The following individuals contributed to the development of this document:

Rob Abel, 1EdTech (USA)

Justin Marks, NWEA (USA)

Michael Aumock, Pacific Metrics (USA)

Mark McKell, 1EdTech (USA)

Marty Christensen, ACT (USA)

Sue Milne, JISC (UK)

Jason Craft, Pearson (USA)

Wayne Ostler, Pearson (USA)

Gary Driscoll, ETS (USA)

Zack Pierce, Measured Progress (USA)

Eric Hansen, ETS (USA)

Michelle Richard, Pearson (USA)

Regina Hoag, ETS (USA)

Mike Russell, Measured Progress (USA)

Thomas Hoffmann, Measured Progress (USA)

Farhat Siddiqui, ETS (USA)

Wilbert Kraan, JISC (UK)

Colin Smythe, 1EdTech (UK)

Devin Loftis, McGraw-Hill/CTB (USA)

Wyatt VanderStucken, ETS (USA)

 

Revision History

 

Version No.: Candidate Final v1.0
Release Date: 26 March 2012
Comments: The first formal release of the Candidate Final Release version of this document.

Version No.: Final Specification v1.0
Release Date: 31 March 2014
Comments: The Final Specification version of this document.

 

 

 

 

 

1EdTech Consortium, Inc. (“1EdTech”) is publishing the information contained in this document (“Specification”) for purposes of scientific, experimental, and scholarly collaboration only.

1EdTech makes no warranty or representation regarding the accuracy or completeness of the Specification.

This material is provided on an “As Is” and “As Available” basis.

The Specification is at all times subject to change and revision without notice. 

It is your sole responsibility to evaluate the usefulness, accuracy, and completeness of the Specification as it relates to you.

1EdTech would appreciate receiving your comments and suggestions.

Please contact 1EdTech through our website at http://www.imsglobal.org.

Please refer to Document Name: 1EdTech APIP Best Practice and Implementation Guide v1.0 Final Specification

Date: 31 March 2014