

1EdTech AccessForAll Meta-data
Best Practice and Implementation Guide

Version 1.0 Final Specification

Copyright © 2004 1EdTech Consortium, Inc. All Rights Reserved.
The 1EdTech Logo is a trademark of 1EdTech Consortium, Inc.
Document Name: 1EdTech AccessForAll Meta-data Best Practice and Implementation Guide
Revision: 12 July 2004


 
Date Issued: 12 July 2004

IPR and Distribution Notices

Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the specification set forth in this document, and to provide supporting documentation.

1EdTech takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on 1EdTech's procedures with respect to rights in 1EdTech specifications can be found at the 1EdTech Intellectual Property Rights web page: http://www.imsglobal.org/ipr/imsipr_policyFinal.pdf.

Copyright © 2004 1EdTech Consortium. All Rights Reserved.

Permission is granted to all parties to use excerpts from this document as needed in producing requests for proposals.

Use of this specification to develop products or services is governed by the license with 1EdTech found on the 1EdTech website: http://www.imsglobal.org/license.html.

The limited permissions granted above are perpetual and will not be revoked by 1EdTech or its successors or assigns.

THIS SPECIFICATION IS BEING OFFERED WITHOUT ANY WARRANTY WHATSOEVER, AND IN PARTICULAR, ANY WARRANTY OF NONINFRINGEMENT IS EXPRESSLY DISCLAIMED. ANY USE OF THIS SPECIFICATION SHALL BE MADE ENTIRELY AT THE IMPLEMENTER'S OWN RISK, AND NEITHER THE CONSORTIUM, NOR ANY OF ITS MEMBERS OR SUBMITTERS, SHALL HAVE ANY LIABILITY WHATSOEVER TO ANY IMPLEMENTER OR THIRD PARTY FOR ANY DAMAGES OF ANY NATURE WHATSOEVER, DIRECTLY OR INDIRECTLY, ARISING FROM THE USE OF THIS SPECIFICATION.

Table of Contents


1. Introduction
1.1 Nomenclature
1.2 References

2. General

3. Content and Resources

4. Accessibility for LIP (ACCLIP user profiles)

5. Matching Users and Resources

6. System Implementation

7. Multi-linguality

8. Implementation Examples
8.1 TILE Low Vision Example
8.2 TILE Caption Example

About This Document
List of Contributors

Revision History



1. Introduction

For contextual and background information as well as related documents and specifications please see the AccessForAll Meta-data Overview [ACCMD, 04d].

1.1 Nomenclature

 
ACCLIP (user profile) 1EdTech Learner Information Package Accessibility for LIP
ACCMD (resource profile) 1EdTech AccessForAll Meta-data
CP 1EdTech Content Packaging Specification
CSS Cascading Style Sheets
DCMES DCMI Dublin Core Metadata Element Set
DCMI Dublin Core Meta-data Initiative
DRI 1EdTech Digital Repositories Interoperability Specification
EARL W3C Evaluation and Report Language
ICP 1EdTech International Conformance Program
IEEE Institute of Electrical and Electronics Engineers
LOM Learning Object Metadata (usually used in "IEEE LOM")
RDF Resource Description Framework
TILE The Inclusive Learning Exchange
W3C World Wide Web Consortium
XML Extensible Markup Language

1.2 References

 
[ACCMD, 04a] 1EdTech AccessForAll Meta-data Information Model v1.0, A.Jackl, 1EdTech Consortium, Inc., July 2004.
[ACCMD, 04b] 1EdTech AccessForAll Meta-data XML Binding v1.0, A.Jackl, 1EdTech Consortium, Inc., July 2004.
[ACCMD, 04d] 1EdTech AccessForAll Meta-data Overview v1.0, A.Jackl, 1EdTech Consortium, Inc., July 2004.
[ACCGuide, 02] 1EdTech Guidelines for Developing Accessible Learning Applications v1.0, 1EdTech Consortium, Inc., June 2002.
[ACCLIP, 03c] 1EdTech Learner Information Package Accessibility for LIP Best Practice and Implementation Guide v1.0, M.Norton, J.Treviranus, 1EdTech Consortium, Inc., June 2003.
[CP, 03] 1EdTech Content Packaging v1.1.3, C.Smythe, 1EdTech Consortium, Inc., June 2003.
[IEEE LOM] IEEE 1484.12.1-2002 Standard for Learning Object Metadata, http://ltsc.ieee.org
[RFC 2119] IETF RFC 2119 - Key words for use in RFCs to Indicate Requirement Levels
[RFC 2396] IETF RFC 2396 Uniform Resource Identifiers (URI): Generic Syntax
[RFC 3066] IETF RFC 3066 - Tags for the Identification of Languages, http://www.ietf.org/rfc/rfc3066.txt
[UDC] Universal Decimal Classification Scheme, UDC Consortium, http://www.udcc.org/
[ISO 11404] ISO 11404, Language-independent Datatypes, http://www.iso.ch/cate/d19346.html
[W3CWAI] W3C/WAI Web Content Accessibility Guidelines, http://www.w3.org/TR/WAI-WEBCONTENT/

2. General

What is Meta-data?

Meta-data is information about an object, be it physical or digital. It can be thought of as similar to a library catalog record of a book. As with a catalog record, meta-data does not have to be part of a resource, although it should be associated with it, and it does not have to be made at the same time as the resource or even by the resource's author or owner. A good general description of meta-data is available in "Metadata Principles and Practicalities" available at http://www.dlib.org/dlib/april02/weibel/04weibel.html.

What is EARL?

EARL is the Evaluation and Report Language, an RDF (Resource Description Framework) vocabulary developed by the W3C for expressing test results in machine-readable form. The AccessForAll Meta-data specification references EARL statements to describe the display transformability and control flexibility of a primary resource. For more information on EARL please refer to http://www.w3.org/TR/EARL10/.

What is a Binding?

A binding is the mapping between the information model and a machine readable format. Typically, the binding consists of two files: a binding document and the machine readable file. In the case of this specification, the machine readable file is an XML Schema Definition file. The binding document is a human language, normative explanation of the mapping between the information model and the machine readable XML Schema Definition file.
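
For illustration only, the following sketch shows how an implementer might check an instance document against the XML Schema Definition file of the binding. It assumes Python with the lxml library and uses hypothetical file names; it is not part of the specification.

# Minimal validation sketch (assumptions: Python, the lxml library, and
# hypothetical file names for the schema and the instance document).
from lxml import etree

schema = etree.XMLSchema(etree.parse("imsaccmd_v1p0.xsd"))      # hypothetical schema file
instance = etree.parse("resource_metadata.xml")                 # hypothetical instance file

if schema.validate(instance):
    print("Instance conforms to the binding's schema.")
else:
    for error in schema.error_log:
        print(error.message)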

What is an Application Profile?

An application profile is a community specific extension to or restriction of an existing specification. It is expected that application profiles of this specification will be created for Institute of Electrical and Electronics Engineers Learning Object Meta-data (IEEE LOM) and Dublin Core Meta-data Initiative (DCMI).

Can you give me an example of how this all fits together in one system?

One example of a system which implements the AccessForAll specifications (both user and resource profiles) is The Inclusive Learning Exchange (TILE), a learning object repository (http://inclusivelearning.ca). TILE stores learning objects as atomic pieces of content along with their general and accessibility meta-data. Through this approach, TILE is able to respond to the needs and preferences of its users, i.e., it can retrieve, style, supplement, and substitute content by matching the profiles of resources to the profile of a user. When the TILE authoring tool is used to aggregate and publish learning objects, authors are prompted to provide information about the modality of the resources, stating whether or not they contain auditory, visual, textual, or tactile content, as well as any equivalent alternative resources along with their alternative accessibility properties. This information is captured in the resource meta-data profile. Users, on the other hand, are given the option of creating an ACCLIP profile stating their accessibility needs and preferences. Together, this information is used to determine whether or not a requested primary resource should be substituted or supplemented with an equivalent alternative resource, as well as styled (e.g., CSS) or transformed (e.g., image to ALT text), in order to meet the needs and preferences of the user.

I belong to a community of interest. How do I use the AccessForAll profiles for our specific needs?

Both specifications consist of a number of parts, almost all of which are generic, i.e., applicable to all situations in which accessibility is being considered. Communities implementing the specifications nevertheless have the flexibility to make them work in additional, locally specific and useful ways, and the technique of the 'application profile' is used for this. A local community may want to declare a specific way of using the specifications within its context, or add elements that enrich the specifications in that context. Either way, best practice has the community referencing the generic specification documents and adding anything extra through a new schema for the extra elements. Most importantly, local communities should make their application profiles available to others with whom they intend to interoperate.

3. Content and Resources

Do I need to own the content to use this specification?

There is often confusion between content and its meta-data. Simply put, the latter is a description of the former. Meta-data is not required to have been made by the same author as the content or even to be located in the same place. It is possible that a meta-data author will not 'own' or control any of the content, but rather will reference it externally from within other content or repositories that they do control. For example, an author can create an equivalent alternative resource (of a different modality) for an external primary resource that they do not 'own' (e.g., a caption for a video). The author can write meta-data for both pieces of content, creating the necessary equivalence relationships, and a system that has access to this meta-data can then match the most appropriate content to the needs and preferences of a user. In other words, the meta-data approach makes it possible to work without direct ownership of the content.

Should I package AccessForAll meta-data with content?

When possible, AccessForAll meta-data should be included or referenced in any manifest file that describes the packaging of its associated resource. When the resource is a collection of files that make up a composite object, best practice recommends associating each individual, atomic resource with its own meta-data in order to increase the granularity of accessibility properties for composite objects. When packaged, all AccessForAll meta-data for atomic resources should be included.

When content is aggregated into a single composite object, some items may be more accessible than others. How do I describe the aggregate resource?

Content can be considered either atomic or aggregate. An atomic resource is a stand-alone resource with no dependencies on other content. For example, a JPEG image would be considered an atomic resource. An aggregate resource, however, is dependent on other content in that it consists not only of its own content but also embeds other pieces of content within itself via a reference or meta-data. For example, an HTML document referencing one or more JPEG images would be considered an aggregate resource.

The use and behavior of AccessForAll Meta-data for atomic content is straightforward. It is defined in the System Description and Behavior Examples section of the Information Model. The algorithm/flow model defined there is referred to as content matching. For aggregate content, the required system behavior is slightly more complex but it still involves matching. In other words, if the primary resource is an aggregate resource, then the system will have to determine whether or not the primary resource contains atomic content that will not pass the matching test. If so, it will examine the inaccessible atomic resources to determine which resources require equivalents. This means a primary resource must define its modalities as inclusive of those of its content dependencies.
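
As a sketch of the aggregate case, the following Python fragment identifies the atomic components of an aggregate resource whose modalities clash with the user's content preferences and which therefore require equivalents. The data structures are illustrative simplifications, not structures defined by the specification.

# Hypothetical data structures: each component records its modality flags, and the
# user's preferences are reduced to the set of modalities needing alternatives.
def components_needing_equivalents(components, alternatives_required):
    return [component for component in components
            if any(component["modalities"].get(modality)
                   for modality in alternatives_required)]

# An aggregate HTML page embedding an image and an audio clip (illustrative).
page_components = [
    {"id": "lesson.html", "modalities": {"hasText": True}},
    {"id": "figure1.jpg", "modalities": {"hasVisual": True}},
    {"id": "clip1.mp3",   "modalities": {"hasAuditory": True}},
]
print(components_needing_equivalents(page_components, {"hasVisual"}))
# -> [{'id': 'figure1.jpg', 'modalities': {'hasVisual': True}}]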

What should the system do when atomic content inside an aggregate resource is not accessible?

Decisions need to be made by the system when there is atomic content within an aggregate resource and some of that atomic content does not match the needs and preferences of the user. The system must find matching content to render the atomic content accessible (where possible) and re-aggregate the content into an accessible resource. The following scenario should give an adequate explanation.

Scenario: An HTML file contains text and an embedded Flash animation (visual only, no sound). There is also alternative textual content for the animation, defined by accessibility meta-data as an equivalentResource containing alternativesToVisual properties. A user whose profile has a content element with the alternativesToVisual preference set wishes to interact with the aggregate file. The system applies the matching test to the aggregate HTML resource and sees it has a hasVisual property with a value of true. Subsequently it sees the animation has an equivalentResource with an alternativesToVisual property which matches the user's content preferences. At this point the system replaces the animation with the text alternative. The system modifies the aggregate resource by changing its reference to the animation to a reference to the text, i.e., the embedded Flash animation's <object> tag is replaced with a <p> tag containing the alternative textual content.
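
For illustration only, the substitution step of this scenario might look like the following Python fragment. It assumes the BeautifulSoup library and hypothetical content; in practice the alternative text would be drawn from the equivalent resource identified in the meta-data.

from bs4 import BeautifulSoup

# Hypothetical aggregate HTML and the text of its equivalent resource.
html = '<html><body><p>Lesson text.</p><object data="animation.swf"></object></body></html>'
alternative_text = "Textual description of the animation."

soup = BeautifulSoup(html, "html.parser")
animation = soup.find("object")
replacement = soup.new_tag("p")            # the <object> tag is replaced by a <p> tag
replacement.string = alternative_text
animation.replace_with(replacement)
print(soup)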

Is primary content allowed to contain supplementary content within itself? And if so, how is this marked-up?

Yes, primary content is allowed to include supplementary content and, in fact, this is recommended wherever possible. For example, a video in its initial authoring can include text captions. In this case the primary resource (the video) would have an equivalent alternative resource (the text captions) marked as being supplementary to the primary resource. In the meta-data, this would consist of having a primary and an equivalent element in the same meta-data record, with the equivalentResource and primaryResource elements pointing to the same resource. For example, the following is an example ACCMD meta-data section for a primary video with an included supplementary German caption track:

<accessibility xmlns="http://www.imsglobal.org/xsd/accmd">
  <resourceDescription>
    <primary hasAuditory="true" hasTactile="false" hasText="false" hasVisual="true">
      <equivalentResource>uri:self-reference</equivalentResource>
    </primary>
    <equivalent supplementary="true">
      <primaryResource>uri:self-reference</primaryResource>
      <content>
        <alternativesToAuditory xmlns="http://www.imsglobal.org/xsd/acclip">
          <captionType xml:lang="de">
            <verbatim value="true"/>
            <reducedSpeed value="false"/>
            <enhancedCaption value="true"/>
          </captionType>
        </alternativesToAuditory>
      </content>
    </equivalent>
  </resourceDescription>
</accessibility>

4. Accessibility for LIP (ACCLIP user profiles)

Should a system always act on a user's profile? How can we tell if the user really intended to use the settings?

Systems can't tell if a user was exploring the preference setting tool out of curiosity or was creating an important set of preferences. For this reason, any profile that is saved should be acted on, even, for example, if only one preference was selected or if all settings are left with default values. Implementers should ensure that their interface makes it easy to cancel without saving when exploring the preferences. Users also need the facility to save, delete, and modify their profiles.

Can a user have multiple profiles?

Yes, a user can have multiple profiles. For example, a user could have different needs for the morning, evening, when tired or when exposed to different environments. They can create a profile suitable for each of these contexts.

Can an institution set a 'system-wide' or organizational profile, say for branding purposes?

Yes, some implementers use the accessibility user and resource profiles to define organizational profiles for a house style. They include such settings as the organization's preferred fonts and colors or even specific content preferences the organization may endorse. For example, a Quebec governmental institution might create an organizational profile where its 'system-wide' colors are blue and white and its preferred content language is French. The organization must, however, respect the profiles of its individual users, and therefore, there must be a provision for the profiles of the institution and the individual to cascade, with the profile of the individual having the higher priority.
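
A minimal sketch of such a cascade, assuming the profiles have already been reduced to simple name/value pairs (an illustrative simplification, not the ACCLIP structure itself), follows:

# Organizational defaults apply only where the individual has expressed nothing.
organization_profile = {"foregroundColor": "white", "backgroundColor": "blue", "language": "fr"}
user_profile = {"backgroundColor": "black", "fontSize": 18}

def cascade(organization, individual):
    merged = dict(organization)   # start from the organizational defaults
    merged.update(individual)     # the individual's preferences take priority
    return merged

print(cascade(organization_profile, user_profile))
# -> {'foregroundColor': 'white', 'backgroundColor': 'black', 'language': 'fr', 'fontSize': 18}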

What if two people are working together and their profiles clash?

If two people are working together, the system should try to accommodate the profiles of both users by offering multiple modalities of content when possible. When this is not possible, the system should attempt to achieve the maximum accessibility possible. For example, if users have requested different font sizes, the system should use the larger font size. If users have requested different font colors, the system should use the color combination with the better contrast. A use case describing combined profiles is included in the 1EdTech Accessibility for LIP Best Practice and Implementation Guide version 1.0 [ACCLIP, 03c], section 4.5.1, at http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_bestv1p0.html.
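
The following Python sketch illustrates the 'maximum accessibility' rule for two simple cases; the function names and data are illustrative only.

def combine_font_sizes(size_a, size_b):
    # The larger requested font size accommodates both users.
    return max(size_a, size_b)

def combine_caption_languages(languages_a, languages_b):
    # Offer captions in every requested language where the player allows it.
    return sorted(set(languages_a) | set(languages_b))

print(combine_font_sizes(14, 20))                   # -> 20
print(combine_caption_languages(["en"], ["de"]))    # -> ['de', 'en']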

How can a system create Accessibility for LIP user profiles?

Accessibility for LIP user profiles can be created in a variety of ways. The most likely way is through an interactive form ('wizard') that presents a number of questions to the user and, given responses to the questions, generates the profile. This application may be integrated into a content management system or offered as a stand-alone application.
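
As an illustration of how a wizard might serialize its answers, the following Python sketch builds a minimal ACCLIP instance using element names taken from the examples in Section 8; the wizard's answer structure and the context identifier are assumptions.

import xml.etree.ElementTree as ET

ACCLIP = "http://www.imsglobal.org/xsd/acclip"
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"
ET.register_namespace("", ACCLIP)    # serialize ACCLIP as the default namespace

def build_profile(wants_text_alternatives, language, context_id="wizard"):
    root = ET.Element("{%s}accessForAll" % ACCLIP)
    context = ET.SubElement(root, "{%s}context" % ACCLIP, {"identifier": context_id})
    content = ET.SubElement(context, "{%s}content" % ACCLIP)
    if wants_text_alternatives:
        alternatives = ET.SubElement(content, "{%s}alternativesToVisual" % ACCLIP)
        ET.SubElement(alternatives, "{%s}altTextLang" % ACCLIP, {XML_LANG: language})
    return ET.tostring(root, encoding="unicode")

print(build_profile(True, "en"))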

Once a person has a user profile, are they able to change and add to it?

Yes, users should be able to change, expand, replace, or completely remove their user profile as needed. They should also be able to create multiple profiles in order to provide a convenient way to switch between several sets of preferences for different situations - e.g., at home, school, or in a quiet or noisy place. For more information, see the 1EdTech Accessibility for LIP Best Practice and Implementation Guide version 1.0 [ACCLIP, 03c], section 4.4.1, at http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_bestv1p0.html.

5. Matching Users and Resources

If the user has no content ACCLIP preferences, how does a system match resources to the user?

The system should assume the user has no special preferences regarding the type of content to be displayed and, as a result, should always display the primary resource to the user.

If the requested resource has no AccessForAll Meta-data, but the user has content ACCLIP preferences, what should the system do?

The system should warn the user (in a format amenable to the user's preferences) that the content about to be displayed has no accessibility meta-data and, as a result, is potentially not compliant with the user's preferences. The technical format of the content (i.e., the MIME type) could be displayed and the user given the option of whether or not to continue viewing the content.

What is a 'direct' match between the ACCLIP preferences of a user and the AccessForAll Meta-data of a requested resource?

If, when the system matches the resource's content meta-data against the user's content preferences (as defined in the System Description and Behavior Examples section of the Information Model), all of the user's content preferences are matched by the resource's content meta-data, then a direct match was found.

What is a 'partial' match between the ACCLIP preferences of a user and the AccessForAll Meta-data of a requested resource?

If, when the system matches the resource's content meta-data against the user's content preferences (as defined in the System Description and Behavior Examples section of the Information Model), only some of the user's content preferences are matched by the content's meta-data, then a partial match was found.
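
The two definitions above can be summarized in a small sketch; the sets of preference names are an illustrative simplification of the actual element structures.

def classify_match(user_preferences, resource_metadata):
    satisfied = user_preferences & resource_metadata
    if satisfied == user_preferences:
        return "direct"
    return "partial" if satisfied else "none"

preferences = {"alternativesToVisual", "alternativesToAuditory"}
print(classify_match(preferences, {"alternativesToVisual", "alternativesToAuditory"}))  # direct
print(classify_match(preferences, {"alternativesToVisual"}))                            # partial
print(classify_match(preferences, set()))                                               # none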

How should a system handle 'partial' matches between the ACCLIP preferences of a user and the AccessForAll Meta-data of a requested resource?

The system should display the partial matches to the user (in a format amenable to the user's preferences) indicating the degree to which the user's content preferences are satisfied, and the user should be given the choice of which partial match to view, if any. In this case, the usage element of the user's preferences could be used to weigh the importance of the preference and possibly rank the alternatives.
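
A possible ranking sketch follows; the numeric weights attached to the usage values are an assumption chosen for illustration, not values defined by the specifications.

USAGE_WEIGHT = {"required": 2, "preferred": 1}   # assumed weighting of usage values

def score(user_preferences, resource_metadata):
    # user_preferences maps preference name -> usage value; higher score = better match.
    return sum(USAGE_WEIGHT.get(usage, 1)
               for preference, usage in user_preferences.items()
               if preference in resource_metadata)

preferences = {"alternativesToVisual": "required", "alternativesToAuditory": "preferred"}
candidates = {"res-A": {"alternativesToVisual"}, "res-B": {"alternativesToAuditory"}}
ranked = sorted(candidates, key=lambda name: score(preferences, candidates[name]), reverse=True)
print(ranked)   # res-A ranks above res-B because its matched preference is 'required'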

How should a system handle NO matches between the ACCLIP preferences of a user and the AccessForAll Meta-data of a requested resource?

The system should warn the user (in a format amenable to the user's preferences) that neither the requested resource nor its equivalent alternatives are compatible with the user's preferences. The user should be given the option of viewing the content.

How should a system handle 'partial' matches that are very close to 'direct' matches? Should the user have control of this?

The implementing system should define a threshold within which 'partial' matches can be treated as equivalent to 'direct' matches and, as a result, displayed in the same fashion. For example, if the user's caption rate preference is 149 words per minute (WPM), but the AccessForAll meta-data of a supplementary resource states that the resource's caption rate is 150 WPM, then, given that the variation is so small, the implementing system could treat the requested resource as a 'direct' match.
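
A sketch of such a threshold for the caption rate case follows; the five percent tolerance is an implementer's choice used here for illustration, not a value defined by the specification.

def is_near_match(preferred_rate_wpm, resource_rate_wpm, tolerance=0.05):
    return abs(resource_rate_wpm - preferred_rate_wpm) <= tolerance * preferred_rate_wpm

print(is_near_match(149, 150))   # True: close enough to treat as a 'direct' match
print(is_near_match(149, 200))   # False: remains a 'partial' match at best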

How automated should the substitution of equivalent or supplementary resources be?

Best practice recommends that user control over the automation behavior be an implementation feature of the system.

6. System Implementation

In a list of search results, should resources that match a user profile be displayed differently from those that do not?

When a user searches for content, the user's profile should be taken into account when displaying the search results. Once resources matching the search criteria are found, the meta-data for these resources should be examined to determine if the resource matches the preferences in the user's profile. The resources should be ranked according to how well they match each preference and any partial matches should be flagged as such. The usage elements in the user's profile should be used to rank the resources. Users should be given the option of requesting that partial or non-matches be omitted from the search results.
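
The following sketch annotates a result list against the user's profile and optionally omits non-matches; the data structures and labels are illustrative only.

def annotate_results(results, user_preferences, hide_non_matches=False):
    annotated = []
    for resource_id, metadata in results:
        satisfied = user_preferences & metadata
        label = ("match" if satisfied == user_preferences
                 else "partial match" if satisfied else "no match")
        if hide_non_matches and label == "no match":
            continue
        annotated.append((resource_id, label))
    return annotated

preferences = {"alternativesToAuditory"}
results = [("video-1", {"alternativesToAuditory"}), ("video-2", set())]
print(annotate_results(results, preferences))                          # second result flagged
print(annotate_results(results, preferences, hide_non_matches=True))   # non-match omitted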

How do systems accommodate both global interoperability issues and locally specific issues?

Typically implementations of meta-data occur in situations where locally specific needs require some information or behaviors that are not necessarily of interest to other communities. The main aim of the specification is to ensure the interoperability of meta-data between systems. Local specifications should be added to the main specification using application profiles, as indicated above.

How do I extend the resource profile meta-data?

This is described in the AccessForAll Meta-data XML Binding [ACCMD, 04b] document.

What is the relationship model between primary and equivalent resources?

In short, the model is one of two-way pointers between primary and equivalent resources. A primary resource is allowed to point to zero or more equivalent resources. An equivalent resource is allowed to point to a single primary resource only. In this way, circular references are avoided and the relationship model is greatly simplified with no loss of functionality.

Primary to equivalent relationship model
Figure 6.1 Primary to equivalent relationship model.

Text description of Figure 6.1: A diagram displaying the relationship between a primary resource and its associate equivalent resources. One primary resource, four equivalent resources with a two-way pointer between the primary resource and each of its four equivalent resources.
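
The relationship model can be sketched as simple data structures, for illustration only: a primary resource keeps a list of pointers to its equivalents, while each equivalent points back to exactly one primary.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EquivalentResource:
    identifier: str
    primary: Optional["PrimaryResource"] = None        # points to exactly one primary

@dataclass
class PrimaryResource:
    identifier: str
    equivalents: List[EquivalentResource] = field(default_factory=list)  # zero or more

    def add_equivalent(self, equivalent: EquivalentResource):
        equivalent.primary = self
        self.equivalents.append(equivalent)

video = PrimaryResource("urn:example:video")
captions = EquivalentResource("urn:example:captions")
video.add_equivalent(captions)
print(captions.primary.identifier)   # -> urn:example:video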

7. Multi-linguality

Does the AccessForAll Meta-data specification work in a multi-lingual context?

Alternate language versions of content are considered equivalent alternatives and are covered in the specifications.

What if my context is not English-speaking?

Current best practice is for implementers to use English for the element and attribute names and values of both profiles; in this context, the terms should be regarded as linguistically neutral tokens. Element and attribute names should not be changed, as doing so will result in a lack of interoperability when computers cannot match element names. All accompanying documentation should be provided in the language of choice, and this includes offering users profile creation tools that present the options to them in their language of choice.

8. Implementation Examples

8.1 TILE Low Vision Example

The following example is from The Inclusive Learning Exchange (TILE) system developed by the Adaptive Technology Resource Centre, University of Toronto.

A learner is studying a course on Globalization and International Migration containing an illustration of the concepts of restricted migration. A user without an ACCLIP profile or with an ACCLIP profile, but without expressed needs or preferences concerning visual content, would receive the original primary image as displayed below in Figure 8.1:

TILE screenshot of resource with text and Flash animation
Figure 8.1 TILE screenshot of resource with text and Flash animation.

Another user who has a visual impairment and uses a screen reader may require text instead of images. To accommodate this user, it would be necessary for the primary image to be replaced by an equivalent resource containing alternative to visual characteristics.

To achieve this, first, the primary image would require the following accessibility meta-data to communicate its modality attributes and express an equivalent relationship with an alternative resource:

<accessibility xmlns="http://www.imsglobal.org/xsd/accmd">
  <resourceDescription>
    <primary hasText="true" hasVisual="true" hasAudio="false"/>
      <equivalentResource>
        urn:uuid:55b210d0-922f-11d8-a73a-0002b3af6db8
      </equivalentResource>
    </primary>
  </resourceDescription>
</accessibility>

Conversely, the equivalent resource would need to have the following accessibility meta-data to communicate its alternative-to-visual properties and a primary relationship with the original resource:

<accessibility xmlns="http://www.imsglobal.org/xsd/accmd">
  <resourceDescription>
    <equivalent supplementary="false">
      <primaryResource>
        urn:uuid:2b449e70-424a-11d8-a524-0002b3af6db8
      </primaryResource>
      <content>
        <alternativesToVisual xmlns="http://www.imsglobal.org/xsd/acclip">
          <longDescriptionLang xml:lang="en"/>
        </alternativesToVisual>
      </content>
    </equivalent>
  </resourceDescription>
</accessibility>

The above meta-data describes a resource which contains an English language text description of the original image. In other words, this text file is meant to be used as an alternative to the original image.

The final requirement is for the user to have an ACCLIP profile stating his/her needs or preferences relating to his/her vision requirements. The user edits an ACCLIP profile using a preference wizard as displayed below in Figure 8.2:

TILE screenshot of Alternatives to Visual preference editing
Figure 8.2 TILE screenshot of Alternatives to Visual preference editing.

The user specifies a requirement for text alternatives to visual elements. The user's ACCLIP profile could be the following XML instance document:

<?xml version="1.0"?>
<accessForAll xmlns="http://www.imsglobal.org/xsd/acclip"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.imsglobal.org/xsd/acclip
                                  AccessForAllv1p0d27.xsd">
  <context identifier="TILE" xml:lang="en">
    <content>
      <alternativesToVisual>
        <altTextLang xml:lang="en"/>
        <longDescriptionLang xml:lang="en"/>
        <colorAvoidance>
          <avoidBlueYellow value="false"/>
          <avoidGreenYellow value="false"/>
          <avoidRed value="false"/>
          <avoidRedGreen value="false"/>
          <useMaximumContrastMonochrome value="false"/>
        </colorAvoidance>
      </alternativesToVisual>
    </content>
  </context>
</accessForAll>

When the above user requests to view the course on Globalization and International Migration containing the image, the system recognizes that the user requires an alternative to the visual modality. It checks the image's equivalent resources and discovers that an equivalent exists with alternative to visual characteristics which match the requirements of the user. The system then displays the page with alternative text substituted for the image, as displayed below in Figure 8.3:

TILE screenshot of resource with Flash animation substituted with text equivalent
Figure 8.3 TILE screenshot of resource with Flash animation substituted with text equivalent.

The logic executed is as described in Figure 3.2 in the ACCMD Information Model [ACCMD, 04a] document (http://www.imsglobal.org/accessibility/accmdv1p0/imsaccmd_infov1p0.html), a segment of which is displayed in the following:

 
Step | Element to examine | Test
if primary has visual | resourceDescription.primary.hasVisual | true
and learner requests text alts to visual | accessForAll.alternativesToVisual | exists
for each equivalent referenced | resourceDescription.primary.equivalentResource | (list)
examine equivalent's meta-data | |
if equivalent contains a long description | resourceDescription.equivalent.content.alternativesToVisual.longDescriptionLang | exists
display equivalent | |
if none of the LOs have long descriptions | resourceDescription.equivalent.content.alternativesToVisual.longDescriptionLang | not exist
query user to display original | |
....... | ....... | .......
 

8.2 TILE Caption Example

The following example is from The Inclusive Learning Exchange (TILE) system developed by the Adaptive Technology Resource Centre, University of Toronto.

A learner is studying a course on Globalization and International Migration containing a video of a lecture by Professor Stephen Castles. Like most videos, it contains visual and audio information. The media type of the video could be QuickTime, RealMedia, or one of many other formats. A user without an ACCLIP profile, or with an ACCLIP profile but without expressed needs or preferences concerning audio or visual content, would receive the original primary video as displayed below in Figure 8.4:


TILE screenshot of video with no captions
Figure 8.4 TILE screenshot of video with no captions.

Another user who has a hearing problem and difficulty understanding English may require a reduced reading level and enhanced captions. In this case it would be necessary for the primary video to be supplemented by an equivalent resource which contains alternative to auditory characteristics.

To achieve this, first, the primary video would need to have the following accessibility meta-data which communicates its modality attributes and expresses an equivalent relationship with an alternative resource:

<accessibility xmlns="http://www.imsglobal.org/xsd/accmd">
  <resourceDescription>
    <primary hasText="false" hasVisual="true" hasAudio="true"/>
      <equivalentResource>
        urn:uuid:56b220d0-422f-11d8-a71a-0002b3af6db8
      </equivalentResource>
    </primary>
  </resourceDescription>
</accessibility>

Conversely, the equivalent resource would need to have the following accessibility meta-data which communicates its alternative-to-audio properties and a primary relationship with the original resource:

<accessibility xmlns="http://www.imsglobal.org/xsd/accmd">
  <resourceDescription>
    <equivalent supplementary="true">
      <primaryResource>
        urn:uuid:1c9e9e80-424a-11d8-a414-0002b3af6db8
      </primaryResource>
      <content>
        <alternativesToAuditory xmlns="http://www.imsglobal.org/xsd/acclip">
          <captionType xml:lang="en">
            <reducedReadingLevel value="true"/>
            <reducedSpeed value="false"/>
            <enhancedCaption value="true"/>
          </captionType> 
        </alternativesToAuditory>
      </content>
    </equivalent>
  </resourceDescription>
</accessibility>

The above meta-data describes a supplementary caption file which has a reduced reading level, no reduced speed, and is enhanced. In other words, this caption file is meant to be used in conjunction with the original video to provide real-time, enhanced captioning with a reduced level of reading.

The final requirement is for the user to have an ACCLIP profile stating his/her needs or preferences relating to his/her hearing problems and difficulty in understanding English. The user edits an ACCLIP profile using a preference wizard as displayed below in Figure 8.5:

TILE screenshot of Alternatives to Auditory preference editing
Figure 8.5 TILE screenshot of Alternatives to Auditory preference editing.

The user specifies a requirement for reduced reading level and enhanced captions. The user's ACCLIP profile could be the following XML instance document:

<?xml version="1.0"?>
<accessForAll xmlns="http://www.imsglobal.org/xsd/acclip"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.imsglobal.org/xsd/acclip
                                  AccessForAllv1p0d27.xsd"> 
  <context identifier="x-LMS">
    <content>
      <alternativesToAuditory>
        <captionType xml:lang="en" usage="required">
          <reducedReadingLevel value="true"/>
          <reducedSpeed value="false"/>
          <enhancedCaption value="true"/>
        </captionType>
      </alternativesToAuditory>
    </content>
  </context>
</accessForAll>

When the above user requests to view the course on Globalization and International Migration containing the video of a lecture by Professor Stephen Castles, the system recognizes that the user requires an alternative to the auditory modality. It checks the video's equivalent resources and discovers that an equivalent exists with alternative to auditory characteristics which match the requirements of the user. Depending on the media type of the video, the necessary actions could be to switch on an embedded caption stream, locate and deliver a captioned version of the video, or (ideally) locate a caption file and augment the video with it at delivery. The system then displays the video with its supplementary captions as displayed below in Figure 8.6:

TILE screenshot of video with captions
Figure 8.6 TILE screenshot of video with captions.

The logic executed is as described in Figure 3.2 in the ACCMD Information Model [ACCMD, 04a] document (http://www.imsglobal.org/accessibility/accmdv1p0/imsaccmd_infov1p0.html), a segment of which is displayed in the following:

 
Step | Element to examine | Test
if primary has audio | resourceDescription.primary.hasAudio | true
and learner requests captions | accessForAll.alternativesToAuditory.captionType | exists
for each equivalent referenced | resourceDescription.primary.equivalentResource | (list)
examine equivalent's meta-data | |
if equivalent contains captions | resourceDescription.equivalent.content.alternativesToAuditory.captionType | exists
if captions are supplementary | resourceDescription.equivalent.supplementary | true
display orig and equiv | |
else | resourceDescription.equivalent.supplementary | false
display equivalent | |
if none of the LOs have captions | resourceDescription.equivalent.content.alternativesToAuditory.captionType | not exist
query user to display orig | |
....... | ....... | .......

 

About This Document

 
Title 1EdTech AccessForAll Meta-data Best Practice and Implementation Guide
Editor Alex Jackl (1EdTech)
Team Co-Leads Jutta Treviranus (Industry Canada), Anthony Roberts (Industry Canada)
Version 1.0
Version Date 12 July 2004
Status Final Specification
Summary This document provides best practice guidance and addresses implementation concerns regarding the AccessForAll Meta-data specifications. It references the related technical documents.
Revision Information 12 July 2004
Purpose Provides guidance for implementers of the AccessForAll Meta-data specification.
Document Location http://www.imsglobal.org/accessibility/accmdv1p0/imsaccmd_bestv1p0.html

 
To register comments or questions about this specification please visit: http://www.imsglobal.org/developers/ims/imsforum/categories.cfm?catid=16

List of Contributors

The following individuals contributed to the development of this document:

 
Name Organization
Anastasia Cheetham ATRC - U. Toronto, Industry Canada
Martyn Cooper Open University, UK
Eric Hansen Educational Testing Service (ETS), USA
Andy Heath Sheffield Hallam University, CEN-ISSS Learning Technologies Workshop APLR project, UK
Alex Jackl 1EdTech Consortium, Inc.
Liddy Nevile DEST, La Trobe University Australia
Anthony Roberts Industry Canada
Madeleine Rothberg WGBH National Center for Accessible Media, USA
Jutta Treviranus ATRC - U. Toronto, Industry Canada
David Weinkauf ATRC - U. Toronto, Industry Canada

Revision History

 
Version No. Release Date Comments
Base Document 1.0 02 February 2004 Initial version of the AccessForAll Meta-data Specification.
Final Specification 1.0 12 July 2004 This is the formal Final Specification of the 1EdTech AccessForAll Meta-data Best Practice and Implementation Guide.


 

 

 

1EdTech Consortium, Inc. ("1EdTech") is publishing the information contained in this 1EdTech AccessForAll Meta-data Best Practice and Implementation Guide ("Specification") for purposes of scientific, experimental, and scholarly collaboration only.

1EdTech makes no warranty or representation regarding the accuracy or completeness of the Specification.
This material is provided on an "As Is" and "As Available" basis.

The Specification is at all times subject to change and revision without notice.

It is your sole responsibility to evaluate the usefulness, accuracy, and completeness of the Specification as it relates to you.

1EdTech would appreciate receiving your comments and suggestions.

Please contact 1EdTech through our website at http://www.imsglobal.org

Please refer to Document Name:
1EdTech AccessForAll Meta-data Best Practice and Implementation Guide Revision: 12 July 2004