1EdTech AI-Generated Content Best Practices v1.0

Date Issued: December 15, 2025
Status: This is an informative 1EdTech document that may be revised at any time.

IPR and Distribution Notice

Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the specification set forth in this document, and to provide supporting documentation.

1EdTech takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any such rights. Information on 1EdTech's procedures with respect to rights in 1EdTech specifications can be found at the 1EdTech Intellectual Property Rights webpage: https://www.1edtech.org/sites/default/files/media/docs/2023/imsipr_policyFinal.pdf.

The following participating organizations have made explicit license commitments to this specification:

Org name | Date election made | Necessary claims | Type
Not applicable | November 21, 2025 | No | Royalty-free RAND License

Use of this specification to develop products or services is governed by the license with 1EdTech found on the 1EdTech website: https://www.1edtech.org/standards/specification-license.

Permission is granted to all parties to use excerpts from this document as needed in producing requests for proposals.

The limited permissions granted above are perpetual and will not be revoked by 1EdTech or its successors or assigns.

THIS SPECIFICATION IS BEING OFFERED WITHOUT ANY WARRANTY WHATSOEVER, AND IN PARTICULAR, ANY WARRANTY OF NONINFRINGEMENT IS EXPRESSLY DISCLAIMED. ANY USE OF THIS SPECIFICATION SHALL BE MADE ENTIRELY AT THE IMPLEMENTER'S OWN RISK, AND NEITHER THE CONSORTIUM, NOR ANY OF ITS MEMBERS OR SUBMITTERS, SHALL HAVE ANY LIABILITY WHATSOEVER TO ANY IMPLEMENTER OR THIRD PARTY FOR ANY DAMAGES OF ANY NATURE WHATSOEVER, DIRECTLY OR INDIRECTLY, ARISING FROM THE USE OF THIS SPECIFICATION.

Public contributions, comments, and questions should be directed to support@1edtech.org.

© 2025 1EdTech™ Consortium, Inc. All Rights Reserved.

Trademark information: https://www.1edtech.org/about/legal

Abstract

The AI-Generated Content Best Practices document was developed by the 1EdTech Consortium community in response to the growing role of generative artificial intelligence (GenAI) in education. As more educational content is created or refined by GenAI applications, content providers and technology platforms share a responsibility to ensure transparency, preserve trust, and communicate the level of human validation involved. At the same time, this document recognizes that AI-generated content is used daily worldwide by educators, students, and the workforce in applications and tools, and is increasingly the norm rather than a special use case. This work originated with the 1EdTech Assessment Product Steering Committee, whose members identified needs around tracking and compliance, ownership, and ethical considerations. Member use cases and metadata considerations informed the collaborative development of this guidance.

Currently, the marketplace offers limited direction for institutions and providers on how to annotate or disclose the use of GenAI in learning resources, assessments, and multimedia. At the same time, states and countries have begun drafting or enacting legislation related to watermarking, disclosure, or transparency of AI-generated content. These developments reinforce the need for a shared taxonomy and recommendations that can be adopted across the education sector.

This document offers recommendations, not requirements, to guide responsible use of AI-generated content. It provides a shared taxonomy and a set of practices, illustrated with representative use cases, to help organizations:

  • Communicate transparently when GenAI has been used to generate content, through clear labeling and disclosure. (Examples)
  • Ensure software providers and educators maintain provenance records and/or metadata that support trust, accountability, and portability across platforms, such as the model and version used, whether the model is self-hosted, the number of tokens used, the prompts, the date, and a link to the LLM. (Examples)
  • Apply risk-based oversight that differentiates between high-, medium-, and low-stakes use cases. This document is partially aligned with the EU AI Act in that its recommendations are based on risk-assessment levels of AI systems. (Examples)
  • Incorporate review by the organization’s subject matter experts (SMEs) for accuracy, rigor, and alignment with disciplinary and institutional standards. (Examples)
  • Verify accessibility needs and validate AI-generated supports such as alt-text, captions, or formatting. (Examples)
  • Establish storage practices that retain provenance, disclosures, and usage logs appropriate to context, in alignment with your organization’s retention policy where relevant. (Examples)
  • Support responsible use and integrity across publishers, educators, and students, with clear expectations and disclosure policies. (Examples)
  • Align with open interoperability standards such as 1EdTech’s QTI®, Common Cartridge®/Thin CC, LTI®, and Caliper Analytics®, so metadata and labels carry across systems. (More information)

Because the policy and technology landscape is evolving rapidly, this document will be reviewed and updated periodically. Its purpose is to provide a foundation for transparency and accountability now, while remaining flexible to adapt as GenAI’s role in education develops.

AI Use Disclosure Statement

This document was developed with contributions from both human authors and generative AI. Generative AI was used only to support drafting, summarizing, and refining language through an interactive chat process. All AI-assisted text was reviewed, validated, and edited by the 1EdTech Project Group to ensure accuracy, rigor, and alignment with 1EdTech’s mission and standards. Consistent with the recommendations in this document, this disclosure is provided to ensure transparency about the role of AI in its creation.

Introduction

Educational content, assessments, and multimedia are increasingly being created with the assistance of generative AI applications. These tools bring new opportunities for efficiency, localization, and innovation but also raise questions of transparency, accountability, and trust. To ensure clarity for educators, learners, and institutions, some organizations may be required to, or may choose to, disclose the role GenAI plays in content development and the level of the organization’s subject matter expert (SME) validation involved. This Best Practices document provides a shared framework to help the education community apply AI responsibly and consistently.

Background and Rationale

The rise of generative AI in education has created a need for common practices around annotation, disclosure, and validation. Without clear guidance, stakeholders may struggle to understand how GenAI contributes to content or whether the necessary safeguards are in place. Interest in developing this guidance grew as members raised concerns about tracking and compliance, transparency for trust and accountability, ownership, and ethical and security considerations. Members and 1EdTech agreed there was a clear need for a shared framework, leading to this collaborative effort. This document is designed to serve multiple contexts for institutions and providers. It responds to member-identified needs by offering a taxonomy and framework for annotating GenAI use, supporting both regulatory alignment and shared trust across the education ecosystem.

Scope

This document is intended for use by publishers, edtech providers, educators, and institutions across K–12 (primary and secondary), higher education, and workforce learning globally. It does not prescribe a single implementation pathway but instead provides recommendations, examples, and frameworks that organizations can adapt to their own contexts. It also recognizes that levels of AI use and restriction vary worldwide, and that some countries and regions have more or less stringent policies than others. The focus is on labeling, annotating, and reviewing AI-generated content in ways that ensure educational integrity, accessibility, and fairness.

Key audiences include:

  • Publishers and content providers who develop instructional and assessment materials
  • Institutions and educators who use GenAI-assisted tools to adapt and deliver content
  • EdTech companies integrating GenAI into platforms and services

Status and Conformance

This Best Practices document is non-normative—it does not establish certification, compliance, or conformance requirements. Instead, it offers recommendations, frameworks, and examples that institutions, providers, and publishers can adapt to their own contexts.

The guidance reflects input from member organizations, emerging state legislation, and market exemplars, but it is not prescriptive. 1EdTech encourages its members and the broader community to provide feedback and participate in the ongoing development of this work. To engage with the 1EdTech AI-Generated Content Best Practices Community, please visit the 1EdTech AI-Generated Content Best Practices Community (member login required) or learn more at 1EdTech.org.

For related guidance on issues such as privacy, bias, or hallucinations, members are encouraged to use companion resources such as the 1EdTech TrustEd Apps™ Generative AI Data Rubric, the AI Preparedness Checklist, or the 1EdTech AI Data Rubric (in development), or to contact 1EdTech at support@1edtech.org.

Guiding Principles

The following guiding principles are offered as recommendations for how AI-generated content should be generated, shared, and used in educational contexts. The principles apply across publisher-produced, educator-produced, student-produced, and hybrid content, though the emphasis may vary by context. In some cases, one principle may be more critical for a particular category than another, but together they serve as a shared foundation for responsible practice across the ecosystem. These principles are intended to provide a common foundation that institutions and providers can adapt to their own needs.

Transparency

Stakeholders benefit when it is clear where GenAI was used, what role it played, and what level of subject matter expert oversight was applied. Publisher-produced content may call for structured labeling and metadata; educator-produced and student-produced content may emphasize classroom-level disclosure; and hybrid content may combine both approaches.

Accountability

Human experts play a central role in ensuring accuracy, rigor, and appropriateness. For publisher-produced and hybrid content, subject matter experts are encouraged to validate outputs; for educator-produced content, teachers are best positioned to provide oversight; and for student-produced work, guardrails can help keep the focus on learning objectives.

Fairness & Non-Discrimination

GenAI can support equity and inclusion when carefully reviewed. Publisher and hybrid content may benefit from formal bias audits, while educator- and student-produced content may focus more on cultural sensitivity and fairness in classroom interactions.

Accessibility

AI-generated or GenAI-assisted content should be designed with accessibility in mind, following internationally recognized standards such as WCAG, EN 301 549, and WAI-ARIA, where applicable. While not a mandatory requirement in this policy, adherence to these standards is strongly recommended to help ensure that digital content is perceivable, operable, understandable, and robust for users with diverse abilities.

Trust

The use of GenAI should strengthen, not diminish, confidence in educational resources. For publisher and hybrid content, practices such as provenance tracking and SME review help build trust. Transparency and consistent use of the guidance will help students and educators to build trust in the content.

Current Landscape

Research and Legislation

The development of this Best Practices document is based on both member research and emerging legislation from around the world, such as US state laws and the European Union’s Artificial Intelligence (AI) Act. 1EdTech members and staff have compiled resources from across the community and reviewed some of the recent state-level laws that address transparency, disclosure, and oversight in GenAI use.

Research

Research highlights several consistent themes:

  • Transparency builds trust. Educators, administrators, and learners want clarity about when and how GenAI has been used.
    • Teachers’ trust increases when the reasoning or “how” behind GenAI-powered suggestions or content is explained in domain-appropriate terms. That has implications for content generated by AI: the more understandable and transparent the content is (how it was generated, what data or assumptions were used), the more likely educators are to trust it.1
    • Empirical research shows that AI-generated educational content can inspire trust among learners when it matches human content in correctness and helpfulness2, and when transparency about how the content was generated is clear3. These findings suggest that disclosure, subject matter expert review, and metadata that describes GenAI’s role are not just compliance matters but foundational to the perceived credibility of AI-generated content. Ownership of the metadata rests with the edtech provider or the institution, depending on the RFP process; institutions can request ownership of the content or metadata in their procurement requirement documents.
  • Subject matter expert oversight is essential. AI-generated content requires human validation to ensure accuracy, rigor, and alignment with standards. GenAI models also have a tendency to repeat topics or themes, and human oversight ensures content diversity in the item development and form construction processes.
    • Studies show that while GenAI tools (like ChatGPT, Claude, or Gemini) can generate helpful content (hints, practice items, feedback), they also make errors. For instance, in math education, some feedback from GenAI was incorrect (in some cases with error rates above 30 percent in certain topics). Research findings indicate that GenAI should not be relied on without human validation; instead, its outputs are best viewed as draft or assistive inputs that require expert review before being used in instruction, lessons, assessments, or study materials.4
    • Generative AI often produces misinformation or misconceptions, particularly in math, unless careful oversight, prompt engineering, or fine-tuning is used. The implication is that AI-generated content cannot be used “as is” for high-stakes or learning-critical contexts without subject matter expert validation to catch misleading or incorrect content.5
  • Bias and accessibility should be monitored closely. Studies underscore the risk of inequitable outcomes without intentional safeguards.
    • While GenAI tools can improve engagement, motivation, and personalized learning, there are also key challenges, including potential inaccuracies, bias, and threats to academic integrity. To benefit students equitably, use of GenAI tools must be supplemented by teacher oversight and reflection.6
    • Study participants expressed concerns about factual inaccuracy, contextually inappropriate content, and bias in GenAI outputs. Without safeguards (such as review, testing, transparency, and localization), inequities are more likely to arise, especially across cultural and language contexts.7 For guidance, refer to the 1EdTech AI Data Rubric.
    • GenAI presents both opportunities and challenges for inclusive education for learners with disabilities. The challenges include accessibility issues, bias in tools, and discrepancies in benefit depending on resource levels. Without intentional design, testing, and review, GenAI risks reinforcing existing inequities.8

These findings align directly with the guiding principles of this document and provide evidence that responsible GenAI practices are not just regulatory expectations but also professional best practices. Examples from organizations such as Carnegie Learning, ETS, Digital Promise, North Carolina Department of Public Instruction, and others demonstrate how these principles are being put into practice today (Examples).

Legislation

AI-generated content does not exist in a regulatory vacuum. While this document offers broad recommendations, institutions and providers must also remain attentive to state-specific legislation and policy guidance. Requirements can differ significantly by jurisdiction, and in many cases, rules are still evolving. To reduce compliance risks, organizations should review applicable state laws and adapt their practices accordingly.

While the landscape is constantly evolving, a few illustrative examples include:

  • 2023:
    • Brazil – AI Bill (PL 2338/2023): Introduces obligations for transparency, explainability, and risk management, influencing expectations for AI-generated content used in education and public services.
    • China – Measures for the Management of Generative AI Services (2023): Requires providers to label AI-generated content, ensure it is accurate and lawful, and maintain mechanisms for public complaint and accountability.
  • 2024:
    • California (September 2024) requires watermarking of content that was created using AI. It also requires developers of generative AI systems to disclose information about the data used to train them.
    • Utah (Artificial Intelligence Policy Act (2024)) emphasizes consumer transparency, including disclosures when individuals ask if they are engaging with AI.
    • Colorado (Colorado Artificial Intelligence Act, 2024–2026) requires developers of high-risk AI systems to publish statements describing system types, foreseeable risks of algorithmic discrimination, and mitigation strategies.
    • European Union – Artificial Intelligence Act (2024): Establishes a risk-based framework for AI systems, including transparency requirements for generative models and obligations for content labeling and training data disclosure for foundation models.
    • Singapore – Model AI Governance Framework for Generative AI (2024): Non-binding guidance outlining provenance, data transparency, and content labeling practices for responsible AI deployment.
    • India – 2024 Advisory on AI Systems: Directs developers to label AI-generated outputs that may be unreliable and to obtain approval before public deployment of generative tools in sensitive contexts.
    • South Africa – Draft National AI Policy Framework (2024): Establishes guiding principles emphasizing accountability and transparency for AI systems, with potential future expansion into content labeling.
    • United Arab Emirates – SDAIA and Dubai AI Guidelines (2024): Issue best practice recommendations for responsible use of generative AI, including restrictions on misleading or synthetic media in public communication.
  • 2025:
    • Texas (Texas Responsible Artificial Intelligence Governance Act (TRAIGA) July 2025) mandates disclosure when citizens interact with AI tools operated by state agencies.
  • Other states (Montana, New York, Oregon, Connecticut) have introduced legislation or issued guidance on disclosure, human review, and transparency.

Together, these examples demonstrate a converging global expectation: organizations deploying AI-generated content must maintain transparency, human oversight, and accountability across all jurisdictions where their content is used.

Common Themes

Across these initiatives, several themes consistently emerge:

  • Transparency and disclosure when GenAI is used in content creation or interaction.
  • Human or subject matter expert oversight where GenAI plays a significant role.
  • Risk management and bias mitigation for high-impact systems.

These themes directly support the scope of this Best Practices document, reinforcing the need for labeling, provenance tracking, and validation by experts.

Ongoing Monitoring

Because legislation is evolving rapidly, organizations and individuals are responsible for consulting the most current legislative requirements before making decisions. This document does not attempt to catalog or track those changes, but instead highlights why attention to legislative context is essential.

Implementation Framework

The Implementation Framework provides a practical pathway for applying best practice recommendations in institutional and provider settings. While the document does not prescribe one “right way” to operationalize AI-generated content practices, it offers a structured approach that organizations can adapt to their own needs, capacities, and regulatory environments.

The framework is designed to translate principles into action: helping institutions and providers move from guidelines to application. It emphasizes that implementation is not a single event, but an ongoing process of validation, adoption, and refinement as both GenAI technologies and policy contexts evolve.

By following the steps outlined in this framework—covering areas such as provenance tracking, labeling, accessibility checks, metadata storage, and alignment with open standards—organizations can begin to embed transparency and accountability into their existing workflows. Institutions may choose to phase in practices gradually, prioritize high-risk content first, or align implementation with other institutional initiatives, but the goal remains the same: to create an ecosystem where AI-generated content is trustworthy, discoverable, and responsibly integrated.

Best Practice Recommendations for Implementing AI-Generated Content

The Implementation Framework translates guiding principles into recommended practices and options that institutions and providers can adopt. While approaches may vary by context, the following recommendations are consistently emphasized across this document:

  1. Transparency and Labeling
    • Clearly label when and how GenAI contributed to content creation.
    • Ensure disclosure metadata follows content across platforms (via TCC, QTI, etc.). (Examples)
  2. Provenance Tracking
    • Maintain records of GenAI involvement, model version, and human oversight.
    • Store provenance metadata with content to support auditability and accountability; a minimal record sketch follows this list. (Examples)
  3. Risk-Based Oversight
    • Apply different levels of review depending on whether content is high-, medium-, or low-risk in educational outcomes.
    • Prioritize subject matter expert validation for high-stakes use. (Examples)
  4. Subject Matter Expert (SME) Review, Where the Organization Requires It
    • Require human expert validation to ensure accuracy, rigor, and alignment with standards, and provide transparency about the review (when and how it happened and, potentially, by whom).
    • An organization’s staff, educators, or SMEs should fact-check content for accuracy, evaluate it for potential bias, and review it for accessibility compliance, with particular attention to disciplines such as science, mathematics, and social studies, where errors or omissions may carry greater consequences. (Examples)
  5. Bias and Accessibility Checks
    • Conduct bias audits to identify inequities; assess content for ableist terms, harmful stereotypes, or misleading information about disability or disability-related topics.
    • Validate conformance to accessibility standards and compatibility with assistive technology.
    • Continuously test and monitor AI systems for bias using defined fairness metrics and ongoing post-deployment checks to ensure equitable outcomes.
    • See the 1EdTech TrustEd Apps Accessibility Rubric and the 1EdTech AI Data Rubric for more information. (Examples)
  6. Metadata & Storage Practices
    • Store GenAI-use disclosures and provenance together with content.
    • Adopt versioning and audit trails, especially in high-stakes contexts.
    • Comply with regional privacy and accessibility laws, based on country or region (e.g., GDPR, FERPA, ADA, EAA). (Examples)
  7. Alignment with Open Standards
    • Use interoperability standards (QTI, Common Cartridge/Thin CC, LTI, Caliper) to ensure transparency and metadata portability across systems. Organizations that use interoperability standards may want to add GenAI annotations in the metadata. (Examples)
  8. Public GenAI-Use Statements
    • Publish organizational statements summarizing how AI is used, for what purposes, and what safeguards are in place.
    • Link these policies to content-level metadata for a layered approach to trust. (Examples)

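To make recommendations 1, 2, and 6 concrete, the sketch below shows one possible shape for a provenance and disclosure record stored with a content item. It is illustrative only: neither this document nor any 1EdTech specification defines these field names, and the Python example should be adapted to an organization’s own schema and retention policies.

  # Illustrative sketch only: field names are hypothetical, not a 1EdTech-defined schema.
  from dataclasses import dataclass, field, asdict
  from datetime import date
  import json

  @dataclass
  class GenAIProvenanceRecord:
      """Minimal provenance and disclosure metadata kept with a content item."""
      content_id: str
      disclosure_label: str            # e.g., "GenAI-Co-Created, Human-Guided" (see Labeling)
      model: str                       # model used, e.g., a placeholder "example-llm"
      model_version: str
      self_hosted: bool                # whether the organization hosts the model itself
      created: date
      risk_tier: str                   # "high" | "medium" | "low" (see Risk Rating Criteria)
      sme_reviewed: bool
      reviewers: list[str] = field(default_factory=list)
      prompts: list[str] = field(default_factory=list)

  record = GenAIProvenanceRecord(
      content_id="item-0042",
      disclosure_label="GenAI-Co-Created, Human-Guided",
      model="example-llm", model_version="2025-06",
      self_hosted=False, created=date(2025, 12, 1),
      risk_tier="medium", sme_reviewed=True,
      reviewers=["math-sme-01"],
      prompts=["Draft three practice items on ratios."],
  )
  print(json.dumps(asdict(record), default=str, indent=2))
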
This section breaks these best practices into specific focus areas—such as risk rating, subject-specific review, accessibility, metadata and storage, and standards alignment—that help institutions and providers put the principles into action. Each subsection offers a lens on implementation: how to recognize different levels of risk, how to address the nuances of content across disciplines, how to safeguard accessibility, and how to ensure transparency and portability through metadata and standards. Taken together, these elements form the building blocks of a practical framework for responsible use of AI-generated content.

Content Categories

AI-generated content in education can be grouped into several categories, each with distinct risks and oversight needs. Publisher-produced content requires the highest standards of transparency, provenance, and bias auditing, as it forms the foundation of curriculum and assessment. Educator-produced content, while more localized, still calls for clear disclosure of GenAI assistance and safeguards against bias.

Student-produced content raises unique concerns around fairness and academic integrity, especially when GenAI influences evaluation or feedback. Finally, hybrid or co-produced content should always follow the stricter requirements of the categories involved, with clear documentation of the human and GenAI contributions. By tailoring oversight to these categories, organizations can ensure compliance, accountability, and trust across the learning ecosystem.

The categories below list representative examples and the recommended practices for each.

Publisher-Produced. Examples: lessons, textbooks, assessments, digital interactives, multimedia assets, and other course content. Recommended practices:
  • Provenance tracking – Maintain records of GenAI involvement (see §4.5 Storage).
  • Risk assessment – Apply NIST-aligned review for high-stakes content (see §4.3 Risk Rating Criteria).
  • Bias & accessibility checks – Conduct audits for fairness and WCAG-aligned validation (see §4.6 Accessibility Considerations).
  • Transparency & labeling – Use clear GenAI-use labels across platforms (see §4.7 Labeling).
  • Responsible use & integrity – Ensure policies governing responsible GenAI use and disclosure are followed (see §4.8 Responsible Use & Academic Integrity).
  • Standards alignment – Ensure metadata portability via QTI, TCC, LTI, Caliper (see §6 Standards Alignment).
  • Public GenAI-use statement – Provide organizational-level disclosure (see §5.3 Public GenAI-Use Statements).
Educator-Produced. Examples: custom lesson plans, localized materials, rubrics. Recommended practices:
  • Transparency & labeling – Provide classroom-level disclosure (see §4.7 Labeling).
  • Provenance tracking – Maintain brief records of GenAI assistance + edits (see §4.5 Storage).
  • Risk-based oversight – Apply SME/teacher validation where outcomes are affected (see §4.3 Risk Rating Criteria).
  • Bias & accessibility checks – Review AI-generated alt-text/captions/examples (see §4.6 Accessibility Considerations).
  • Responsible use & integrity – Follow institutional policies for ethical GenAI use (see §4.8 Responsible Use & Academic Integrity).
  • Metadata & storage – Retain label + provenance with shared materials (see §4.5 Storage).
  • Standards alignment (where applicable) – Labels persist when sharing via TCC/LTI (see §6 Standards Alignment).
  • Usage logs – Track GenAI tool use for accountability (see §4.5 Storage).
Student-Produced. Examples: essays, projects, practice activities with GenAI help. Recommended practices:
  • Transparency & labeling – Students disclose GenAI contributions, using discipline-specific citation styles where applicable (e.g., citing AI use in APA format) (see §4.7 Labeling).
  • Risk-based oversight – Apply closer review when work affects grades/placement (see §4.3 Risk Rating Criteria).
  • Responsible use & integrity – Clarify permitted vs. prohibited uses in alignment with academic integrity policies (see §4.8 Responsible Use & Academic Integrity).
  • Academic integrity alignment – Clarify permitted uses (see §4.2 Content Categories).
  • Bias & accessibility awareness – Flag issues in generated media (see §4.6 Accessibility Considerations).
  • Metadata & storage – Retain disclosure with submissions in the LMS (see §4.5 Storage).
Hybrid. Examples: GenAI and human co-authored items, adapted lessons. Recommended practices:
  • Transparency & labeling at multiple touchpoints – Disclose GenAI contributions at creation and delivery (see §4.7 Labeling).
  • Dual provenance tracking – Record both human + GenAI contributions (see §4.5 Storage).
  • Risk-based oversight + SME review – Apply the stricter standard where outcomes are high stakes (see §4.3 Risk Rating Criteria and §4.4 Subject-Specific Concerns).
  • Bias & accessibility checks – Ensure fairness and usability (see §4.6 Accessibility Considerations).
  • Metadata portability & storage – Ensure provenance/labels persist across systems (see §4.5 Storage).
  • Standards alignment – Apply QTI, CC, TCC, Caliper as relevant (see §6 Standards Alignment).

Risk Rating Criteria

AI-generated content can play very different roles in education, from influencing grading and placement decisions to supporting lesson planning or providing drafting assistance. Because these uses carry different levels of potential impact, this framework proposes a three-tier risk model. The categories that follow—high, medium, and low risk—are offered as recommendations to help institutions and providers reflect on how AI-generated content may affect educational outcomes and what level of oversight may be most appropriate. This model is not prescriptive, but rather one way to approach risk in a structured, transparent manner. Institutions and individuals are encouraged to adapt the categories to their own policies, risk tolerance, and regulatory environment.

High Risk

GenAI systems that directly influence educational outcomes (grades, placement, credentialing, or access to resources) or are used in the delivery of regulated professional services (e.g., licensed teacher functions, counseling). Also includes systems that process sensitive personal data where errors or bias could cause harm.

  • Publisher-produced: Automated scoring systems, adaptive assessments, placement tools, high-stakes assessments (adaptive or otherwise) including entry exams, summative assessments, credentialing and certification exams.

  • Educator-produced: GenAI delivering direct instruction in a licensed teaching capacity, or making evaluative judgments

  • Student-produced: GenAI used in assignments that directly determine grades or advancement

  • Hybrid: Any co-produced content where the GenAI component contributes to grading, credentialing, or regulated service delivery

Medium Risk

GenAI systems that shape instructional content or learning experiences but do not directly determine outcomes. These tools guide or adapt learning while requiring human oversight for evaluation.

  • Publisher-produced: AI-generated lesson plans, formative assessments, or practice sets
  • Educator-produced: AI-generated rubrics, scaffolds, or classroom activities
  • Student-produced: GenAI used for drafts, outlines, or practice work that is not graded
  • Hybrid: Co-produced materials where GenAI provides instructional suggestions but outcomes remain human-determined

Low Risk

GenAI systems used for enrichment, drafting, or support without affecting grades, placement, or regulated services. These systems enhance efficiency, creativity, or productivity but carry minimal compliance risk.

  • Publisher-produced: GenAI drafting metadata, alt-text, or summaries
  • Educator-produced: GenAI tools for brainstorming, resource discovery, or translation aids
  • Student-produced: GenAI used for study support, practice prompts, or idea generation
  • Hybrid: Co-produced content that is exploratory or optional, with no impact on evaluation

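One way to operationalize this three-tier model in a review workflow is a simple classification step. The sketch below merely restates the tier descriptions above as code; the parameter names and rules are illustrative, not normative criteria, and real policies will differ by organization.

  # Illustrative restatement of the three-tier model; criteria are organization-specific.
  def risk_tier(affects_outcomes: bool, regulated_service: bool,
                sensitive_personal_data: bool, shapes_instruction: bool) -> str:
      """Classify a GenAI use case as 'high', 'medium', or 'low' risk."""
      if affects_outcomes or regulated_service or sensitive_personal_data:
          return "high"    # e.g., automated scoring, placement, credentialing
      if shapes_instruction:
          return "medium"  # e.g., AI-generated lesson plans or formative items
      return "low"         # e.g., brainstorming, drafting alt-text or summaries

  # An adaptive placement tool is high risk; a brainstorming aid is low risk.
  assert risk_tier(True, False, False, False) == "high"
  assert risk_tier(False, False, False, False) == "low"
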
Subject-Specific Concerns

AI-generated content presents different challenges depending on the subject area. In some disciplines, accuracy is largely objective, while in others, nuance and interpretation play a larger role. To maintain quality and trust, GenAI outputs should be reviewed with subject-specific considerations in mind:

  • Math – Accuracy is critical, both in problem-solving and in the presentation of equations or symbols. Best practices include rigorous accuracy checking, verification of solutions, and ensuring styling aligns with established mathematical conventions.
  • Language and Literacy – Whether in English or any other language, AI-generated text must be fact-checked for accuracy and reviewed for clarity, tone, and grade-level appropriateness. In addition, linguistic precision and cultural sensitivity are essential, particularly in world language instruction, where errors in vocabulary, grammar, or context can distort meaning. Best practices include ensuring fidelity to the target language, avoiding literal translations that lose nuance, and maintaining cultural authenticity.
  • Science – GenAI outputs must be checked against authoritative sources to confirm factual accuracy. Additionally, scientific terminology, units of measure, and formatting should be consistent with domain standards.
  • Social Studies – Fact-checking is especially important to avoid inaccuracies, oversimplification, or biased perspectives. Content should reflect multiple viewpoints where appropriate and align with curricular frameworks.
  • Other Disciplines – Beyond core subjects, AI-generated content must also be reviewed with discipline-specific nuances in mind. In the Arts, reviewers should confirm that creative expression is authentic and appropriately contextualized. In technical and vocational subjects, accuracy of procedures, tools, and terminology is essential. For interdisciplinary or project-based learning, GenAI outputs should be checked for coherence across domains and for alignment with intended learning outcomes. While the details vary by field, the guiding principle remains the same: subject matter experts must validate GenAI contributions to ensure accuracy, appropriateness, and educational integrity.

By tailoring review processes to subject-specific needs, organizations can reduce the risks of error or bias while ensuring AI-generated content maintains pedagogical integrity across the curriculum.

Metadata and Storage

Storage practices for AI-generated content should be designed to support transparency, accountability, and compliance across the full lifecycle of content. While the recommendations below reflect broadly applicable best practices, institutions and providers must recognize that storage policies may be shaped by additional factors such as sector requirements, institutional risk tolerance, and the stakes of the content involved.

Recommended practices include:

  • Store Content and Metadata Together. AI-generated content should always be stored with its associated metadata—provenance records, disclosure labels, timestamps, training reuse declarations, and review history—so that the context of creation is preserved. Separating metadata from content risks losing critical transparency and accountability information. Ownership of the content, metadata, and prompts rests with the institution and/or the technology provider, depending on the use case and the ownership agreement established in the procurement process.
  • Enable Interoperability Across Platforms. Because content often originates in one platform and is delivered through another, storage systems should ensure metadata portability. Standards such as Thin Common Cartridge (TCC) and Learning Tools Interoperability (LTI) make it possible for disclosure and provenance information to remain intact across systems.
  • Implement Versioning and Audit Trails. AI-generated content often evolves through multiple cycles of human and GenAI contributions. Some organizations, like summative assessment providers, store AI-generated content. Depending on the organization’s policies around storing content, the best practice is to maintain version control and audit logs that record when GenAI was used, the nature of changes, the model version employed, and who validated the outputs; a minimal audit-log sketch follows this list. (See ETS’s example) This helps ensure compliance and supports institutional accountability.
  • Record Training Reuse Declarations. Metadata should indicate whether content, student data, or educator-generated materials will be used to train or refine GenAI models. This practice clarifies ownership, permissions, and downstream risks and ensures alignment with institutional and legal expectations.
  • Ensure Security and Compliance. Storage must align with regional data protection and privacy laws (e.g., FERPA, GDPR) and institutional requirements. This includes access controls, encryption, retention policies, and audit readiness.
  • Sector-Specific Requirements. In some contexts, additional storage considerations apply. For example, high-stakes assessment organizations such as ETS have emphasized stricter retention, audit, and evidentiary standards for assessment materials. Where AI is used in summative assessment, placement, or credentialing contexts, storage solutions should reflect the higher compliance and legal risks associated with these use cases.

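As referenced in the versioning and audit-trail item above, the following minimal sketch shows an append-only log kept alongside content. It is a simplified illustration: the field names, actions, and storage choices are hypothetical and would be defined by the organization’s own policies.

  # Minimal sketch of an append-only audit trail; schema and retention are
  # organization-specific, and the names used here are illustrative.
  from datetime import datetime, timezone

  audit_log: list[dict] = []   # in practice, a database table or object store

  def log_event(content_id: str, action: str, actor: str,
                model: str = "", notes: str = "") -> None:
      """Record when GenAI was used, what changed, and who validated it."""
      audit_log.append({
          "content_id": content_id,
          "action": action,        # e.g., "ai_draft", "sme_review", "publish"
          "actor": actor,          # the person or system responsible
          "model": model,          # model/version when GenAI was involved
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "notes": notes,
      })

  log_event("item-0042", "ai_draft", "authoring-service", model="example-llm/2025-06")
  log_event("item-0042", "sme_review", "math-sme-01", notes="Corrected distractor B.")
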
These practices are consistent with emerging frameworks such as the 1EdTech TrustEd Apps Generative AI Data Rubric, which recommends disclosure of when GenAI is in use, what data sources are involved, and whether stakeholders may opt in or out of training reuse. Aligning with these practices ensures storage solutions not only meet compliance needs but also reinforce trust in AI-generated educational content.

Accessibility Considerations

Accessibility is a critical dimension of AI-generated content, as automated tools can both introduce new opportunities for inclusion and create risks if outputs are not carefully reviewed. Unlike human-created materials, AI-generated text, images, and multimedia often lack built-in safeguards for accessibility. For example, GenAI may generate alt-text that is vague or inaccurate, produce transcripts that omit context or contain inaccuracies, or create diagrams that are visually polished but unusable with assistive technologies.

Institutions and providers are encouraged to treat accessibility as an integrated part of the AI content workflow. This includes:

  • Checking AI-generated outputs against relevant accessibility standards, frameworks, and laws to ensure all learners can engage equitably with content. An example of a framework is the Framework for Accessible and Equitable AI in Education. Key frameworks and laws include WCAG 2.1 AA and Section 508 (U.S.), the European Accessibility Act (EU), and the Accessible Canada Act. Institutions and providers should also be mindful of regional laws in other jurisdictions, such as the New Zealand Web Accessibility Laws, India’s Rights of Persons with Disabilities Act, Brazil’s eMAG (Electronic Government Accessibility Model), and the Dubai Universal Design Code. These frameworks, while varied, share a common expectation: AI-generated educational content must meet accessibility requirements to avoid exclusion and ensure compliance.
  • Validating AI-generated metadata and alt-text to ensure it accurately conveys meaning, particularly for complex content such as math equations, scientific diagrams, or charts.
  • Reviewing automatically generated captions or transcripts for accuracy, since GenAI systems often misinterpret technical or domain-specific vocabulary or fail to recognize the varied ways people with disabilities communicate.
  • Applying universal design principles so that AI-generated materials support diverse learning needs, regardless of jurisdiction, and ensuring compatibility with assistive technology. (See the 1EdTech TrustEd Apps Accessibility Rubric for more information)

Because accessibility laws vary by region, institutions and providers should follow the requirements applicable to their context (e.g., ADA in the U.S., EAA in Europe) while adopting a proactive approach that treats accessibility as a baseline expectation for all AI-generated content. Embedding accessibility early reduces remediation costs and ensures that machine-generated materials support equitable participation for all learners.

Labeling AI-Generated Content

There are multiple ways institutions and publishers can disclose the role of GenAI in educational content, but what matters most is that a clear labeling system is in place. The GenAI Content Transparency Rating System presented here is one example of how content can be categorized to show the degree of GenAI involvement and the extent of subject matter expert review. By using a structured approach—whether this one or a comparable model—organizations can ensure transparency, build trust with educators and learners, and meet emerging regulatory expectations, while adhering to professional citation standards. The framework that follows illustrates one labeling scheme, demonstrating how content can be classified from fully human-created to fully AI-generated.

Human-Created, GenAI-Assisted
  • Definition: GenAI is used only for minor support tasks like spelling, grammar checking, or formatting. All educational content is fully human-created.
  • How it affects educators and students: Educators can trust this content as traditionally developed. No AI-generated material affects instruction.
  • Example use cases: A textbook manually written by a subject matter expert, with GenAI-powered spell-check or formatting tools used in editing.

Human-Led, GenAI-Enhanced
  • Definition: GenAI is used for content refinement or optimization, but a human controls the instructional design and pedagogical intent. GenAI does not generate core instructional content.
  • How it affects educators and students: GenAI plays a supporting role in making content more accessible, localized, or structured, but subject matter experts remain the primary knowledge source.
  • Example use cases: GenAI-assisted validation of alignment to content specifications, GenAI-assisted reading level adjustments, GenAI-supported localization for different student demographics.

GenAI-Co-Created, Human-Guided
  • Definition: GenAI creates significant portions of the content, but human experts guide and refine it to ensure educational accuracy.
  • How it affects educators and students: Educators and students should review critically, as GenAI may introduce biases or errors. However, human oversight ensures quality control.
  • Example use cases: GenAI-generated practice questions reviewed and selected by teachers, GenAI-assisted curriculum planning with final approval by an instructional designer.

GenAI-Dominant, Human-Supervised
  • Definition: GenAI generates most or all of the content, and humans only review or approve it without deep revisions.
  • How it affects educators and students: Educators should exercise caution, as AI-generated content may have factual inaccuracies, lack pedagogical depth, or require human verification.
  • Example use cases: AI-generated quizzes auto-published in an LMS, GenAI-created instructional videos reviewed but not modified by educators.

Fully AI-Generated
  • Definition: GenAI autonomously generates and publishes content with minimal or no human oversight. No human expert reviews, refines, or approves the material.
  • How it affects educators and students: Educators should validate before using; this content may lack curricular alignment, educational rigor, or cultural nuance. Not recommended for high-stakes learning.
  • Example use cases: AI-generated lesson plans, self-adapting learning modules without human review, GenAI-created textbooks published directly without educator intervention.

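For systems that implement such a rating scheme, the five labels can be treated as a small controlled vocabulary attached to content metadata. The sketch below expresses the labels in Python; it is one possible encoding, not a normative 1EdTech vocabulary.

  # The five transparency labels from the framework above, as a small
  # controlled vocabulary; a sketch, not a normative list.
  from enum import Enum

  class GenAILabel(Enum):
      HUMAN_CREATED_AI_ASSISTED = "Human-Created, GenAI-Assisted"
      HUMAN_LED_AI_ENHANCED = "Human-Led, GenAI-Enhanced"
      AI_CO_CREATED_HUMAN_GUIDED = "GenAI-Co-Created, Human-Guided"
      AI_DOMINANT_HUMAN_SUPERVISED = "GenAI-Dominant, Human-Supervised"
      FULLY_AI_GENERATED = "Fully AI-Generated"

  # Per the framework, content at these levels warrants closer educator review.
  NEEDS_CLOSER_REVIEW = {GenAILabel.AI_DOMINANT_HUMAN_SUPERVISED,
                         GenAILabel.FULLY_AI_GENERATED}
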
Responsible Use and Academic Integrity

Best practices for AI-generated content extend beyond technical considerations to include the responsible conduct of all content producers. Whether the creator is a publisher, educator, or student, transparency and disclosure are essential for maintaining trust, academic integrity, and alignment with institutional or professional standards.

Recommended practices include:

  • Clear disclosure of GenAI contributions by any content producer (publisher, educator, or student), with labeling that makes the extent of AI involvement visible.
  • Alignment with institutional or professional integrity codes, so GenAI use is governed by the same principles as citation, authorship, and originality.
  • Role-appropriate oversight — publishers ensure compliance and SME review; educators validate appropriateness for learning; students differentiate their own work from GenAI support.
  • Consistent policies and consequences, communicated institution-wide, so expectations for responsible GenAI use are clear regardless of role.

Metadata and Storage

Strong metadata and storage practices are essential to ensuring transparency, accountability, and trust in AI-generated content. These practices help organizations track how content was created, communicate GenAI involvement clearly to end users, and maintain consistency when content is delivered across multiple platforms. The following best practice recommendations address provenance, disclosure, and public transparency.

Provenance Tracking

A recommended best practice is to maintain provenance records that document the origin and history of AI-generated content. These records should capture information such as the GenAI system or model used, the version, the date of creation, and the role of human or subject matter expert oversight. By storing provenance metadata alongside the content itself, institutions and providers can address questions of authorship, accuracy, bias, or compliance long after the content is delivered. Provenance metadata also enables interoperability, ensuring that information about content creation persists when content moves between systems.

Disclosure Statements

Disclosure statements are recommended as a practical way to inform users that GenAI contributed to the creation of content. These statements can take the form of visible labels, annotations, or notes embedded within platforms. In addition to surfacing at the point of use, disclosure information should be stored as metadata, linked directly to the labeling strategy (e.g., human-authored, GenAI-assisted, AI-generated). This approach ensures that disclosure remains consistent when content is shared across platforms. For example, metadata carried through 1EdTech’s Common Cartridge® (CC), Thin Common Cartridge (TCC), or Question & Test Interoperability® (QTI®) can allow a disclosure label applied by a publisher to be displayed in the institution’s LMS, even though the delivery platform differs from the origination platform. This consistency is a cornerstone of trust and transparency. This group has not defined a technical solution for the metadata format or model within these standards specifications, but suggests exploring a suitable format in the future with the technical experts in those standards’ groups.

Public GenAI Use Statements

As a complement to provenance tracking and disclosure labels, organizations are encouraged to publish public GenAI-use statements or policies describing how GenAI is used across their content and services. These statements should outline the purposes of GenAI use, safeguards in place to protect fairness and accessibility, and the organization’s overall approach to responsible GenAI. A public statement builds credibility with educators, learners, and policymakers, while also signaling alignment with emerging regulatory expectations. Linking organizational statements with metadata practices and labeling strategies creates a layered approach: detailed provenance at the content level, consistent disclosure at the point of use, and organizational accountability through public communication.

Standards Alignment

To ensure that information about AI-generated content is consistently communicated across platforms, institutions and providers are encouraged to align metadata and labeling practices with existing 1EdTech interoperability standards that include metadata (e.g., 1EdTech’s Common Cartridge®, Thin CC, Question & Test Interoperability® (QTI®), Caliper Analytics®, or Resource Search®) or that can be part of metadata (e.g., 1EdTech’s Competencies and Academic Standards Exchange®, CASE®). These standards provide the technical framework for content portability, provenance tracking, and disclosure, making it possible for GenAI-use information to persist from origination through delivery.

Question and Test Interoperability (QTI)

For AI-generated or GenAI-assisted assessments, metadata should be embedded using QTI structures. This allows provenance and disclosure information—such as whether items were generated with GenAI or validated by a subject matter expert—to be preserved when assessments are exchanged between authoring systems and delivery platforms. Best practice is to use QTI’s extensible metadata fields to capture GenAI-related annotations without altering the underlying assessment logic.

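As a purely hypothetical illustration of the extensible-metadata approach, the sketch below builds a small GenAI annotation block under an invented namespace. The namespace and element names are assumptions for illustration only; as noted elsewhere in this document, no metadata format or model has yet been defined within these standards.

  # Hypothetical illustration of a GenAI annotation block that could sit in a
  # QTI item's extensible metadata. The namespace and element names are invented
  # for this sketch; no 1EdTech metadata model is defined here.
  import xml.etree.ElementTree as ET

  NS = "https://example.org/genai-metadata"   # placeholder namespace
  ET.register_namespace("genai", NS)

  meta = ET.Element(f"{{{NS}}}genaiMetadata")
  ET.SubElement(meta, f"{{{NS}}}disclosureLabel").text = "GenAI-Co-Created, Human-Guided"
  ET.SubElement(meta, f"{{{NS}}}model").text = "example-llm/2025-06"
  ET.SubElement(meta, f"{{{NS}}}smeValidated").text = "true"

  print(ET.tostring(meta, encoding="unicode"))
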
Common Cartridge/Thin Common Cartridge (CC and Thin CC)

AI-generated content used in course materials can be packaged with metadata labels in Common Cartridge or Thin CC formats. When disclosure statements or risk ratings are included at the content-object level, they will carry through when imported into a Learning Management System (LMS). Best practice is to align GenAI labeling with existing metadata fields so that LMSs can display consistent transparency information regardless of whether content originated with a publisher or educator.

Learning Tools Interoperability (LTI)

When AI-generated content is accessed through third-party tools using LTI (Learning Tools Interoperability), labeling and provenance metadata should remain visible to the end user within the provider’s environment. While LTI does not itself transmit metadata, best practice is for the content provider’s platform to surface disclosure labels (e.g., “Human-Created,” “AI-Assisted,” “AI-Generated”) in a consistent and transparent way once launched. Institutions should ensure that their LTI-connected tools support label visibility and provenance consistency—so that whether content originates in a publisher system or a local LMS, users encounter the same transparency indicators. In other words, LTI serves as the access pathway, while the content provider maintains and presents the labeling metadata.

Caliper Analytics

For tracking the use and impact of AI-generated content, Caliper provides a framework for capturing learner interactions. Institutions and providers may extend Caliper event models with fields indicating whether the content or activity was AI-generated or GenAI-assisted. This enables more granular analytics on how GenAI-labeled content is being used in practice and supports institutional oversight. Best practice is to incorporate GenAI labeling into Caliper events in a way that aligns with privacy and data governance requirements.

Together, these standards provide the infrastructure to make AI labeling and metadata portable, visible, and trustworthy across the education ecosystem. Adopting them as part of best practice ensures that transparency does not stop at the point of creation but extends through delivery, use, and evaluation.

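As a sketch of the extension approach described above, a Caliper-style event might carry a GenAI label on the object it describes. Caliper supports an extensions property, but the keys below are hypothetical and not defined by the Caliper specification.

  # Simplified sketch of a Caliper-style event whose object carries a GenAI
  # label in its "extensions" property; the extension keys are hypothetical.
  import json
  from datetime import datetime, timezone

  event = {
      "@context": "http://purl.imsglobal.org/ctx/caliper/v1p2",
      "type": "ViewEvent",
      "action": "Viewed",
      "actor": {"id": "https://example.edu/users/554433", "type": "Person"},
      "object": {
          "id": "https://example.edu/content/item-0042",
          "type": "DigitalResource",
          "extensions": {
              "genaiLabel": "GenAI-Co-Created, Human-Guided",
              "smeValidated": True,
          },
      },
      "eventTime": datetime.now(timezone.utc).isoformat(),
  }
  print(json.dumps(event, indent=2))
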
What's Next?

AI-generated content is advancing rapidly, and the education community is still at the early stages of understanding its long-term implications. The recommendations in this document reflect the best practices identified by 1EdTech members to date, but they are not static. Because both the technology and the policy landscape are evolving, these practices must be treated as living guidance.

The 1EdTech community of member experts intends to review and update this document, as needed, over the next several years. This cadence allows the community to respond to emerging research, legislative changes, new interoperability opportunities, and lessons learned from member exemplars.

Although the future of GenAI and AI-generated content in education remains uncertain, this document provides a foundation of recommendations that institutions and providers can act on now. By implementing these practices, organizations can begin building systems of transparency, accountability, and trust that are resilient enough to adapt as new challenges and opportunities emerge.

Real-World Examples

Communicate Transparently

  • Communicate transparently when AI has been used to generate content through clear labeling and disclosure.
    • Example from Unicon: Unicon defers to the client for their use cases. For one of their clients, they included a statement at the beginning of the course that GenAI tools were used as co-collaborators in the creation of the content. Unicon recently released their AI Tenets on LinkedIn, which emphasize that “we are human first, AI-enabled.”
    • Example from D2L: For D2L’s internal AI tooling, they have identified a set of AI statuses. AI content is gated until a human reviews it, though this applies only when users use their internal AI tool (see the section on AI Utilization for the list of statuses tracked in data sets).
    • Example from EU AI Act: The EU AI Act is directed towards providers of General Purpose AI (GPAI) systems.
    • Example from the state of California: California requires the labeling of AI-generated documents.

Maintain Provenance Records and/or Metadata

  • Software providers and educators should maintain provenance records and/or metadata that support trust, accountability, and portability across platforms, such as the model and version used, whether the model is self-hosted, the number of tokens used, the prompts, the date, and a link to the LLM.
    • Example from ETS: Tracking of AI-generated content is standardized across ETS programs. AI-generated content that is banked externally carries metadata information that includes Source (e.g., “Artificial Intelligence”; differentiating a human contribution from an AI-assisted contribution to the item pool), Label (identifying the AI-enabled platform from which the item was sourced), and SessionID (identifying the full AI-human conversation that led to the original draft item).

Apply Risk-Based Oversight

  • Apply risk-based oversight that differentiates between high-, medium-, and low-stakes use cases. This document is partially aligned with the EU AI Act in that its recommendations are based on risk-assessment levels of AI systems.
    • Example from Carnegie Learning: Any AI-generated content that directly influences educational outcomes (grades, placement, etc.) requires human validation by an SME.
    • EU Artificial Intelligence Act (EU AI Act): The Annex III use cases list high-risk AI systems. The EU AI Act prohibits some types of AI use and defines risk levels including:
      • Unacceptable Risk
      • Minimal Risk
      • Limited Risk
      • High Risk: The majority of the High-Risk obligations are placed on providers. There is also an EU AI Act Compliance Checker to inform providers, developers, and users if or how the EU AI Act may or may not apply to them.

Incorporate Review by the Organization’s Subject Matter Experts (SME)

  • Incorporate review by the organization’s subject matter experts (SME) for accuracy, rigor, and alignment with disciplinary and institutional standards.
    • Example from D2L: The educator/SME is encouraged to review content created by an AI engine prior to posting it in a class in their LMS.

Verify Accessibility Needs

  • Verify accessibility needs and validate AI-generated supports such as alt-text, captions, or formatting.
    • Example from Teach Access: Teach Access discloses how AI tools are used in the development of their educational content and resources. Any AI-generated content, such as alt-text, captions, and plain language adaptations, is reviewed by staff or external collaborators to ensure accuracy, accessibility, and adherence to quality standards before publication.

Establish Storage Practices

  • Establish storage practices that retain provenance, disclosures, and usage logs as appropriate to context and, where relevant, in alignment with your organization’s retention policy; a retention sketch follows the examples below.
    • Example from SameGoal: AI-generated content can be stored for seven years or longer depending on the customer’s needs.
    • Example from ETS: AI-generated assessment content is retained indefinitely.
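
A retention horizon derived from a per-customer policy might be computed as in the sketch below, where None denotes indefinite retention (as in the ETS example); the policy representation is an assumption.

    # A minimal sketch of computing a retention deadline from a per-customer
    # policy. The representation is an illustrative assumption.
    from datetime import date, timedelta

    def retention_deadline(created: date, retention_years):
        """Return the earliest deletion date; None means retain indefinitely."""
        if retention_years is None:
            return None
        # Approximation for the sketch: ignores leap days.
        return created + timedelta(days=365 * retention_years)

    deadline = retention_deadline(date(2025, 12, 15), 7)     # seven-year policy (SameGoal example)
    forever = retention_deadline(date(2025, 12, 15), None)   # indefinite retention (ETS example)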

Support Responsible Use and Integrity

Glossary

  • Accessibility Compliance: Ensuring that AI-generated content conforms to established accessibility standards and legal frameworks so that all learners, including those with disabilities, can use the content equitably.
  • AI: Artificial Intelligence. Computational systems that perform tasks traditionally requiring human intelligence, such as generating text, images, or recommendations.
  • GenAI-Generated Content: Learning resources, text, videos, images, audio, or other materials created wholly or partly with the assistance of generative AI. See §4 Implementation Framework.
  • Annotate: To add explanatory notes, metadata, or labels to clarify GenAI involvement in content creation or refinement. See §4.7 Labeling.
  • Audit Trail: A chronological record of actions taken on AI-generated content, including edits, validations, and approvals. Supports accountability and compliance. See §4.5 Storage.
  • Bias Audit: A systematic review of AI-generated content to check for cultural, demographic, or accessibility bias, ensuring fairness and inclusivity. See §4.4 Subject-Specific Concerns.
  • Compliance Lead: A designated role responsible for ensuring that AI-generated content aligns with transparency, risk, and disclosure practices. See §4.1 Best Practice Recommendations.
  • Consumer AI-Generated Content: Content created by end users such as students or educators using AI tools. See §4.2 Content Categories.
  • Content: An umbrella term referring to learning materials, assessments, multimedia, and other instructional resources. Content can be fully human-authored, AI-generated, or hybrid.
  • Disclosure Statement: A clear notice indicating that GenAI contributed to the creation or refinement of content (e.g., “GenAI-assisted” or “AI-generated”). May appear as visible labels, annotations, or metadata. See §5.2 Disclosure Statements.
  • Editorial/SME Review: Validation of AI-generated content by subject matter experts (SMEs) or editors to ensure accuracy, rigor, pedagogy, and appropriateness. See §4.3 Risk Rating Criteria and §4.4 Subject-Specific Concerns.
  • Interoperability Standards: Shared technical specifications (e.g., QTI, Common Cartridge/Thin CC, LTI, Caliper) that ensure metadata and labels for AI-generated content persist across platforms. See §6 Standards Alignment.
  • Labeling Strategy: The method by which content is annotated to indicate GenAI involvement, such as “human-authored,” “GenAI-assisted,” or “AI-generated.” Labels are tied to metadata and ensure transparency across systems. See §4.7 Labeling.
  • Learning Resources: Assignments, assessments, textbooks, lesson plans, or other instructional assets used for teaching and learning.
  • Metadata: Descriptive information attached to content that communicates its provenance, GenAI involvement, risk rating, and disclosure status. Metadata ensures transparency is portable across platforms. See §5 Metadata and Storage.
  • NIST-Aligned Risk Assessment: An evaluation of AI-generated content aligned with the NIST AI Risk Management Framework (AI RMF) to assess potential harms, biases, or inaccuracies. See §4.3 Risk Rating Criteria.
  • Opt-Out Provision: A safeguard that allows users (e.g., students or parents) to decline certain high-risk uses of GenAI, such as automated grading.
  • Provenance Record: A documented log that captures the origin and history of AI-generated content. It should include the GenAI system and version used, prompts or parameters, data sources, the role of human oversight, and revision history. Provenance records must remain attached as metadata to support transparency, compliance, and auditability. See §5.1 Provenance Tracking.
  • Public GenAI-Use Statement: An organization-level disclosure summarizing how GenAI is used across products or services, including purposes, safeguards, and validation practices. See §5.3 Public GenAI-Use Statements.
  • Risk Rating Criteria: A framework that categorizes AI-generated content into high, medium, or low risk based on its potential impact on educational outcomes. Higher-risk categories call for stricter oversight and SME validation. See §4.3 Risk Rating Criteria.
  • Transparency: The practice of clearly communicating when and how GenAI was used in content creation. Transparency builds trust among educators, administrators, and learners. See §2.1 Transparency and §4.7 Labeling.
  • Usage Log: A record of how and when GenAI was applied in content creation, including dates, context, and responsible individuals. See §4.5 Storage.
  • Vendor Vetting: The process of evaluating third-party GenAI vendors to ensure systems meet ethical, technical, and regulatory standards before adoption.
  • Version Control: A method for tracking changes to AI-generated content over time, preserving historical versions for compliance and review. See §4.5 Storage.


Revision History

Release Date Comments
December 15, 2025 The original public release of the Best Practices Document.

A. List of Contributors

The following individuals contributed to the development of this document:

Name Organization Role
Susan Haught 1EdTech Consortium, Inc. Editor
Raul Alanis Houston ISD  
Bill Bass Parkway School District  
Jason Collette Savvas Learning  
Rocco Fazzalari University of Central Florida  
Rebecca McNulty University of Central Florida  
Kristen Franklin Digital Promise  
Evelyn Galindo Carnegie Learning  
Kevin Allard Carnegie Learning  
Joseph Gehling Educational Testing Service (ETS)  
Sarah Wood Educational Testing Service (ETS)  
Richard Gibbons Anthology  
Sadie Gill Chan Zuckerberg Initiative, LLC.  
Sue Ellen Gilliland Alabama State Department of Education  
Frankey Goss Accelerate Learning  
Viktor Haag D2L Corporation  
Tammie Helmick Unicon, Inc.  
Jana Hitchcock Pennsylvania State University  
Srinivas Javangula Alabama State Department of Education  
Allan Johnson SameGoal Inc.  
Sean Joyce Unicon, Inc.  
Jun Kim Stanislaus County Office of Education  
Christine Mai Houston ISD  
Ashley McBride North Carolina Department of Public Instruction  
Rolando Méndez Teach Access  
Ashley Miller Digital Promise  
Chris Millet Pennsylvania State University  
Kate Morgan Pennsylvania State University  
Kimberly Moore Wichita State University  
David Petersen Western Governors University  
Melissa Scholtens PowerSchool Group LLC  
Tom Small PowerSchool Group LLC  
Carolyn Speer Wichita State University  
Erin Steed D2L  
Eric Stuebner Arizona Department of Education  
Charles Taylor Stanislaus County Office of Education  
Shannon Terry SAFARI Montage  
Kim Varnell WIDA  
Claude Vervoort Cengage  
Stephen Vickers EdTech for Learning  
Jaymes Walker Myers Southern New Hampshire University  
Webs Webber Edmentum  
Tammy Yasrobi University of British Columbia  
Steven Brawn Oracle Corporation  
Nora Murray 1EdTech Consortium, Inc.  
Sarah Barth 1EdTech Consortium, Inc.  
Tom Hoffmann 1EdTech Consortium, Inc.  
Tim Couper 1EdTech Consortium, Inc.  
Kevin Lewis 1EdTech Consortium, Inc.  
Xavi Aracil 1EdTech Consortium, Inc.  

1EdTech™ Consortium, Inc. ("1EdTech") is publishing the information contained in this document ("Specification") for purposes of scientific, experimental, and scholarly collaboration only.

1EdTech makes no warranty or representation regarding the accuracy or completeness of the Specification.

This material is provided on an "As Is" and "As Available" basis.

The Specification is at all times subject to change and revision without notice.

It is your sole responsibility to evaluate the usefulness, accuracy, and completeness of the Specification as it relates to you.

1EdTech would appreciate receiving your comments and suggestions.

Please contact 1EdTech through our website at www.1edtech.org

Please refer to Document Name: 1EdTech AI-Generated Content Best Practices v1.0

Date: December 15, 2025