On Monday, August 15, at the 1EdTech quarterly meeting at Utah Valley University, the leaders of the Advanced Distributed Learning (ADL) initiative, responsible for the Experience API (xAPI), came together with the leaders of the Caliper Learning Analytics Framework (Caliper) from 1EdTech. ADL has joined as a Contributing Member of 1EdTech and the parties are committed to exploring a unified path for xAPI and Caliper. This may mean alignment at some level or potentially even convergence.
There were roughly 50 experts in the room for the entire day on August 15, and I believe it is fair to say that both ADL and 1EdTech were very pleased with the participants' eagerness to cooperate and the depth of the discussion. Presentations from the meeting are posted here, and a synopsis of the comparison should be posted soon.
At the next 1EdTech quarterly meeting, November 7-10, 2016, at Arizona State University, there will be a follow-up session on Wednesday, November 9, open to the public (registration required). I expect it to be another great session that should lead to some tangible prioritization and next steps. On November 8 there will also be a day-long Summit on Creating an Educational Analytics Ecosystem.
1EdTech and ADL are committed to collecting and acting on public input as the process moves forward. We are likely to create a public forum to encourage the community to comment and also to help crosswalk the information models. Stay tuned to ADL and 1EdTech announcements and Twitter feeds (the 1EdTech Twitter feed is @LearningImpact). There is currently a survey here for those who have views on the potential convergence of the specifications, if you would like to participate.
I attended the entire August 15 session and would like to provide my sense of the big picture. I am not a technical expert, but I have a long history in the application of standards, including those that evolved into SCORM, and of course the dramatic growth in adoption of 1EdTech standards in the education sector over the last 10 years.
I am extremely impressed with both the xAPI and Caliper work. While they are perceived by many as "solving the same problem," my own take is that this perception exists largely because there is still quite a lot of variability in the Learning Analytics field in general. Thus, almost any set of technical work that deals with transcribing, sending, logging, and analyzing data during any form of e-learning is considered "learning analytics." From my perspective, while xAPI and Caliper certainly have some overlap, they are largely complementary at this phase in their respective evolution.

At the risk of over-simplifying things, my impressions of the strengths of each specification are captured in the table above (while there are four "strengths" shown for each specification, they are not meant to be compared along four dimensions; they are simply lists of the primary strengths). The strength of xAPI is as a way for a single application to log freeform natural-language statements and for that data store to be retrievable by the logging entity. The strength of Caliper is as a way to aggregate agreed-upon events across a set of applications to enable processing across the aggregated data.
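To make that contrast concrete, here is a minimal sketch of the two payload shapes. The field names are loosely modeled on the public xAPI and Caliper documentation; exact vocabularies, context URIs, and required fields vary by version, so treat this as illustrative rather than normative.

```python
# Illustrative only: field names loosely follow public xAPI/Caliper docs
# and may not match any particular version of either specification.

# xAPI: a single application logs a freeform "actor-verb-object" statement
# to its Learning Record Store (LRS) and can later query that store back.
xapi_statement = {
    "actor": {"name": "Pat Learner", "mbox": "mailto:pat@example.edu"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.edu/activities/quiz-7",
        "definition": {"name": {"en-US": "Quiz 7"}},
    },
}

# Caliper: applications emit pre-agreed, typed events (a fixed vocabulary of
# actions and entity types) so data from many tools can be aggregated and
# processed together downstream.
caliper_event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",  # assumed version
    "type": "AssessmentEvent",
    "actor": {"id": "http://example.edu/users/pat", "type": "Person"},
    "action": "Submitted",
    "object": {"id": "http://example.edu/assessments/quiz-7", "type": "Assessment"},
    "eventTime": "2016-08-15T17:00:00.000Z",
}
```

The structural difference mirrors the strengths above: the xAPI statement reads almost like a natural-language sentence, while the Caliper event leans on a shared vocabulary of types and actions so that events from different products mean the same thing when aggregated.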
It is also very clear that while neither specification "requires" the body of work from which it came (SCORM for xAPI; 1EdTech standards such as LTI and OneRoster for Caliper), these foundations greatly influence each. For instance, in the case of Caliper, the co-existence of the other standards in an implementation means that Caliper can simply complement the slew of outcomes and context data that is already flowing via LTI. Again, because the Learning Analytics field is rather loosely defined at this point, with widely varying "use cases," one can easily get confused about whether one even needs Caliper or should just be using LTI to its fullest potential (for instance, LTI can already send data to an endpoint that is not the LMS).
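As a rough illustration of that complementarity, the sketch below shows an analytics event that simply reuses identifiers an LTI 1.x launch already supplies. The context_id and resource_link_id parameters are standard LTI launch data; the event fields themselves are illustrative, Caliper-style names rather than a normative example.

```python
# Hypothetical sketch: tying analytics events to context already delivered by LTI.
# lti_launch stands in for parameters a tool receives on a standard LTI 1.x launch.
lti_launch = {
    "user_id": "4f1a2b",
    "context_id": "course-bio-101",       # the course/section in the LMS
    "resource_link_id": "assignment-42",  # the specific placement/link
}

# A Caliper-style event (illustrative field names) can point back at those same
# identifiers, complementing rather than duplicating the context data that the
# LTI launch already carries.
analytics_event = {
    "type": "NavigationEvent",
    "actor": {
        "id": f"http://example.edu/users/{lti_launch['user_id']}",
        "type": "Person",
    },
    "action": "NavigatedTo",
    "object": {
        "id": f"http://example.edu/resources/{lti_launch['resource_link_id']}",
        "type": "DigitalResource",
    },
    "group": {
        "id": f"http://example.edu/courses/{lti_launch['context_id']}",
        "type": "CourseSection",
    },
    "eventTime": "2016-08-15T17:05:00.000Z",
}
```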
I will also note that in the 1EdTech Communities of Practice relating to data and analytics, we are seeing a very high-priority need to simply be able to see all the data visualizations in one place. The level of data processing may be fairly minimal, but the humans in the equation need a better way to integrate visualizations that may already be available from the various products they are using.
The bottom line, in my humble opinion, is that as we move forward on alignment and/or convergence of xAPI and Caliper, we need to consider the use cases more carefully and, in essence, better define the "categories" or "scenarios" of Learning Analytics. While that may seem like a step backwards, I think we need more clarity. The good news is that I don't believe we will have any problem doing so, given the wealth of actual implementations we are working with in 1EdTech and ADL.
Our goal in 1EdTech is to enable serious school districts, higher ed institutions, states, and nations to implement high-impact learning analytics use cases and to enable a plug-and-play ecosystem of products that provide the data needed to address those use cases. Either specification is good enough to begin doing some work. But in order to do what we need to do, we need a very large interoperable ecosystem of products, users, and researchers that can all invest together to reach the understanding we wish to collectively achieve. I see elements of xAPI and elements of Caliper as both being essential to that goal.
But, the education sector is different. While historically there has been a steady pattern of what I call the "alien invasion" theory of edtech adoption, there is huge potential for institutions that can work together to shape the ecosystem, and therefore the platform strategies of suppliers, going forward. Indeed, we are at a period in time right now where not only institutional leaders, but leading suppliers are largely agreeing that vendor-specific ecosystems are NOT the way forward in the education sector. No single vendor can provide the diversity of digital resources, tools, apps and platforms. And no single vendor can provide the sales channel to dominate education revenue generation.
Finally, I will leave you with a simple idea and figure (shown here) to help illustrate the road we are on in terms of a maturity model of edtech products and the evolution toward next generation digital learning environments (NGDLE). At the base level, an educational institution should be able to support the use of technology for teaching and learning. This is stuff like BYOD and Google Apps for Education or Microsoft Office 365. The next level up is enhancing productivity using technology. Herein lies the success of LMSs and many other tools and technologies. However, one should not think it trivial to get to this second/middle level, because there are loads of examples of digital tools, and a lack of integration among those tools, that have made life less productive for teachers and students. The top tier is emerging applications that have the power to improve student achievement and learning outcomes. Some of these are new product categories and others are existing products that are used in ways that clearly support better outcomes. But the key point is that for an NGDLE, the expectations are evolving from the perspective of the customers: the students, the faculty, the institutions, and society in general.
The figure here shows what I was able to ascertain. Apologies to all concerned that it is dated now and may therefore look different today.
For next-gen learning apps, the objective is to enable better information outputs to understand progress, further personalize the learning experience, and, in general, understand the usage of various digital resources. Shown in the figure are some categories of potential outputs (on the right-hand side) that are discussed further here:
#5 Creating an EdTech ecosystem leadership community that will lift the sector in the short term and is likely to endure in the long term. This is really the most important impact that 1EdTech is having, as I mentioned in a recent EDUCAUSE Review article.
The reason I say "once again" is that the Lego metaphor has been called upon many times since the early days of e-learning (the last half of the 1990s), when it was used to describe reconfigurable "learning objects" that could potentially be chained together to meet personal learning preferences and goals. There was a lot of coverage and development around the learning objects concept; some magazines and articles that I was personally involved with are shown here.
A lot of evolution has occurred in the last 20 years. Of course, the learning "platform" (software that enables a learning experience and environment beyond simple interaction with content) has become a prevalent component in education. But perhaps the most important evolution has been from "content that moves around from system to system" to "content that is hosted and accessible via a web interface." That latter category of content might also be called a web application, or just "an app." An app typically contains its own "learning platform" of some sort, meaning the web application is typically intertwined with the content and not as broad in its ability to support "LMS-like" features.
It should also be noted that some things HAVE NOT changed much since the first use of Legos in e-learning. One thing that is very important to keep in mind with respect to the vision of a user-configurable NGDLE is that even though the concept of fitting together learning objects from different sources sounds good, it is nearly impossible to do in any automated fashion. There has been success in configuring personalized learning paths from content modules from a single source in an automated fashion (as in an adaptive learning application), but generally all attempts at automating content reconfiguration across suppliers have fallen short, because stuff has to be designed to really work well together.
