Friday, December 12, 2014
Open Education, MOOCs, and Opportunities
Reusable Learning Resources
The initial development of online learning technology began at scale with the development of the learning management system (LMS) in the mid-1990s. These systems were modeled on distance education resources such as programmed texts and course workbooks, designed originally by organizations such as the Open University, in Britain, and Canada's Athabasca University. Online courses were developed according to a protocol refined over 20 years of experience, combining learning materials, activities and interaction, and assessments.
Technological systems based on these designs were first developed by the aviation industry, in the form of the AICC (Aviation Industry Computer-Based Training Committee) specification. These standards were adapted by the Instructional Management Systems (IMS) consortium, a collection of academic and corporate training providers. The consortium defined metadata standards describing small and reusable resources, first called 'learning objects' by Autodesk's Wayne Hodgins. These standards, called Learning Object Metadata (LOM), enabled the development of resources that were reusable, discoverable, and sharable.
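To make the idea concrete, the following is a minimal sketch of what a LOM-style metadata record looks like in practice. The element names are drawn from the 'general' category of the LOM standard, but the record is heavily simplified and the title, language, and description values are invented for the example.

```python
# Illustrative sketch: building a minimal LOM-style metadata record.
# Only a few 'general' category elements are shown; a full LOM record
# has many more categories (technical, educational, rights, etc.).
import xml.etree.ElementTree as ET

def make_lom_record(title, language, description):
    lom = ET.Element("lom")
    general = ET.SubElement(lom, "general")
    ET.SubElement(general, "title").text = title
    ET.SubElement(general, "language").text = language
    ET.SubElement(general, "description").text = description
    return ET.tostring(lom, encoding="unicode")

record = make_lom_record(
    "Introduction to Photosynthesis", "en",
    "A short reusable lesson on photosynthesis.")
print(record)
```

A record like this is what made learning objects discoverable: a repository could index the metadata and return matching resources without inspecting the content itself.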
Major LMS companies such as WebCT, Blackboard, Angel, Saba and Desire2Learn all adopted standards originally developed by IMS. In addition to LOM, the consortium designed Content Packaging, to bundle sets of learning objects for storage and delivery, and sequencing and design specifications, to organize them into courses. In addition, an organization called Advanced Distributed Learning (ADL), working for the U.S. military, developed the Sharable Content Object Reference Model (SCORM) to both describe learning resources and enable them to exchange simple messages with LMSs. SCORM 2004 remains the dominant learning resources specification in both the corporate and academic learning technology marketplace to this day.
Additional technologies to support resource discovery and reuse have been developed since then. Among them were IMS Common Cartridge (CC), to bundle course content for exchange between LMSs, and Learning Tools Interoperability (LTI), to enable LMSs to launch external applications such as chat or discussion engines. These were supported in turn by application-specific application programming interfaces (APIs) and specialized software, such as Blackboard's 'Collaborate' synchronous conferencing system.
Open Education Resources
Alongside the development of educational technologies is an equally important movement to support open educational resources. This movement predates the popularization of the world wide web in the mid-1990s, as seen for example in Project Gutenberg, an open access archive of public domain works of classic literature. During this time, nascent free software licenses were also developed, first to support 'freeware' applications distributed across pre-internet electronic bulletin boards, then to enable the distribution of online gaming libraries, such as LPMud.
Free software was formalized with the development by Richard Stallman of the GNU General Public License (GPL) in 1989 (ref). This license not only promoted the free use of software, it also guaranteed access to the original source code of the application, and was 'viral', meaning that any new work produced using the source was required to carry the same license. The license did not prohibit commercial use of the software; however, the viral clause made it unattractive to companies that wanted to develop proprietary applications, so licenses without the viral clause, such as the Lesser GPL and the Berkeley Software Distribution (BSD) license, were later developed.
Content was treated differently by Stallman. To support the free distribution of software documentation and support materials, the GNU Free Documentation License (GFDL) was developed. It was similar to the GPL, but in order to protect the integrity of documentation it allowed authors to designate invariant sections that could not be modified in derivative works. Learning content required a more flexible model, which was provided first by David Wiley, with the Open Content License, and then by Lawrence Lessig, with Creative Commons. Both of these licenses allowed for the free reuse and redistribution of the resource, but with conditions.
The Creative Commons (CC) license has become widely used; today thousands of libraries and millions of resources use the license. It became successful because it offered flexibility to content authors and publishers. By offering a set of optional clauses, it allowed authors to specify several things:
- By using the Non-Commercial (NC) clause, authors could restrict copying and reuse of the resource to non-commercial purposes only
- By using the Attribution (BY) clause, authors could require that any reuse identify by name (and typically URL) the original author of the resource
- By using the Share-Alike (SA) clause, authors could require that any subsequent copying and reuse of the content carry the same license as the original, as in the 'viral' GPL
- By using the No-Derivatives (ND) clause, authors could require that only exact copies of the original be made and redistributed
In 2002, UNESCO, in an examination of the needs of developing nations and the potential offered by the free distribution of digital learning resources, developed the concept of the 'Open Educational Resource' (ref).
In a related but separate initiative, the Massachusetts Institute of Technology (MIT) launched what it called OpenCourseWare (OCW). This project, funded by foundations such as the William and Flora Hewlett Foundation, involved the conversion and distribution of all MIT course materials on the internet. Though not the equivalent of a full MIT education, these resources have been visited by millions of people around the world over the last 12 years. OpenCourseWare spawned a number of additional projects, including the OpenCourseWare Consortium and the Open University's OpenLearn initiative.
Content Syndication Networks
The concept of content syndication originates with the newspaper industry. The idea is that a news story published in one newspaper might be of interest to readers of other newspapers, and so the same story, after its original publication, is distributed to these other newspapers as well. In time, press agencies such as Associated Press and Reuters formalized the syndication of news content and provided some original news coverage as well.
In 1998 Netscape, creator of the first commercial web browser, developed a web site called Netcenter and encouraged contributors to 'syndicate' their content in it. This was supported using a technology called Rich Site Summary (RSS), developed at Netscape and carried forward by Dave Winer. RSS went through several early versions: RSS 0.91, the first production version; RSS 1.0, which used a web technology called the Resource Description Framework (RDF); and RSS 2.0, known as 'Really Simple Syndication', which was adopted by blog engines such as LiveJournal and Blogger. A parallel standard, called Atom, was also used to support blog post syndication, and additionally defines a protocol for uploading content, including comments and new blog posts. These specifications brought content syndication to the online publishing community.
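The mechanics of RSS are simple enough to sketch in a few lines. Below is a minimal, invented RSS 2.0 feed and the standard-library code a reader or aggregator might use to extract its items; a real aggregator would fetch the XML over HTTP rather than embed it as a string.

```python
# A minimal sketch of reading an RSS 2.0 feed with the standard library.
# The feed content is invented for illustration.
import xml.etree.ElementTree as ET

feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Course Blog</title>
    <item>
      <title>Week 1: Introductions</title>
      <link>http://example.com/week1</link>
    </item>
  </channel>
</rss>"""

# Each <item> in the <channel> is one syndicated story or post.
channel = ET.fromstring(feed_xml).find("channel")
items = [(i.findtext("title"), i.findtext("link"))
         for i in channel.findall("item")]
print(items)  # [('Week 1: Introductions', 'http://example.com/week1')]
```

It is this simplicity, a flat list of titled, linked items, that allowed RSS to spread from portals to blogs and beyond.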
In education and academia, meanwhile, a parallel initiative, the Open Archives Initiative (OAI), was created. This followed the calls of academics (in documents such as the Berlin Declaration and the Budapest Open Access Initiative) for the free distribution of academic content. The technology supporting open archives, the OAI Protocol for Metadata Harvesting (OAI-PMH), creates lists of academic journal articles in repositories for retrieval and distribution. These repositories now number in the thousands, holding millions of articles, as listed in the Registry of Open Access Repositories (ROAR). This development has been paralleled by the 'Open Access' movement led by Stevan Harnad and Peter Suber, who promote the creation of institutional services rather than reliance on commercial publishers.
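An OAI-PMH harvester works by issuing simple HTTP requests against a repository's base URL. The sketch below shows how such a request is formed; the repository address is invented, while 'verb' and 'metadataPrefix' are standard OAI-PMH query parameters and 'oai_dc' requests Dublin Core metadata.

```python
# A sketch of how an OAI-PMH harvester forms its requests.
# A real harvester would fetch this URL and parse the XML response,
# following resumption tokens for large result sets.
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec  # optionally restrict to one collection
    return base_url + "?" + urlencode(params)

url = list_records_url("http://repository.example.org/oai")
print(url)
```

Because every repository answers the same small set of verbs (ListRecords, GetRecord, Identify, and so on), a single harvester can aggregate metadata from thousands of archives.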
Content syndication has been behind some of the most innovative developments on the world wide web. Microcontent services such as Twitter and Facebook originally supported RSS. Millions of people have uploaded photos to Flickr and videos to YouTube, many licensed under Creative Commons and shared with RSS. What we now know as the social web, and the social network, evolved from these origins.
Learning Technologies at the National Research Council
In 2001 members of the National Research Council's e-Learning Research Group joined a pan-Canadian network of learning resource providers called eduSource. This initiative, a three-year $10 million project, brought together institutions such as Athabasca University and the Télé-université du Québec (TÉLUQ) with colleges, corporations and other government partners. The initiative designed and built software to support a pan-Canadian network of learning object repositories and developed a Canadian version of LOM called CanCore.
In addition to participation on the Steering Committee, NRC staff (including Stephen Downes and Rod Savoie) drafted the organization's core philosophical principles. Together with several partners in Atlantic Canada, the NRC developed the core framework for licensing and authentication, known as Distributed Digital Rights Management (DDRM). In addition, an alternative content syndication framework, called Distributed Learning Object Repository Network (DLORN) was developed.
The National Research Council also engaged in a pioneering content recommendation project with a New Brunswick company, Mosaic Technologies. Working with the Semantic Web group located in Fredericton, the eLearning group developed a product called Sifter/Filter. This product enabled the company to describe resources with metadata in such a way that properties of the learning resource could be matched with the needs of the course developer. Mosaic was eventually acquired by a British technology company while the core technology was rebranded as a music recommendation service, RACOFI, and commercialized.
In 2003 NRC researcher Stephen Downes developed the concept of e-Learning 2.0, which employs content syndication technology and social media to support learning. Learning, he argued, would be best supported through social networks and the development and free exchange of learning resources, enabling students to add their own contributions and interactions to the instruction provided by schools, employers and universities. Working with George Siemens in 2004 and 2005, he developed the learning theory known as Connectivism, built on the idea that learning takes place not just in an individual person but across a network of connections. This would be supported by open educational resources, and to this end the NRC defined a set of sustainability models for the OECD in 2006.
The National Research Council's e-Learning group continued development work in collaboration with the major LMS company Desire2Learn and the Université de Moncton. The SynergiC3 project helped D2L implement a collaborative learning content development system in its core product. NRC contributions to the technology included content harvesting and syndication technology, a semantically supported workflow engine, a data representation format called 'resource profiles', and an upgraded version of distributed digital rights management. NRC patent applications resulting from this work have been incorporated into Desire2Learn's core technology.
Massive Open Online Courses
In 2008 Stephen Downes and George Siemens developed the first Massive Open Online Course. This course, called Connectivism and Connective Knowledge (CCK08), was designed to explain and expand the learning theory they had been developing since 2004. At a Desire2Learn conference in Memphis, Tennessee, Downes and Siemens determined that the online course they were developing should emulate the structure of the theory they were describing in connectivism - that is, it should be an open course, designed as a network of connected parts, designed to facilitate communication using social networks and the sharing of learning resources.
The course was launched in September, 2008, using Moodle, MediaWiki, WordPress, and an application designed by Downes called gRSShopper. This application, the same employed in the construction of DLORN, is used to author and syndicate Downes's e-learning newsletter, OLDaily. The course was sponsored by the University of Manitoba as part of its Certificate in Adult Education (CAE) and had 24 paying students enrolled. It was also opened to the general public and attracted 2,200 registrations. Dave Cormier and Bryan Alexander coined the term 'MOOC' - Massive Open Online Course - to describe this new form of online learning.
Between 2008 and 2014 the NRC led or was a part of the following MOOCs:
- CCK08 - the first course, conducted over 10 weeks in the fall of 2008. Weekly synchronous sessions were conducted using a conferencing system called Elluminate. This course proved the concept of the networked course and led to the use of distributed content networks to incorporate student contributions into the core course content. By the end of the course 170 students were contributing RSS feeds, while 1,800 participants were subscribed to the course content newsletter.
- CCK09 - the second version of the same course saw many of the original participants return to help teach, demonstrating that the model could be duplicated, with more robust student interaction in the second iteration.
- Critical Literacies - developed along with researchers Helene Fournier and Rita Kop, this course attempted to equip learners with the core skills they need to become proficient participants in MOOCs. It was designed in response to criticisms that participants must already be literate and educated in order to benefit from the instruction.
- Personal Learning Environments, Networks and Knowledge (PLENK 2010) - this course examined the idea of the personal learning environment (see below) and the creation of self-organizing learning communities.
- Change 2011 - this was the longest course, at 30 weeks, running through the fall of 2011 and the spring of 2012. It proved that a MOOC can be run for a long period of time with the same core group of participants, and that new participants can enter the course at any time.
- CCK11 - the third version of the first course included a test of the BigBlueButton (BBB) open source conferencing system. Because Elluminate had been sold to Blackboard, becoming Blackboard Collaborate, an API connecting gRSShopper and BigBlueButton was authored. However, the number of users proved to be too much for BBB.
- Course on the Future of Higher Education (CFHE) - organized in cooperation with Athabasca University, EDUCAUSE, the Chronicle of Higher Education, the Gates Foundation, and Desire2Learn. This course demonstrated that a traditional LMS could be used to support a connectivist-style MOOC, albeit with an integration with gRSShopper. To support this course an API between gRSShopper and D2L was developed.
- MOOC-REL - this course was offered in French and covered the topic of Ressources Éducatives Libres (REL), the French term for open educational resources. It was offered in cooperation with the Organisation internationale de la Francophonie (OIF) and the Université de Moncton. MOOC-REL also involved the production of a series of videos and the development of content for the OIF.
Through six years of MOOC development the National Research Council has gained considerable expertise over numerous iterations. NRC staff have published numerous articles documenting the forms of participation and interaction that take place in MOOCs.
Following the NRC's work, the MOOC became much more widespread. Most notably, Sebastian Thrun and Peter Norvig launched the Stanford Artificial Intelligence MOOC, which attracted 170,000 enrollments. Although the Stanford MOOC is described as a distinct form of MOOC, it is noteworthy that it has the same origins as the connectivist MOOCs. The Stanford authors reported that they were inspired by Salman Khan, who launched Khan Academy, a series of freely accessible video lessons offered on YouTube. Both the connectivist MOOCs and the Stanford MOOCs depended crucially on free and open resources.
The Structure of a MOOC
Both forms of MOOC, the connectivist MOOC (cMOOC) and the Stanford MOOC (xMOOC), are based around a common core of content. This content serves different roles in the two types of MOOC, as discussed below.
In both types of MOOC, weekly synchronous events are held, and both xMOOCs and cMOOCs record these as videos. In addition, both offer supplementary materials, such as additional videos, articles, and learning activities. In some forms of cMOOC, such as the ds106 MOOC conducted by Jim Groom of the University of Mary Washington, collaborative and creative activities are the core of the course. In xMOOCs, adaptive learning technology supports instruction; probably the most advanced example can be found in Codecademy, a system that allows students to teach themselves programming.
The difference between the cMOOC and the xMOOC is in the distributed nature of the course. While both types of MOOC involve the creation and distribution of open educational resources, the cMOOC in addition draws on student participants to develop and distribute their own resources, and to find related resources from around the internet and incorporate them into the course. For this reason a cMOOC requires a much less intensive body of resource production, and can be developed for a much lower budget. Even so, both forms of MOOC require the creation of some core content, to serve as the focus around which subsequent interactions and activities take place.
Additionally, the cMOOC draws on self-organizing social networks in a way the xMOOC does not. While it is true that informal learning groups, such as in-person meetings, online conversations, Facebook and Twitter groups, and other social networks form around the xMOOC, they are not integrated into the structure of the course itself (this lack of integration is so profound it has led some to propose a 'MOOC 2.0', which is essentially the cMOOC design). The cMOOC employs content syndication technologies to collect resources and conversations produced by course participants around the world, and to build these contributions into the structure of the course itself.
This results in a course that is not only more relevant to participants, it also results in a course that is a dynamic entity, growing and changing over the years as new resources are created and added. Unlike traditional courses, including xMOOCs, which require redevelopment after a shelf life of three to five years, a MOOC can be rerun indefinitely, as the four iterations of the CCK course have shown.
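The syndication pattern at the heart of the cMOOC can be sketched simply: harvest items from many participant feeds and merge them into a single course digest. The sketch below is not gRSShopper's actual code; the feeds are invented, and only the standard library is used.

```python
# Simplified sketch of cMOOC-style aggregation: each participant
# publishes an RSS feed, and the course merges their items into one
# digest, which could then be republished as a course newsletter.
import xml.etree.ElementTree as ET

participant_feeds = [
    """<rss version="2.0"><channel><item>
         <title>My week 1 reflection</title>
       </item></channel></rss>""",
    """<rss version="2.0"><channel><item>
         <title>Notes on connectivism</title>
       </item></channel></rss>""",
]

digest = []
for feed in participant_feeds:
    for item in ET.fromstring(feed).iter("item"):
        digest.append(item.findtext("title"))

print(digest)  # ['My week 1 reflection', 'Notes on connectivism']
```

Because new feeds can be added to the harvest list at any time, the course content grows as participants contribute, which is what makes the cMOOC a dynamic rather than a fixed body of material.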
The xMOOCs were deployed with limited testing, using commercial software developed and funded without reference to earlier technology, including content syndication and social media. As a result, platforms such as Udacity, Coursera and edX - all of which have commercial intent and all of which were derived from the same Stanford AI model - involve only the presentation of video material, adaptive learning exercises, and testing. It is this lack of integration with a student community, and the resulting inflexibility of the xMOOC, that have caused large numbers of students to drop out and led Sebastian Thrun to criticize his own technology.
Next Generation: Personal Learning Environments
In two conferences in Manchester, UK, in 2005 (Alt-C) and 2006 (PLE), the concept of the personal learning environment was first defined. The core idea of the PLE was to take a student-centred point of view for the provision of learning resources and services.
As described above, the first generation of learning technologies centred around the development of learning objects. These early learning materials, which could be complex and engaging, were packaged and distributed through learning management systems. In order to access a learning object, a student would register as a student in a course on an LMS. Even Moodle, which is an open source learning management system, requires that a student register on the service and sign up for a course in order to access a learning resource (or even to participate in a course discussion).
This form of content management does not mesh well with open educational resources, which are intended to be freely accessed and shared. This is why MIT, when developing OpenCourseWare, also developed a technology called DSpace, which implements the Open Archives Initiative protocol: it needed a way to distribute academic resources without requiring that readers register and enroll.
In addition, the use of the LMS disabled portability of content. Artifacts created by students and stored in LMS ePortfolios, and comments and interactions among students, could not typically be shared beyond the LMS. Moreover, a student who was enrolled in one LMS could not take advantage of content or assets stored in another LMS. Learning records were also locked inside institutional LMSs, making transfer or recognition of prior learning difficult or impossible. Finally, many services in common use outside an LMS - including blogging tools, personal calendars and email, task managers, microcontent and social media - could not be used inside the LMS. The student's social world and academic world remained apart.
The PLE was developed in response to the need to facilitate interoperability between these different systems. As originally conceived, it was more of a concept than a digital application - the idea was that an open LMS would enable the deployment of the rest of the internet's technologies in support of learning. This was the basis for the original design of Instructure's Canvas, which supports greater interaction in and out of the learning management system. Blackboard and D2L have both developed open versions of their software for free courses as well. Data portability, however, remains a challenge.
In 2010 the National Research Council embarked on an internal PLE project called Plearn. This was a proof-of-concept prototype demonstrating that an environment could be built that connected traditional learning services, open archives and repositories, and social networking software and services. It also demonstrated that the personal learning environment was pedagogically feasible - that is, that the instructional and social supports, known as 'scaffolds', necessary to enable learning could also operate in a personal learning environment.
The Plearn project also demonstrated some of the newer technologies for interoperability. It was designed around a JavaScript Object Notation (JSON) framework, a data representation system that can replace RSS and be used for both content syndication and application programming interfaces (APIs). This allows different software applications to work much more closely together and ultimately enables the replacement of SCORM with more robust messaging between components. A related technology, OAuth, which allows a person using one online social networking tool, such as Facebook, to authorize another, such as WordPress, to share content, can enable an individual learner to combine the functions of a learning management system with those of a social network.
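The shift from RSS to JSON can be illustrated briefly: the same information an RSS item carries can be expressed as a JSON object that an API returns directly, with no XML parsing required. The field names below are illustrative, not a standard.

```python
# A sketch of JSON-based syndication: one feed item represented as a
# plain JSON object, serialized for transport and restored on receipt.
import json

item = {
    "title": "Week 1: Introductions",
    "link": "http://example.com/week1",
    "author": "A. Participant",
}

# An API would return this payload; a client restores it in one call.
payload = json.dumps({"items": [item]})
restored = json.loads(payload)
print(restored["items"][0]["title"])  # Week 1: Introductions
```

Because JSON maps directly onto the data structures of most programming languages, the same format serves both syndication and API messaging, which is what allows applications to interoperate more closely than RSS alone permitted.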
Learning and Performance Support Systems
When the National Research Council reorganized in 2012-2013 it dissolved individual project-based groups and instead focused on a smaller set of research programs. One of these, structured around the idea of the personal learning environment, is the learning and performance support system (LPSS). This is a five-year $20 million initiative to advance and deploy core PLE technologies while at the same time supporting commercial development of related technologies and meeting key policy objectives, such as increasing employability in disadvantaged populations.
The LPSS does not replace the LMS or the MOOC. Rather, these continue to be provided by employers, academic institutions, foundations and governments in order to serve important learning, training and development environments. These agencies have the option of creating and making available open educational resources, drawing on the relevant community to locate and create open educational resources, or to supplement these materials with commercial content and custom software applications. The personal learning environment is designed as a means to enable an individual learner, student or staff member to access these resources from multiple providers.
Hence, just as a connectivist MOOC is based on the concept of content syndication to bring together resources from multiple providers around a single topic, LPSS employs the same technology, called the resource repository network (RRN), to allow an individual to obtain several parts of his or her education from multiple providers. At its simplest, an LPSS can be thought of as a viewing environment for multiple MOOCs. In this way, an LPSS is much more like a personal web browser than it is a resource or a service.
Like a browser, each person has his or her own instance of LPSS. Though resources can be shared, each LPSS user has his or her own list of bookmarks and resources. Because each LPSS is individually owned, it can also act as a personal agent for the student. For example, by providing credentials to remote systems (using technology such as OAuth), it can eliminate the need to register at multiple services. An LPSS can therefore serve as an important place for a student to store his or her personal learning records. A JSON-formatted data exchange specification called xAPI (the Experience API), created by ADL, makes it possible for an LPSS to communicate with learning resources directly, and with an LMS or a MOOC. This creates, finally, a personal and portable learning record.
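A minimal xAPI statement looks like the following: an actor-verb-object record that an LPSS could send to a learning record store. The field names follow the ADL xAPI specification, but the learner, activity URL, and verb choice here are invented for the example.

```python
# A minimal xAPI ("Experience API") statement: who did what to which
# activity. A learning record store would receive this as JSON over HTTP.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.com",
              "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/courses/cck08/week1"},
}

print(json.dumps(statement, indent=2))
```

Because every statement carries the same actor-verb-object shape regardless of where the learning happened, records from an LMS, a MOOC, or a simulator can all accumulate in one personal, portable record.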
Like the original concept of the PLE, the LPSS envisions a user employing a number of online services to create and share content. For example, a user (who may be a student, an employee, or even an expert or a professor) may wish to store videos on YouTube, photographs on Flickr, and documents and spreadsheets on Google Docs. The LPSS should provide access to, and synchronize content among, a variety of online storage media, such as Dropbox or Cubby. A person can even host custom applications on Windows Live or Amazon Web Services. Data exchange services, in addition to authentication, are required to support this.
It is commonplace today to depict online learning as something that takes place at a computer screen. With an increasing number of people using smartphones in their everyday lives, however, mobile deployment is also important. Beyond that, an LPSS envisions a world where learning resources are available outside the domain of traditional computing environments.
One application developed with LPSS, for example, is the Multiple Interactive Trainer (MINT), a firearms training system used by military and police. The objective of LPSS is to enable the exchange of information between the MINT trainer and the LPSS application, making the results of sessions available from one day to the next, or allowing a person to report training results to an associated LMS.
A similar application is being developed by the NRC's medical devices portfolio. Medical simulation systems that emulate the look and feel of actual humans are already deployed in hospitals around the world (including Riyadh). The medical devices team is working with LPSS to exchange xAPI data and to support training scenarios with external applications.