Friday, December 12, 2014

Open Education, MOOCs, and Opportunities

Reusable Learning Resources

Online learning technology first developed at scale with the learning management system (LMS) in the mid-1990s. These systems were modeled on distance education resources such as programmed texts and course workbooks, originally designed by organizations such as the Open University, in Britain, and Canada's Athabasca University. Online courses were developed according to a protocol refined over twenty years of experience, combining learning materials, activities and interaction, and assessments.

Technological systems based on these designs were first developed by the aviation industry, in the form of the AICC specification (Aviation Industry Computer-Based Training (CBT) Committee). These standards were adapted by the Instructional Management Systems (IMS) consortium, a collection of academic and corporate training providers. The consortium defined metadata standards describing small, reusable resources, first called 'learning objects' by Autodesk's Wayne Hodgins. These standards, called Learning Object Metadata (LOM), enabled the development of resources that were reusable, discoverable, and sharable.
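A LOM record is an XML document organized into categories. The following sketch, using only Python's standard library, builds a simplified record with a few elements from the LOM 'general' category; the resource described is hypothetical, and real LOM records carry many more categories (technical, educational, rights, and so on).

```python
# Illustrative, simplified LOM-style metadata record for a hypothetical
# learning object, built with Python's standard library.
import xml.etree.ElementTree as ET

lom = ET.Element("lom")
general = ET.SubElement(lom, "general")
ET.SubElement(general, "title").text = "Introduction to Photosynthesis"
ET.SubElement(general, "language").text = "en"
ET.SubElement(general, "description").text = "A short reusable lesson."
ET.SubElement(general, "keyword").text = "biology"

# Serialize the record so it can be stored in, or harvested from, a repository.
record = ET.tostring(lom, encoding="unicode")
print(record)
```

It is records like this, attached to each resource, that made learning objects discoverable and sharable across repositories.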

Major LMS companies such as WebCT, Blackboard, Angel, Saba and Desire2Learn all adopted standards originally developed by IMS. In addition to LOM, the consortium designed Content Packaging, to bundle sets of learning objects for storage and delivery, and sequencing and design specifications, to organize them into courses. In addition, an organization called Advanced Distributed Learning (ADL), working for the U.S. military, developed the Sharable Content Object Reference Model (SCORM) to both describe learning resources and enable them to exchange simple messages with LMSs. SCORM 2004 remains the dominant learning resources specification in both the corporate and academic learning technology marketplace to this day.

Additional technologies to support resource discovery and reuse have been developed since then. Among them were IMS Common Cartridge (CC), to bundle content and tool links for import into LMSs, and Learning Tools Interoperability (LTI), to enable LMSs to launch external applications such as chat or discussion engines. These were supported in turn by application-specific application programming interfaces (APIs) and specialized software, such as Blackboard's 'Collaborate' synchronous conferencing system.

Open Education Resources

Alongside the development of educational technologies is an equally important movement to support open educational resources. This movement predates the emergence of the world wide web in the early 1990s, as seen for example in Project Gutenberg, an open access archive of public domain works of classic literature. During this period, nascent free software licenses were also developed, first to support 'freeware' applications distributed across pre-internet electronic bulletin boards, then to enable the distribution of online gaming libraries, such as the LP-MUD.

Free software was formalized with Richard Stallman's development of the GNU General Public License (GPL) in 1989 (ref). This license not only promoted the free use of software, it also guaranteed access to the original source code of the application, and was 'viral', meaning that any new work produced using the source was required to carry the same license. The license did not prohibit commercial use of the software; however, the viral clause made it unattractive to companies that wanted to develop proprietary applications, so licenses without the viral clause, such as the Lesser GPL and the Berkeley Software Distribution (BSD) license, were also developed.

Content was treated differently by Stallman. To support the free distribution of software documentation and support materials, the GNU Free Documentation License (GFDL) was developed. It was similar to the GPL, but in order to preserve the integrity of documentation, it allowed authors to designate invariant sections that could not be altered in derivative versions. Learning content required a more flexible model, which was provided first by David Wiley, with the Open Content License, and then by Lawrence Lessig, with Creative Commons. Both of these licenses allowed for the free reuse and redistribution of the resource, but with conditions.

The Creative Commons (CC) license has become widely used; today thousands of libraries and millions of resources use it. It became successful because it offered flexibility to content authors and publishers. By allowing a set of license options, it let authors specify several things:
-    By using the Non-Commercial (NC) clause, authors could restrict copying and reuse of the resource to non-commercial purposes only
-    By using the Attribution (By) clause, authors could require that any reuse identify by name (and typically URL) the original author of the resource
-    By using the Share-Alike (SA) clause, authors could require that any subsequent copying and reuse of the content carry the same license as the original, as in the 'viral' GPL, and
-    By using the No-Derivatives (ND) clause, authors could require that only exact copies of the original be made and redistributed

In 2002, UNESCO, in an examination of the needs of developing nations and the potential offered by the free distribution of digital learning resources, developed the concept of the 'Open Educational Resource' (ref).

In a related but separate initiative, the Massachusetts Institute of Technology (MIT) launched what it called OpenCourseWare (OCW). This project, funded by foundations such as Hewlett, involved the conversion and distribution of all MIT course materials on the internet. Though not the equivalent of a full MIT education, these resources have been visited by millions of people around the world over the last twelve years. OpenCourseWare spawned a number of additional projects, including the OpenCourseWare Consortium and the Open University's OpenLearn initiative.

Content Syndication Networks

The concept of content syndication originates with the newspaper industry. The idea is that a news story published in one newspaper might be of interest to readers of other newspapers, and so the same story, after its original publication, is distributed to these other newspapers as well. In time, press agencies such as Associated Press and Reuters formalized the syndication of news content and provided some original news coverage as well.

In 1998 Netscape, creator of the first commercial web browser, developed a web site called NetCenter and encouraged contributors to 'syndicate' their content in it. This was supported using a technology called Rich Site Summary (RSS), developed by Netscape and, later, Dave Winer. RSS went through several early versions: RSS 0.91, the first production version; RSS 1.0, which used a web technology called the Resource Description Framework (RDF); and RSS 2.0, known as 'Really Simple Syndication', which was adopted by blog engines such as LiveJournal and Blogger. A parallel standard, called Atom, was also used to support blog post syndication, and additionally defined a standard for uploading content, including comments and new blog posts. These specifications brought content syndication to the online publishing community.
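The format at the heart of this history is simple enough to sketch. Below, a minimal RSS 2.0 feed is constructed with Python's standard library; each <item> in the channel is one syndicated resource. The channel and item contents are hypothetical.

```python
# Illustrative sketch of an RSS 2.0 feed: a <channel> of metadata plus
# <item> entries, each describing one syndicated resource. All content
# here is hypothetical.
import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Course Blog"
ET.SubElement(channel, "link").text = "http://example.com/blog"
ET.SubElement(channel, "description").text = "Posts syndicated to readers"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Week 1: Getting Started"
ET.SubElement(item, "link").text = "http://example.com/blog/week-1"
ET.SubElement(item, "description").text = "An introductory post."

# The serialized feed is what a blog engine publishes and a reader fetches.
feed = ET.tostring(rss, encoding="unicode")
print(feed)
```

A feed reader polling this document periodically is all that 'syndication' requires, which is why the format spread so quickly through the blogging world.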

In education and academia, meanwhile, a parallel initiative called the Open Archives Initiative (OAI) was created. This followed the calls of academics (in documents such as the Berlin Declaration and the Budapest Open Access Initiative) for the free distribution of academic content. Technology supporting open archives, the OAI Protocol for Metadata Harvesting, was developed to create lists of academic journal articles in repositories for retrieval and distribution. The network built on this technology now numbers in the thousands of repositories, and millions of articles, as listed in the Registry of Open Access Repositories (ROAR), and has been paralleled by the 'Open Access' movement led by Stevan Harnad and Peter Suber, who promote the creation of institutional services rather than reliance on commercial publishers.

Content syndication has been behind some of the most innovative developments on the world wide web. Microcontent services such as Twitter and Facebook originally supported RSS. Millions of people have uploaded photos to Flickr and videos to YouTube, many licensed with Creative Commons and shared with RSS. What we now know as the social web, and the social network, evolved from these origins.

Learning Technologies at the National Research Council

In 2001 members of the National Research Council's e-Learning Research Group joined a pan-Canadian network of learning resource providers called eduSource. This initiative, a three-year, $10 million project, brought together institutions like Athabasca University and the Télé-université du Québec (TÉLUQ) with colleges, corporations and other government partners. It designed and built software to support a pan-Canadian network of learning object repositories and developed a Canadian version of LOM called CanCore.

In addition to participation on the Steering Committee, NRC staff (including Stephen Downes and Rod Savoie) drafted the organization's core philosophical principles. Together with several partners in Atlantic Canada, the NRC developed the core framework for licensing and authentication, known as Distributed Digital Rights Management (DDRM). In addition, an alternative content syndication framework, called Distributed Learning Object Repository Network (DLORN) was developed.

The National Research Council also engaged in a pioneering content recommendation project with a New Brunswick company, Mosaic Technologies. Working with the Semantic Web group located in Fredericton, the eLearning group developed a product called Sifter/Filter. This product enabled the company to describe resources with metadata in such a way that properties of the learning resource could be matched with the needs of the course developer. Mosaic was eventually acquired by a British technology company, while the core technology was rebranded as a music recommendation service, RACOFI, and commercialized.

In 2003 NRC researcher Stephen Downes developed the concept of e-Learning 2.0, employing content syndication technology and social media to support learning. Learning, he argued, would be best supported through social networks and the development and free exchange of learning resources, enabling students to add their own contributions and interactions to the instruction provided by schools, employers and universities. Working with George Siemens in 2004 and 2005, he developed the learning theory known as Connectivism, built on the idea that learning takes place not just in an individual person but across a network of connections. This would be supported by open educational resources, and to this end the NRC defined a set of sustainability models for the OECD in 2006.

The National Research Council's e-Learning group continued development work in collaboration with the major LMS company Desire2Learn and the Université de Moncton. The SynergiC3 project helped D2L implement a collaborative learning content development system in its core product. NRC contributions to the technology included content harvesting and syndication technology, a semantically supported workflow engine, a data representation format called 'resource profiles', and an upgraded version of distributed digital rights management. NRC patent applications resulting from this work have been incorporated into Desire2Learn's core technology.

Massive Open Online Courses

In 2008 Stephen Downes and George Siemens developed the first Massive Open Online Course. This course, called Connectivism and Connective Knowledge (CCK08), was designed to explain and expand the learning theory they had been developing since 2004. At a Desire2Learn conference in Memphis, Tennessee, Downes and Siemens determined that the online course they were developing should emulate the structure of the theory they were describing in connectivism - that is, it should be an open course, designed as a network of connected parts, designed to facilitate communication using social networks and the sharing of learning resources.

The course was launched in September 2008, using Moodle, MediaWiki, WordPress, and an application designed by Downes called gRSShopper. This application, the same one employed in the construction of DLORN, is used to author and syndicate Downes's e-learning newsletter, OLDaily. The course was sponsored by the University of Manitoba as part of its Certificate in Adult Education (CAE) and had 24 paying students enrolled. It was also opened to the general public and attracted 2,200 registrations. Dave Cormier and Bryan Alexander coined the term 'MOOC' - Massive Open Online Course - to describe this new form of online learning.

Between 2008 and 2014 the NRC led or was a part of the following MOOCs:

-    CCK08 - the first course, conducted over 10 weeks in the fall of 2008. Weekly synchronous sessions were conducted using a conferencing system called Elluminate. This course proved the concept of the networked course and led to the use of distributed content networks to incorporate student contributions into the core course content. By the end of the course 170 students were contributing RSS feeds, while 1,800 participants were subscribed to the course content newsletter.

-    CCK09 - the second version of the same course saw many of the original participants return to help teach it, proving that the model could be duplicated, with more robust student interaction in the second iteration.

-    Critical Literacies - designed along with researchers Helene Fournier and Rita Kop, this course attempted to engage learners with the core skills they need to become proficient participants in MOOCs. It was designed in response to criticisms that participants must already be literate and educated in order to benefit from the instruction.

-    Personal Learning Environments, Networks and Knowledge (PLENK 2010) - this course examined the idea of the personal learning environment (see below) and the creation of self-organizing learning communities.

-    Change 2011 - this was the longest course, at 30 weeks, running through the fall of 2011 and the spring of 2012. It proved that a MOOC can be run for a long period of time with the same core group of participants, and that new participants can enter the course at any time.

-    CCK11 - the third version of the first course included a test of the Big Blue Button (BBB) open source conferencing system. Because Elluminate had been sold to Blackboard, becoming Blackboard Collaborate, an API connecting gRSShopper and Big Blue Button was authored. However, the number of users proved to be too much for BBB.

-    Course on the Future of Higher Education (CFHE) - organized in cooperation with Athabasca University, EDUCAUSE, the Chronicle of Higher Education, the Gates Foundation, and Desire2Learn. This course demonstrated that a traditional LMS could be used to support a connectivist-style MOOC, albeit with an integration with gRSShopper. To support this course an API between gRSShopper and D2L was developed.

-    MOOC-REL - this course was offered in French and covered the topic of Ressources Éducatives Libres (REL). It was offered in cooperation with the Organisation internationale de la Francophonie (OIF) and the Université de Moncton. MOOC-REL also involved the production of a series of videos and the development of content for the OIF.

Through six years of MOOC development the National Research Council has gained considerable expertise across numerous iterations. NRC staff have published numerous articles documenting the forms of participation and interaction that take place in MOOCs.

Following the NRC's work, the MOOC became much more widespread. Most notably, Norvig and Thrun launched the Stanford Artificial Intelligence MOOC, which attracted some 160,000 enrollments. Although the Stanford MOOC is described as a distinct form of MOOC, it is noteworthy that it has the same origins as the connectivist MOOCs. The Stanford authors reported that they were inspired by Salman Khan, who launched the Khan Academy, a series of freely accessible video lessons offered on YouTube. Both the connectivist MOOCs and the Stanford MOOCs depended crucially on free and open resources.

The Structure of a MOOC

Both forms of MOOC, the connectivist MOOC (cMOOC) and the Stanford MOOC (xMOOC), are based around a common core of content. This content serves different roles in the two types of MOOC, as discussed below.

In both types of MOOC, weekly synchronous events are held; both xMOOCs and cMOOCs record these as videos. In addition, both offer supplementary materials, such as additional videos, articles, and learning activities. In some forms of cMOOC, such as the ds106 MOOC conducted by Jim Groom of the University of Mary Washington, collaborative and creative activities are the core of the course. In xMOOCs, adaptive learning technology supports instruction; probably the most advanced example can be found in Codecademy, a system that allows students to teach themselves programming.

The difference between the cMOOC and the xMOOC lies in the distributed nature of the course. While both types of MOOC involve the creation and distribution of open educational resources, the cMOOC in addition draws on student participants to develop and distribute their own resources, and to find related resources from around the internet and incorporate them into the course. For this reason a cMOOC requires much less intensive resource production, and can be developed on a much lower budget. Even so, both forms of MOOC require the creation of some core content, to serve as the focus around which subsequent interactions and activities take place.

Additionally, the cMOOC draws on self-organizing social networks in a way the xMOOC does not. While it is true that informal learning groups - such as in-person meetings, online conversations, Facebook and Twitter groups, and other social networks - form around the xMOOC, they are not integrated into the structure of the course itself (this lack of integration is so profound that it has led some to propose a 'MOOC 2.0', which is essentially the cMOOC design). The cMOOC employs content syndication technologies to collect resources and conversations produced by course participants around the world, and to build these contributions into the structure of the course itself.
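The syndication pattern described above can be sketched briefly. In this illustrative example (hypothetical participants, posts, and URLs; standard library only), posts from several participant RSS feeds are harvested and merged into a single course-wide list - roughly the pattern an aggregator such as gRSShopper follows, though greatly simplified here.

```python
# Sketch of cMOOC-style content syndication: harvest participant RSS
# feeds and merge their posts into one course-wide collection. Feeds
# and contents are hypothetical; a real aggregator would fetch the
# feed documents over HTTP.
import xml.etree.ElementTree as ET

def harvest(feed_xml):
    """Extract (title, link) pairs from an RSS 2.0 feed document."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

participant_feeds = [
    """<rss version="2.0"><channel><title>Alice's blog</title>
       <item><title>My week 1 reflections</title>
       <link>http://alice.example.com/week1</link></item>
       </channel></rss>""",
    """<rss version="2.0"><channel><title>Bob's blog</title>
       <item><title>Thoughts on connectivism</title>
       <link>http://bob.example.com/cck</link></item>
       </channel></rss>""",
]

# Merge all participants' posts into one list, ready to be republished
# as the day's course newsletter.
course_posts = []
for feed in participant_feeds:
    course_posts.extend(harvest(feed))

for title, link in course_posts:
    print(title, "-", link)
```

The key design point is that the course never hosts the participants' content; it only aggregates and redistributes pointers to it, which is what keeps the network distributed.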

This results in a course that is not only more relevant to participants but also a dynamic entity, growing and changing over the years as new resources are created and added. Unlike traditional courses, including xMOOCs, which require redevelopment after a shelf life of three to five years, a cMOOC can be rerun indefinitely, as the four iterations of the CCK course have shown.

The xMOOCs were deployed with limited testing, using commercial software developed and funded without reference to earlier technology, including content syndication and social media. As a result, products such as Udacity, Coursera and edX - all of which have commercial intent and all of which were derived from the same Stanford AI model - involve only the presentation of video material, adaptive learning exercises, and testing. It is this lack of integration with a student community, and the resulting inflexibility of the xMOOC, that have caused large numbers of students to drop out and led Sebastian Thrun to criticize his own technology.

Next Generation: Personal Learning Environments

In two conferences in Manchester, UK, in 2005 (Alt-C) and 2006 (PLE), the concept of the personal learning environment was first defined. The core idea of the PLE was to take a student-centred point of view for the provision of learning resources and services.

As described above, the first generation of learning technologies centred around the development of learning objects. These early learning materials, which could be complex and engaging, were packaged and distributed through learning management systems. In order to access a learning object, a student would have to register in a course on an LMS. Even Moodle, an open source learning management system, requires that a student register on the service and sign up for a course in order to access a learning resource (or even to participate in a course discussion).

This form of content management does not mesh well with open educational resources, which are intended to be freely accessed and shared. This is why MIT, when developing OpenCourseWare, also developed technology called DSpace, which implements the Open Archives Initiative protocol; it needed a way to distribute academic resources without requiring that readers register and enroll.

In addition, the use of the LMS limited the portability of content. Artifacts created by students and stored in LMS ePortfolios, and comments and interactions among students, could not typically be shared beyond the LMS. Moreover, a student who was enrolled in one LMS could not take advantage of content or assets stored in another LMS. Learning records were also locked inside institutional LMSs, making transfer or recognition of prior learning difficult or impossible. Finally, many services in common use outside an LMS - including blogging tools, personal calendars and email, task managers, and microcontent and social media - could not be used inside the LMS. The student's social world and academic world remained apart.

The PLE was developed in response to the need to facilitate interoperability between these different systems. As originally conceived, it was more of a concept than a digital application - the addition of an open LMS would enable the deployment of the rest of the internet's technologies in support of learning. This was the basis for the original design of Instructure's Canvas, which enables greater interaction into and out of the learning management system. Blackboard and D2L have both developed open versions of their software for free courses as well. Data portability, however, remains a challenge.

In 2010 the National Research Council embarked on an internal PLE project called Plearn. This was a proof-of-concept prototype demonstrating that an environment could be built connecting traditional learning services, open archives and repositories, and social networking software and services. It also demonstrated that the personal learning environment was pedagogically feasible - that is, that the instructional and social supports necessary to enable learning, known as 'scaffolds', could also operate in a personal learning environment.

The Plearn project also demonstrated some of the newer technologies for interoperability. It was designed around a JavaScript Object Notation (JSON) framework. This is a data representation system that can take the place of RSS, and can be used for both content syndication and application programming interfaces (APIs). This allows different software applications to work much more closely together, and ultimately enables the replacement of SCORM with more robust messaging between components. A related technology, OAuth, which allows a person using one online social networking tool, such as Facebook, to share content with another such tool, such as WordPress, can enable an individual learner to combine the functions of a learning management system with those of a social network.
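The appeal of JSON in this role can be shown with a short sketch. The structure below is illustrative rather than drawn from any formal specification: a single JSON representation of a learning resource can be served both as syndicated content and as an API response, where an RSS deployment would need a separate XML feed.

```python
# Illustrative sketch: one JSON representation of a (hypothetical)
# learning resource, usable both for content syndication and as an
# API payload.
import json

resource = {
    "title": "Week 3: Personal Learning Environments",
    "link": "http://example.com/course/week3",
    "author": "course-facilitator",
    "published": "2010-10-04",
    "tags": ["PLE", "PLENK2010"],
}

# The same serialized object can appear in a feed of recent resources
# or be returned directly from an API endpoint.
payload = json.dumps(resource)
print(payload)
```

Because the consumer of this payload is ordinary application code rather than a dedicated feed reader, the same format serves both syndication and application-to-application messaging.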

Learning and Performance Support Systems

When the National Research Council reorganized in 2012-2013 it dissolved individual project-based groups and instead focused on a smaller set of research programs. One of these, structured around the idea of the personal learning environment, is the learning and performance support system (LPSS). This is a five-year $20 million initiative to advance and deploy core PLE technologies while at the same time supporting commercial development of related technologies and meeting key policy objectives, such as increasing employability in disadvantaged populations.

The LPSS does not replace the LMS or the MOOC. Rather, these continue to be provided by employers, academic institutions, foundations and governments in order to serve important learning, training and development needs. These agencies have the option of creating and making available open educational resources, of drawing on the relevant community to locate and create open educational resources, or of supplementing these materials with commercial content and custom software applications. The personal learning environment is designed as a means to enable an individual learner, student or staff member to access these resources from multiple providers.

Hence, just as a connectivist MOOC is based on the concept of content syndication to bring together resources from multiple providers around a single topic, LPSS employs the same technology, called the resource repository network (RRN), to allow an individual to obtain several parts of his or her education from multiple providers. At its simplest, an LPSS can be thought of as a viewing environment for multiple MOOCs. In this way, an LPSS is much more like a personal web browser than it is a resource or a service.

Like a browser, each person has his or her own instance of LPSS. Though resources can be shared, each LPSS user has his or her own list of bookmarks and resources. Because each LPSS is individually owned, it can also act as a personal agent for the student. For example, by providing credentials to remote systems (using technology such as OAuth), it can eliminate the need to register at multiple services. An LPSS can therefore serve as an important place for a student to store his or her personal learning records. A JSON-formatted data exchange specification called xAPI (the Experience API), created by ADL, makes it possible for an LPSS to communicate with learning resources directly, and with an LMS or a MOOC. This creates, finally, a personal and portable learning record.
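An xAPI statement takes the form 'actor - verb - object', expressed in JSON. The sketch below is illustrative: the verb URI follows the ADL verb registry, while the learner and activity identifiers are hypothetical.

```python
# Sketch of an xAPI (Experience API) statement: a JSON record of the
# form actor - verb - object, of the kind an LPSS could store in a
# personal learning record. Learner and activity IDs are hypothetical.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.com",
              "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/courses/cck11/week1",
               "definition": {"name": {"en-US": "CCK11 Week 1"}}},
    "timestamp": "2014-12-12T10:00:00Z",
}

# Serialized statements like this can be sent to any learning record
# store, whether it lives in an LMS, a MOOC platform, or the LPSS itself.
print(json.dumps(statement))
```

Because each statement is self-describing, records accumulated from many providers can be merged into one portable learning history.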

Like the original concept of the PLE, the LPSS envisions a user employing a number of online services to create and share content. Hence, for example, a user (who may be a student, an employee, or even an expert or a professor) may wish to store videos on YouTube, photographs on Flickr, and documents and spreadsheets on Google Docs. The LPSS should provide access to, and synchronize content among, a variety of online storage media, such as Dropbox or Cubby. A person can even host custom applications on Windows Live or Amazon Web Services. Data exchange services, in addition to authentication, are required to support this.

Advanced Services

It is commonplace today to depict online learning as something that takes place at a computer screen. However, with an increasing number of people using smartphones in their everyday lives, mobile deployment is also important. Beyond this, an LPSS envisions a world where learning resources are available outside the domain of traditional computing environments.

One application developed by LPSS, for example, is the Multiple Interactive Trainer (MINT). This is a firearms training system used by military and police. The objective of LPSS is to enable the exchange of information between the MINT trainer and the LPSS application, making the results of sessions available from one day to the next, or allowing a person to report training results to an associated LMS.

A similar application is being developed by the NRC's medical devices portfolio. Medical simulation systems that emulate the look and feel of actual humans are already deployed in hospitals around the world, including in Riyadh. The medical devices team is working with LPSS to exchange xAPI data and to support training scenarios with external applications.

Friday, December 05, 2014

#OEB14 - Open Educational Resources 2.0

Alan Tait
From Distance Learning to Open Education

- ICT is now normal - we don't talk about distance learning, but 'technology enhanced learning'
- students learn as much outside the classroom as inside
- I wonder if OERs are really delivering on their promise
    - there's a huge number of resources
    - I wonder whether MOOCs are proving more useful than OERs
    - 'Who stole our cheese' dog-whistle
    - painful for the image of open universities
- I took a MOOC from the University of East Anglia
    - it was really good, where did they learn to do that?
- FutureLearn
    - a pedagogically-designed platform from the very beginning
    - designed from the outset for social learning, mobile learning
    - 750K learners in the first few months
- MOOCs are showing there is a huge demand for learning in the world
- The adage that MOOCs have poor pedagogy isn't true any more
- We need a new taxonomy of MOOCs
- Open education is fighting a battle of a different sort:
    - open education is anti-commoditization of learning
    - can the marketisation of education be reversed?
    - distance and e-learning will not remain separate

Oivind Hoines - Norway
Norwegian Digital Learning Arena

- large scale working model for Open Educational Resources
- objectives:
    - free and open digital learning resources for all
    - teachers and pupils contributing to their own learning resources
- we have the most traffic in Norway for digital learning resources
    - costs - 55 Euros per student - 1.5 Euros per student per subject
- 2-layered approach
    - editorial staff and private companies to produce learning resources
    - people contribute their own creations and remixes
- mechanism that allows publishers to acquire and resell material back to us
- preconditions for the paradigm shift:
    - legislation - accountability
    - economy - we had financial backing
    - technology - wifi everywhere, students have laptops
    - inspiration - conferences like these
- factors for a sustainable model
    - public commitment to long-term backing
    - involvement of the teachers
    - open licensing, metadata and open formats
    - don't reinvent - use what is already working


- the field is less developed in Germany - how much is state institute, and how much is NGOs funded by the state - response - NDLA is owned by the counties

- values behind open education to create an open society - a number of OERs are not creating the open society - they are still in silos, still in the academic self-referential world
- thinking of MOOCs - do we just want students producing OERs, or do we want them co-creating knowledge - we should look at MOOCs as a space to transform society, not just higher ed
- I wonder also about educational institutions giving grades - Einstein - grading made scientific enquiry distasteful - how can we keep these as being the metrics for the system

Paul Bacsich
POERup - Policies for OER Uptake
Mapping OERs, MOOCs, Open Learning, etc

- inventory of 501 OER initiatives, 200 more in the queue
- 33 country reports, 8 case studies
- we are working in the context of UK, US activities, flexible learning barriers, etc
- picture of the map tool
    - mapping is easy with Google Map Engine and Semantic MediaWiki
    - set up database, find addresses, map it
- (OER includes MOOCs, in my definition, sorry UNESCO, the world is changing)
- (SD - very odd mapping eg, nothing in Utah)
- what about open access? - see OpenDOAR mapping open access
- Reflections:
    - don't trust maps that much, but they're useful as a guide
    - have pins had their day and should we use shading?
- EU - Opening Up Education - a hierarchy
    0. open access - general use and research
    2. Freely available / wrong license
    3. Free to closed communities
    4. Freemium
    5. Lower-cost eg $3000 degree
    6. Commercial cost - eg. UK MSc Online
- back to the Iron Triangle
- life does not stop if we have to pay 10 Euros a month for some services
    - e.g., for their music, Spotify; or Netflix
    - but some people can't pay it
    - it suddenly makes the economic models much more tractable
    - UKeU died, people moved on
    - focus on audiences vs markets
    - be honest with mainstream students, who benefits, and why
    - start reducing the system and life-cycle costs - an end to grand gestures
        - tilt the balance slightly

Larry Cooperman
UC Irvine, OCW Consortium
Open Education 2.0

- 2000-2001 - we had this view of the university - certification - etc
- then came the internet - faculty came up with the answer: open courseware
- at this time the internet was primarily a publication and distribution network
- I view an overall trend (Martin Trow) - transition from elite to mass
    - eventually would reach a state of universal higher education (50%)
        - this has already been reached in a few countries
    - higher education is the gateway, from a family perspective
    - also, a question of competition for increasingly scarce professional jobs
        - we are seeing the beginning of trends against social mobility
    - finally, it's a question of an educated citizenry taking on global problems
- Colombia - vast increase in enrollment, but...
    - decrease in graduation rates
    - attainment remains steady or rising very slightly
    - leads to the question: what targets do we have for open education
- A new iron triangle? Access - Cost - Success
    - we have more and more access, but it doesn't translate to success
    - vs. e.g. the CUNY project - they had a 23% completion rate
        - so they set up blocks of time; analyzed issues of costs; added counseling to support study habits and organizational methods
    - student success costs money - but every $4K spent on students saved $250K in society (Henry Levin study)
    - MOOCs have forced a new conversation
        - not that xMOOCs etc. have solved any problems
        - it's about those lecture halls - lecturing is ineffective, but it's the dominant mode - eg. meta-analysis of biology lectures - if they were a clinical trial, they would have been stopped
        - in moving the MOOC model forward
            - universities have solo professors - why not ask large communities how to do it
            - we have to get better at analytics
            - weak peer learning capabilities
            - open licensing will play a dramatic role

- have you explored the possibility of translating (Norwegian) content? Response: no. We have to produce in two languages.

- given a $4K expenditure results in a $250K benefit to society, is there any justification for tuition fees?

Responses (Alan) - in UK, education has been repackaged as 'entirely for personal benefit' and must be paid for entirely by the student - this is a completely untenable situation

(Larry) - what if we said the same situation should exist for primary education? It cuts across the notion that we should have universal education. In California, we began to charge fees, the master plan was abandoned. What did we develop from that? These world-class institutions. Silicon Valley was a product of the educational system. Ed should be enhanced by policies of free access.

- moved from classroom teaching to exclusively online - the internet provides an opportunity for providing resources - but missing is the human connection - I miss seeing my students face to face - what about the human element?

Responses (Paul) - in virtual schools ongoing, there is very little shared - look at longstanding virtual schools - they take great care with how they construct learning experience - they are criticized by MOOkie types for being too prescriptive - people won't open up their minds - you should go and see them

Thursday, December 04, 2014

#OEB14 - Does Data Corrupt Education

Summary of a debate at Online Educa Berlin. Note that the debate format is not serious and that participants do not necessarily agree with the points that they are making. I am summarizing talks, so if the first person is used, it is the voice of the speaker. Enjoy.

This house believes that data is corrupting education

For: Ellen Wagner

- Data as the meme
- It bothers me that we leave this trail of data everywhere we go
- Data without context is really without value
- data with context is information, and information is power, and power corrupts...
- data in the wrong hands might be misused
- will this change what we do? Absolutely - how many of you as instructional designers like the idea of your profession being reduced to algorithms?
- How do your students feel about having every move observed... constrained, analyzed, measured by the data we leave in our tracks
- Taxi apps - Uber - the drivers rate their passengers - if you have been a bad passenger you will not get a ride
- I worry about naive or nefarious uses of data to restrict access or to punish

Against: Viktor Mayer-Schönberger,

- Why does this motion even need a side attacking it in the first place? Seriously?
- Since the beginning of time we humans have tried to make sense of the world around us by observing it, by gathering data
- We already see that learning and looking at data are entwined
- Denying it would not only be foolish, it would be dehumanizing
- our progress of making sense of the world by using data has been nothing short of staggering
- not just knowing more, knowing more accurately
- it would be ridiculous not to use what we learn to inform how we learn
- we are not yet there, we have to move beyond PISA, which measures only outcomes and not process
- the other side wants to make you believe that ignorance is bliss

For: Inge de Waard

- Now what a sly fox you are... using words and making them mean different things
- moving from a digital divide to a data divide is quite dangerous
- as Facebook links up with big universities, little universities do not get that opportunity
- it costs a lot to access this data - if you don't have the data you can't do the analytics
- you might say, data is open to all - but let's point to Viktor, associated with Harvard and Oxford
- and then there are algorithms - if you see a vulnerable group is less successful, you start thinking, let's not recruit from that group
- 84 percent of government and corporate employees say education is not preparing people for the future - but big data would fix this
- but then I saw a study saying 1 in 3 jobs will disappear, not be changed, just gone - how does data fix this? we have to think about using data for different purposes
- at a time when education is being cut, big data is getting much more funding than ever
- point 2: the norm - a human can't be more than human
- a system represents the thoughts and values of those who create it
- eg. an app analyzes music and determines whether it has hit potential
- but... the masses don't move - the individuals - Gandhi, Jobs - change - if I see them I see they will change the world
- so what is the algorithm of these people? It is not big data - it is very teeny weeny small data that makes the difference
- life is quality, creativity, really living the dream
- data should be shifted toward a new goal, where everybody is included, where we have marginality and exceptions to the rule

George Siemens

- Our two opponents ended siding with us - and they started by attacking my friend of the last two hours
- Why data is not corrupting education: massification, cognitive extension, quality of learning
- in history, the quality of your education was determined by class
- today, the classroom helps, but the biggest barrier to education is poverty
- my opponents want to subject your children to a lifetime of poverty
- we will only massify education through an effective use of data. That's point 1
- analytics as a cognitive extension: like the plough, we use tech to become more productive
- if we were still planting by hand we could not support 7 billion people
- in 1550 there was already an overabundance of ideas, so we created classification systems, encyclopedias, etc
- we have always used methods to make data more manageable
- that's what we are doing with data
- through the effective use of data we can manage what was unmanageable. That's point 2.
- Elon Musk, Bill Gates, etc., say 50% of jobs will be taken over by robots
- so we all have to reskill. That will be supported by analytics.
- the assessment process - most learning for you today does not happen in a classroom
- analytics will take those experiences, match them to curricula, and give you credit for what you have done. That's point 3.

Comments from the floor follow; each speaker is separated with a blank line.
- talking about humanity making sense of the world, the starting point is ourselves
- I think simplicity is key, I think data can confuse and blind

- two women, arguing to keep everything the same, two men acting like the hunters...
- Einstein would not agree with you, not all technology is good

- whichever side you think you belong on, big data is coming
- so it's not whether we like it, but how we will adapt to it

- if we accept that the outcome of education is knowledge, and if knowledge is the outcome of relevant information, and if we accept that information is an aggregate of data, then why would we think that data corrupts education?

- corruption - almost every other session has been about disruption, moocs, etc...
- why on earth would we not want to disrupt - and corrupt is not such a different word
- why wouldn't we want to corrupt the existing educational systems? (Gilly Salmon)

- the data shows half the world is not connected to the internet
- so how do you collect data from them? Or do you just ignore them?


(George) That's an excellent point - we create tech that creates problems that can only be solved with more technology. Even in low-data environments, you're already collecting some data. We can use mobile data, human observation, etc.

(Ellen) All data are not created equal. We are equating different types of data. Until we are better at understanding what we are talking about we should maybe be careful. Remember it took 30 years before people thought online ed was a good idea.


Data is corrupting education, and I hope we are riddled with it soon. It is the natural order for it to corrupt education. The system is broken, it was built broken, so either stay on the sinking ship or hope for something else.


(Ellen) I like the idea of disruption, but 'corruption' is a self-serving deviation from an ideal. So we have to be careful here.


- if we are talking about big data, we are talking about access to data, analysis of data online
- but it's not data we want, when I see my students, I don't want to see data


(Viktor) the motion concerns data, not big data. I am not here to defend data. But I have children and sometimes care about them too much. What we want is for teachers to focus on what they do best. So we also need to help them get better, and the only way to get better is to get feedback, and the only way to get feedback is with data. I depend on data.

(Inge) I want to add that data is of course positive, but there is an example of data that has not made any difference though we know its effects: climate change. We have data but we don't act on it.


- key words - privacy, ownership, corruption, Snowden....

- if something that corrupts education is something that stops it happening, then it's not data that corrupts education, but it's subject specialists
- when the data becomes the priority but it's not used meaningfully, then it becomes corrupting because it shifts attention away from teaching

- the words education and data have not been defined
- education, if we talk about the big structure of education, then it can be corrupted
- if we change from 'big data' to 'open data' then we don't have a problem, but then owning data creates power, and that power is corruption

- at the management board of the university we are glad to have data because professors would never change otherwise
- it's not big data, it's small data, just a few students unhappy with the way professors are teaching

- I want to compare with farming - we had to mechanize
- probably we will discover many ways to improve efficiency

- it's not whether data will corrupt education - we agree it will - it's how we should use it

- something you haven't pointed out, is that data can be corrupt
- if data is corrupt, what does that do to education?

- I think the real danger is that we're talking about beliefs
- I think we should have facts, perhaps a large survey, so we can have a proper debate

Against: Viktor Mayer-Schönberger

- for Ellen and Inge it was a significant challenge, and they rose to the challenge
- we had the easier task, data doesn't corrupt education, that's clear
- we talked to Sal Khan who told a story about a small child in California who just didn't get math, didn't get it through the videos, data was collected, and they readjusted the videos, and suddenly she got it, and by the end of the summer she was the best in her math class
- it's not about the content, it's about the data we can collect to learn how to improve learning

For: Ellen Wagner

- taking a position in support of the motion is not such a stretch
- my actual experience with data that is corrupt, that has not been cleaned, that is in the wrong hands, makes this clear
- I wonder whether it's the data we should be worried about, or whether it's the people
- there is no algorithm that defines what is a good person
- we need to have more than data helping us make decisions


Ironically, we had a vote

from app: 27% for, 72% against
in room, same percentages

#OEB14 Rheingold, Lewin, Stevenson

These are summary notes of the presentations at Online Educa Berlin, 2014. If the text uses the first person, it is the presenter speaking, not me.

Aida Opoku-Mensah
E-Learning Africa

I've seen the impact of e-learning. In countries where we cannot invest enough in schools, e-learning is the only option. There are far too many people without access to learning.

- Big data - potential for personalization
- Cloud - provides computing power for many activities

The world market for e-learning is projected to reach 51.5 billion in 2018.

Introduction of Howard Rheingold.

Howard Rheingold

For me it has always been about the kind of learning and teaching that technology enables. Social learning is what makes us human. Without the connection to the students and the pedagogy, I don't think the technology would have been so useful.

I'm talking about a new culture of learning - also the title of a book by JS Brown and Doug Thomas. One aspect: learner centred. Before, you used to have to go to a school. Learners have many other options now. It's also more social - learning has been more and more peer-to-peer and not just listening to the teacher. It's also more inquiry-based. It's also more collaborative - this used to be called cheating. It's also cooperative - I talk about co-learning, being responsible for each other's learning. Networked: this is new - children in schools are able to connect with each other in ways that weren't possible before.

In 1996 I wrote this article, virtual communities. I created a 'university of the future' that cost several million dollars - I didn't imagine that 10 years later people would be able to have all this virtually for free.

In 2008 I was able via a MacArthur grant to create a social media classroom. The idea was to enable students to use different media in the same browser-based environment.

The forum, for example, enables the sort of online conversations that are really not feasible using email lists. The forum explicitly enables the group to have a voice. In school these conversations are truncated; the bell rings and students move on. Online these continue. It's about the group being more than the sum of its parts.

On another tab you find the blogs. It is the individual blogger that chooses the subject. This is more and more important at younger ages. Think of the difference between writing a paper only the teacher sees, and writing a paper the world sees.

There's also the wiki, to create pages that anyone authorized can edit. We use it to collaboratively author documents. We also use social bookmarks.

A lot of what I am talking about isn't something I knew. It is something I learned along with my students. It was about empowering the students, to take control of their own learning. Here's a picture of how the classroom has changed. Working with students we developed the concept of co-teaching. As well, we reorganized the room into a circle - there's no back row in a circle.

One of the things we did was to develop a lexicon of the words and phrases we encountered; it was up to all of us Wikipedia-style to add to the definitions. If each one of us does some little thing we are able to create something larger.

We also used mind-maps to allow people to break out of the linear, to think visually about the subjects. One of the co-teaching duties was to make a mind-map of the materials assigned.

A few years ago I decided to experiment outside the university and go into purely online teaching. I started Rheingold U for students all over the world. Interesting - I changed my greeting from 'esteemed students' to 'esteemed co-learners'. It reflected a different attitude to students, to give them power over learning.

We use a wiki, we meet online once a week - we can use Collaborate, Adobe Connect, Big Blue Button - it's an exercise in multi-tasking in many ways. I put up a page and asked participants to take new roles - search, mindmap, create lexicon, etc. This creates interactivity - you could go to YouTube and just listen. BB Collaborate has a very nice whiteboard that lets people put up stuff anonymously. We use it to brainstorm; then some students would create more formal mindmaps.

It's really not a matter of memorizing facts; it's a matter of finding connections and finding meaning.

Since then many services have started up to allow people to learn outside school - Khan Academy, Coursera, etc. If you have internet access you can learn. That started me thinking, what's the next step? What do self-learners need to know in order to effectively teach and learn from each other? More and more people are finding out they can learn what they want to learn online.

What if we eliminated the teacher altogether? How would they organize themselves? What would they do? So, I got together with a group at Berkeley and talked about it. Over time the group in the room dropped out, but an international group online worked together to create what we called 'Peeragogy'. It was an exercise in peeragogy. We studied and worked online - we used Google Hangouts and Wordpress - we created a Handbook. You can join our meetings and help us edit and revise it.

I tend to work ahead of the rest of the world and I think peeragogy will be more common over the next 2, 3, 5 years. Yes there is a place for expertise. But we can't scale up traditional brick and mortar schools. Also, there is an economic question: do people want to pay more taxes to pay for teachers and schools?

A recent MacArthur grant got me involved in open and connected learning. A good example is - instead of using the digital classroom, this is teaching and learning on the open web. I ask my students to claim a domain name and get a WordPress server on the web - costs $25 - and take control of what they do on the web. The web is not just Facebook; it is not just social classrooms. I created a course hub that aggregates their blogs.
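At its core, a course hub of this kind is a feed aggregator. As a rough illustration of the idea - a sketch, not Rheingold's actual setup - here is a minimal Python example that merges RSS items from several blogs into one newest-first stream. The inline `FEED_A` and `FEED_B` strings are hypothetical stand-ins for the XML a real hub would fetch over HTTP from each student's WordPress site.

```python
# Minimal course-hub sketch: merge RSS 2.0 feeds into one stream.
# FEED_A / FEED_B are hypothetical stand-ins for fetched feed XML.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEED_A = """<rss version="2.0"><channel><title>Blog A</title>
<item><title>First post</title>
<pubDate>Mon, 01 Dec 2014 10:00:00 +0000</pubDate></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Blog B</title>
<item><title>Reply to A</title>
<pubDate>Tue, 02 Dec 2014 09:30:00 +0000</pubDate></item>
</channel></rss>"""

def parse_feed(xml_text):
    """Yield (blog name, post title, publication date) per feed item."""
    channel = ET.fromstring(xml_text).find("channel")
    blog = channel.findtext("title")
    for item in channel.iter("item"):
        yield (blog,
               item.findtext("title"),
               parsedate_to_datetime(item.findtext("pubDate")))

def aggregate(feeds):
    """Merge items from all feeds, newest first - the 'hub' view."""
    items = [entry for feed in feeds for entry in parse_feed(feed)]
    return sorted(items, key=lambda e: e[2], reverse=True)

hub = aggregate([FEED_A, FEED_B])
for blog, title, date in hub:
    print(f"{date:%Y-%m-%d}  {blog}: {title}")
```

A real hub (WordPress with FeedWordPress, for instance) adds fetching, deduplication, and the filtering Rheingold mentions, but the aggregation step is essentially this merge-and-sort.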

They are learning how to create a public voice. I tell them, whether or not they like it, they have a public face. People talk about them. This is about them being able to take control of their own selves. We use MediaWiki, we use Discourse - working with the organizers of these open learning courses (eg. Jim Groom) led us to create an open course on creating open courses. There are core groups, and thousands worldwide, and you can use the Wordpress filters to control what you see.

Right now you can find our work online - it's not about driving this top-down, but to enable people to co-evolve the pedagogy, not to replace traditional learning, but to enhance it. Every year in the syllabus I introduce students to new ways of participating in ways they are not accustomed to. They can read books and listen to lectures, but they're not used to co-teaching and working together. They don't just take a package of learning from the course.

This is not the only way of learning. There are many things you need to learn how to do - how to change a light switch. More and more, this procedural knowledge is something we need to do together.

Lisa Lewin - Pearson   @lisalewinlive
The Ed Tech Revolution

The big news story of higher education in the 20th century was one of access. There was a big explosion of access to tertiary education, something that used to be reserved for the elite. It became accessible to the disadvantaged, to ethnic minorities, to women.

The OECD's list of nations where 40% have reached higher education attainment (North America, Europe, Japan, Korea, Australasia). I did. I'm from the midwest US. My grandfather was born in 1925 in southern Illinois. He was in WWII in Japan, and used the GI Bill to pay for his education. My mother was born 25 years later; she attended a land-grant university (the US granted land to states that could be used to start universities - many of these grew to become the mega-universities). That's where she met my dad, a bright international student from Jamaica. I was born in 1975, so when I went to Harvard I was able to take advantage of a loan program (unsubsidized loan at low interest backed by the government). It shows that policy interventions can have both a micro-impact (my case) and a macro-impact (all of yours).

Now, we're making a bet that technology can continue to improve access. It gives flexibility, more diverse options. We also hope that technology can increase access in the developing world. We're placing a bet that tech can expand access on that axis.

We also hope that tech will not only expand access to education, but also improve its quality. That's where facts and figures are a bit murky. When we look at the data, on balance, we're not getting the big learning gains that we would expect. Access has expanded, but we still have issues of completion, employability, training in 21st century skills. It's tougher having e-learning produce gains in quality.

Maybe this is a problem in innovation. Here is a (Foster's) technology S-curve (a theory that suggests technology follows a common curve, from trial and error to rapid adoption, to mainstreaming at a performance limit, to eventual replacement). I argue that we are at the end of the first curve in educational technology. We've seen some mainstreaming and rapid adoption - it's no accident that the sponsors are Pearson, Blackboard and D2L - we have LMSs and 'online homework systems'. So we might be at that phase near the top of the curve, facing diminishing returns. On the measure of learning gains - we're not seeing 'double the learning gains'. So the question becomes, how do we get to that next technology?

Some candidates. For example - brain scans - we didn't know before that during a lecture the brain slows down to an activity level below that of sleeping. We didn't know about the impact of nutrients. Etc. If we could create brain-informed teaching and learning strategies, that might get us to the next level.

Or another theory: it's not brain science that will get us to the next level, it's data science. The theory here is that we were not able to understand what was happening at the point of instruction. We only test students every few weeks or every few months. Imagine what can happen if we observe every course resource, to actually dissect the effectiveness of all the micropoints. Now we have a critical mass of students - we have a zettabyte of data. That's a lot of data; we could mine that.

But wait. Maybe the limit to progress in ed tech is a human limit. Eventually machines will be able to do everything better. If we want truly personalized learning, an algorithm will be able to produce this better than a human ever could. Engineers in augmented reality and virtual worlds suggest that machines can help us overcome physical limits - to demonstrate nuclear fission, for example, or to think about how we better scale and give trainee teachers better practice without subjecting students to their experimentation.

Or there's one more possibility. What's preventing us from getting these outsized learning gains is that we have not had a metascience that pulls all those threads together, helping us tie all of these things in a manner that's somewhat holistic. That's what learning science - a new discipline - will do.

My personal view is that any of these, any one of them, if we could figure out how to apply it correctly, could just explode our ability to apply ed tech. But that will rely on all of us doing our part to develop a better and bolder innovation ecosystem. Here's what I mean by that.

At a certain point, there's basic research. Following that, there's a technology development space - an incubation space. Early-stage startups, taking new discoveries and trying to apply them. Then you have a product development phase, and a scalability phase, where you have people whose job it is to productize those innovations, so they can be used by the masses.

We need to do two things. First we need to go deeper in all of these things - being bolder and more creative in research and innovation, etc. And the other thing is what I'll call horizontal coordination, all of these working together, so that we're actually translating the great work that's happening on the frontiers into actual products everyone can enjoy.

Mark Stevenson - @optimistontour

Hello! Hello! Hello! I will go very fast.

So I grew up in a depressive household, and that led me to create a consultancy.

Douglas Adams - three types of tech
- tech that existed when you were born - doesn't feel like tech
- tech created before middle age, that's exciting and useful
- tech created after middle age, which is pointless and makes you angry

So in learning, we have technologies that look interesting but we don't know what to do with them. Eg. genetic testing. For example, I was screened; I have the same risk as a black man for a certain disease. I said I wanted a test, my doctor said, I can only recommend that test if you are a black man. My doctor is not a racist, but our system is, relying on superficial cues rather than actual data.

Looking at the exponential growth of tech. Your mobile phone has more computing power than the entire Apollo program. You might think a car driving itself is amazing. But you can see somebody in such a car being amazed. Now they will be allowed on the roads. What does this do to insurance companies? If there's no driver, who do they blame? But humans are bad at driving cars; it will reduce accidents.

In a few years, there will be a 1 cent human genome. What does that do to medicine? Fuel created from carbon, which they feed to algae. Taking carbon from the atmosphere. Many companies bid on this project. Kleinworks won the bid, and is making diesel from the air right now. This will hit a niche market in 2023. The cost of solar power is dropping, and capacity is doubling every couple of years. Companies are switching from oil and gas to renewables.

The world will change dramatically, because all of our politics and economics are the politics and economics of energy.

Another amazing thing. 3D printing. 3D printed technology - a 3D printed heart. A German company, Nanoscribe, is 3D printing components for microchips on a nano scale. Eventually 3D printers will be able to print all the components required to make a 3D printer.

Solar powered mobile phones. Blood tests you can run on your mobile phone. Designs for printers that produce pharmaceuticals. Put these together. bioCurious.

Digital was the cocktail sausage before dinner.

You're creating an industry that is supposed to usher us into that world. JS Brown - nearly every social technology and business structure can't survive, and yet we're trying to educate people into those structures. Learning quicker than your competitors may be the only way to survive. The future needs a different model of education. Automation applied to an inefficient operation will just magnify that inefficiency - Gates.

We all want to innovate. We love innovation. It's amazing. All my clients want to do that. But what they want is innovation-wash - to appear to do it, without actually doing it.

We need radical change, but most of us can't do it, because of the way we were educated. Sinclair - "it is difficult to get a man to understand something when his salary depends on not understanding it." Remember Wang? Remember Blockbuster? Big companies die because of their culture. Culture eats strategy for breakfast. None of the new technologies were developed by incumbents.

If it doesn't interact with you, it's broken. Publishing, medicine, manufacturing, energy - not just consumers, but producers. In education, it's about co-learning. Learning is not a place - it's something that we are, something that we do. If you try to just recreate the place, you're not innovating.

Mass power is coming to you. And with power comes responsibility. With mass power, mass responsibility. That's why your work is so important. You have to become citizen and state. You can't predict the future, you can only prepare for it. The future is just a mirror, and asks us what kind of world we want. If we don't look in that mirror and see a world of justice, and humanity, and compassion, we'd better be prepared for the consequences.

Wednesday, December 03, 2014

EPortfolios and Badges Workshop - #OEB14

E-Portfolio Workshop, Online Educa Berlin: These are content summaries; if written in the first person it is the speaker speaking, not me. Some comments in parentheses are my own.

Launch of the Europortfolio German Chapter during the Workshop

Igor Balaban - Europortfolio Community and Portal

We've completed one year of the two-year project. We have 351 members from 52 countries - the 3rd version of the portal was launched yesterday. The portal is the main driver, a gateway that allows people to work together. What you will see today are four key products we have been working on this year.

The portal allows you to collaborate, to create and announce events, offline or online. You are also free to publish your recent work, to use the online collaboration, and to invite people to join you and produce a deliverable. Today we will look at some deliverables that may inspire you to get started.

This started as a project; we hope it becomes a self-sustaining network. For now we have core partners, which should be mentioned.

Serge Ravet - Maturity Matrix
Now involved in another project called BadgeEurope.

A few words about the matrix. How did we produce them? The idea was to provide a framework to describe the maturity of eportfolio initiatives, trying to describe the complex nature of learning, to go beyond the basic 'how to implement an eportfolio system', to the point where an eportfolio will have a transformative effect.

What we didn't want to do is take a framework and 'add ICT to it' or 'add portfolio to it'. An eportfolio is about learning, so the matrix should be about learning. If it didn't mention eportfolio, that would be great. But in fact what we looked at was the learning itself, and what can be added with the eportfolio.

Example of matrix (very detailed slide with tiny impossible-to-read text). Learning: aware, exploring, developing, integrated, transformative. We wanted to find specific examples of activities outside eportfolios themselves, and then look at them in eportfolios. We have a paper version of the matrix which will be distributed this afternoon.

Janet Strivens - Competency Framework
(No slides - ack!) CRA has taken the lead in the competency framework (CF). Not as mature as the maturity matrix; still open to revision.

The CF recognizes that one of the major purposes of a portfolio is to gather and display evidence of competences. We are trying to do two things:

    - arrive at a consensual understanding of the nature of competence. One of our concerns is that the view of competence presented in the project is too UK-centric. So we want to share the understanding of competence with you and ask you whether this aligns with your understanding. We have already shared with Australia and New Zealand, but they are already closely aligned to the UK. We will share this with you this afternoon.

    - to do with the technology which can support the recognition and accreditation of competence. The document takes the view that a range of functionalities is associated with eportfolio technologies. It looks at these in terms of how they support competency recognition and accreditation. We will share this with you this afternoon.

In the final version of the document, 3 or 4 months time, we intend to link the competency framework document to an ongoing and developing spreadsheet of organizations and frameworks related to the recognition and accreditation of competencies.

Marcelo Fabian Maina (in place of Lourdes Guàrdia)
UOC - creation of an online course that serves as an entry point for early adopters or non-expert users of eportfolios.

Three specific aims:
    - create an EP environment for the organizers
    - create an EP environment for the students
    - help people create their own EP

The design principles for the course modules:
    - non-stop, always open, self-study, self-paced
    - customizable, task-oriented
    - use and reuse OER (content)
    - creation of the 'learning scenario'
    - orient them toward tangible results
    - create sharing options through the social web

The course is made up of 7 modules (which I won't list here). The first five are oriented to individuals, the last two are oriented to organizations, focusing on systemic change and moving from an individual to an organizational initiative. Each module has six sections: objectives, questions, investigate, activities, etc.

We have a 'pyramid of objectives' to find what the common objectives of individuals are, so we can find what questions they all have. We will share this with you this afternoon.

The course isn't available yet - we have the structure and write-throughs of the modules - but what we hope to get is input from you.

Anastasia Economou, EUfolio, by video

The project is 'EU Classroom ePortfolios'. Started in May 2013, ends April 2015. It includes 14 partners from 7 European countries.

The need was based on 21st century skills. This was broken into the need to 'develop' these skills and to 'assess' these skills. Assessment was both 'for learning' and 'of learning'.

Goals: to design innovative ePortfolio models, to pilot these models through case studies in 40 schools, to collect evidence for the efficiency of teaching and learning approaches, and to promote strategies of effective practice.

The challenges: we needed to share a common understanding of what an eportfolio is. Most of the cases referred to higher education. We also had to prepare the teachers to use the EPs in their teaching practice; they needed to go through a transformation, and not just add this as an extra thing to their teaching. We needed to examine the affordances of the technology and adapt online learning strategies for the implementation. Finally, it's a challenge to communicate these results - we're still trying to find a way to communicate these to policy-makers.

The main tasks: design, develop, implement, suggest.

The content areas included assessment and eAssessment, 21st century skills, and the learning design process. This was to help them align not only the content knowledge but also the learning outcomes. Three steps: 21st century technologies, activities, then assessment.

The 21st century skills include:
    - ways of thinking
    - ways of working
    - tools for working
    - the 21st century world

How do we assess these skills? This is where the ePortfolios come in. There were three stages or levels to the process:
    - storage
    - workspace
    - showcase

Storage: students can search and gather material, video, audio, etc. We develop skills like uploading, downloading, search and so on.

In the third level the students will present their products: skills like presenting, sharing, assessing.

The workspace level we wanted to emphasize. We wanted students to provide evidence of the process they used to achieve their learnings: journals, poems, etc.

In the assessment, students go through: reflection, talking, creating, interacting, using forums and pages.

The process to implement the model: train the trainers, train the teachers, then train the students. We used Mahara and Office 365, and are still working on the customization of it.

We held a trainers forum, covering a common understanding of what a portfolio means, and also sessions on 21st century skills, assessment, and the portfolio design process. We also had workshops on Mahara as a tool for the portfolio project. We are now running pilots, but we don't have any data as of yet.

We decided to use an approach having to do with embedded multiple case studies. That is, for each teacher, with each class, we refer to it as a case. Since some classes have more than one teacher, this is again another case. So each student might have an eportfolio with 2 or 3 or 4 teachers.

We developed some tools to do the research. These included questionnaires, and also ways to gather notes and observations, as well as ways of looking at the artifacts.

I will show (with scepticism) some of the results from the first phase, from Cyprus. One of our questions was about the impact of the ePortfolio. There was an impact on the teachers' design process. The teachers had a transformation in terms of how to use the technology, not just as a tool, but as a part of the whole learning. They used Mahara not just in class but also in other spaces. From a teachers' discussion group in Cyprus, it was very obvious how the students were re-engaged in learning. The teachers had evidence of the students' learning.

It was important to see how the portfolio facilitated learning. We got three comments from teachers, about the use for summative assessment, about the communication and interaction during the implementation, and about the accessibility of the system.

Barriers included: infrastructure, administration, barriers at home, also, how to use ICT in the learning process.

We will finish the pilot at the end of 2014 and will work on the analysis January-February and share our analysis thereafter. We will be having a conference.

Ilona Buchem, Credit Points with Open Badges - @mediendidaktik

Open Badges for Job Application (slides CC BY-NC-SA :) )

This is a qualification project for students who come from abroad - migrants who have difficulty finding a job even though they have an education, even a PhD - which assigns 'credit points' as part of the Federal Network Integration through Qualification (IQ) project in Germany.

Each participant co-constructs an individual qualification schema. We use the badges to recognize the skills and combine them with an ePortfolio to help them find employment, by making them more visible and potentially more attractive.

The approach is not based on the 'mastery' approach - we simply observed how people were progressing while they were learning and then 'discovering' the skills, identifying them as competencies, and awarding badges for those.

One of the most important processes was competency recognition - we used ProfilPASS, a German tool, for this. For example, for a language-based badge, we linked this to the European Qualification Framework to describe different competency levels; each badge connects to a PDF that describes what the badge means - that the person can read, write and speak the language, for example.

We designed badges that have a lot of space to include as much information as possible - we created a cube, with information in three areas (and colour indicates module).

The showcase type of ePortfolio (or 'digital job application videos', as we call it in German): you have your c.v. shown on the site, with the badges integrated, to show in one place the skills of the person. Participants used the backpack to organize and display their badges on different sites; we also included the badges in the certificates. Credit points count as ECTS points (whatever those are).

A publication (in German) is coming soon.

Eric Rousselle, CEO, Discendum, Inc (an ePortfolio service)
We are a Finnish service provider; we have an open LMS platform, a badge factory, and an ePortfolio system built on Mahara. There has been for several years a strong interest in PLEs and personal learning; that's why we went with Mahara. It's not perfect, but it has big potential. Kyvyt is a national e-portfolio service designed and hosted by Discendum, launched in August 2010 and used by 38,500 learners and 200 organizations. The users are mainly schools and vocational and higher education institutions. Kyvyt is used mostly for reflective learning, student counseling, and assessment.

It's free for end users; organizations pay for the use, and users own their own contents.

There has been a very good response to date, with positive attitudes from students and authorities. The community is still growing after 4 years.

Some of the positive outcomes are: students are moving from the 'aware' stage (in the maturity matrix) to 'exploring' and 'developing'. Some schools are integrating ePortfolio practices in their curriculum. Some students have used their ePortfolios after graduation.

Challenges: there is strong LMS culture which emphasizes the teacher's role and the importance of controlling and monitoring the student's progress. As a PLE and social media application, Kyvyt is challenging for teachers and also for students who are used to being monitored. There is an issue (for example) around students forming groups by themselves. We still need good pedagogical models, user cases and templates (we have a conference every year to attract these). The big problem for many teachers is learning how to use yet another system.

We are often missing an organization-level strategy - teachers are using it, but not organizations as a whole. Also, because the institution is not hosting it, teachers and schools are reluctant to use it - IT departments are not interested in supporting a system they don't host. As a business, hosting and developing a national eportfolio system is most challenging.

For learners, many ePortfolio systems are too complex and too academic. Learners should have the feeling "it's nice" to use the portfolio, and not "I have to do it". The 'academic' eportfolio systems are probably not what citizens need in a life-long learning environment. In many countries we are creating a portfolio that is useful in an academic context, but not a real-life context.

So: badges. There is the potential to grow the value of a portfolio by adding badges. So we have developed a plug-in so we can issue badges in Mahara. We're looking at a 'badges first' approach to ePortfolio - we want to build a "better backpack" because Backpack is not a very good tool. We are looking at an open badge passport as a micro-portfolio. The idea is to gather badges, organize them in pages, and reflect and build the big picture.

But we need to bring portfolios to life. A mechanic working on engines all day doesn't see why they should go to Mahara and write a page. So we want to integrate them: they take a picture on their mobile and get a badge. The key words here are simplicity and usability. Using badges gets us away from text-heavy portfolios, and introduces the possibility of search, which for employers is very nice. They can search employees for badges and set badges as goals for skills training.

Tuesday, December 02, 2014

Have a Happy....

Hiya Steve,

I’m not religious and I don’t celebrate Christmas. That’s my choice and I don’t expect or require anyone else to do the same.

So far as I’m concerned (and so far as pretty much every other person in my position is concerned) it doesn’t matter to us whether you choose to say “Merry Christmas” or “Happy Cheese Day”.

In the same way, I’m sure you don’t expect me to say “Merry Christmas” when I don’t believe any of the religion and don’t practice the holiday in any way myself. Right?

So, here’s the thing….

A lot of public entities, stores, and other agencies have decided that they would like to include me in their holiday messages. It’s nice of them. I appreciate it. I didn’t actually ask for it, but some people have, because they feel a bit left out when the city government or local grocer says “Merry Christmas”.

But when you’re telling them to stop doing it, and to say “Merry Christmas”, you are telling them that they were wrong when they decided to include me in their holiday message.

Is that what you really meant? You speak below of the “standards and traditions upon which this great country of ours was founded and flourished.” These are not my traditions, and I’ve been around for more than a third of our country’s history.

I had always felt my contributions were part of what made this country great. My values – which include sharing and compassion and peace and understanding – are pretty bedrock for me, and I had always felt that they were fundamental to Canada too.

I know many people who do not celebrate Christmas. None of them minds if you say “Merry Christmas”, just as we respect and honour the practices of all religions. We understand how important it is for you to express your faith.

Please understand and let me live my life in quiet enjoyment. If I or anyone else chooses not to say “Merry Christmas”, it’s not because we’re oppressing you or your people, it’s not because we’re “waging a war on Christmas,” nor anything like that. It’s just that we’re doing something else that day.

I am particularly concerned about your enlisting of Canada’s military veterans to support your cause. You say their sacrifice was so that we could “have the freedom to maintain these values and traditions in the free and proud country that we call our home.”

I think you should leave them out of this. I don’t think any one of them died so that you could complain that some store is using the word “holiday” or that some politician has forgotten to “keep the ‘Christ’ in ‘Christmas.’”

Again – you and everyone else can use the word ‘Christmas’ all you want. I don’t care. But it's not "reverse discrimination" when somebody says 'holiday' or 'festival' or whatever, and it's really ridiculous to suggest it is. 

When you send messages like this, you’re telling me that you don’t regard me as equal, that you don’t value my contributions to this country, and that you even think that I am in some sense unpatriotic and dishonouring our veterans.

Maybe you might want to send me a different message for the holidays. I know that my message to you is one of peace and understanding. Have a happy Christmas.

-- Stephen

From: Steve
Sent: December 2, 2014 10:38 AM
To: Steve
Subject: Yes Virginia, there is a Santa Claus...and a CHRISTMAS, too!

Yes Virginia, there is a Santa Claus...and a CHRISTMAS, too!
Well, it’s getting to that time of year again, when children get excited over the new snowfall so they can go out and play and make snowmen, while the adults cringe at the thought of how sore their back will be after all the shovelling. It also means that children, both young and old, will be looking forward to the arrival of that special… what is it called?

Last weekend I was sitting on the sofa, mindlessly channel surfing through the hundreds of channels on television, when I came across the old movie “Yes Virginia, there is a Santa Claus”. It reminded me of how, as a child, I looked forward with great anticipation for his arrival. Well, perhaps we only get to see his “representatives” in the stores or at parades. But Santa’s spirit lives in all of us who have it in our hearts to think of others who may be less fortunate than ourselves. We also look forward to spending time with the friends and family members that we love, and look forward to sharing this special time of year with them. But what are we sharing, exactly?

Everywhere you look there are the commercial trappings of the season, as stores do everything they can to attract shoppers and entice them to spend their hard-earned dollars. All through the stores, and in their glossy sale flyers, there are phrases such as “Holiday Super Sale” or “Best Buys of the Season”. But what “Holiday”, what “Season”? Even most company or professional association parties are referred to as a “Seasonal Celebration” or a “Holiday Reception”. I have become increasingly bothered by the fact that the word “Christmas” has taken on something of a blasphemous meaning in our modern society, thanks to rampant and unbridled political correctness.  The safe, benign and meaningless word “holiday” seems to have taken its place in our vocabulary.

Have we all forgotten or lost track of the standards and traditions upon which this great country of ours was founded and flourished? Are we all embarrassed to stand up for the founding principles that our forefathers, and others who have gone before us, worked so hard to establish? We have just recently observed Remembrance Day, a day to remember and honor the supreme sacrifices made by so many men and women so that we could continue to have the freedom to maintain these values and traditions in the free and proud country that we call our home, CANADA. Let their efforts and lives not have been in vain.

While I recognize that we have people from different faiths who do not celebrate the same holidays as the rest of us, I do not agree that we need to dis-associate ourselves from the true meaning for the season.  I respect the rights of others to celebrate the holiday season in whatever form their religion (or lack thereof) may dictate. Those of different ethnic and religious backgrounds are free to celebrate their own holidays, but they are not asked to change the name so as not to “offend” the rest of us. The attempts to “de-sensitize” the name of a holiday in a hollow attempt to make it more “inclusive” for the minority, is in fact a blatant case of “reverse discrimination”.

Let’s not forget why there is a Christmas holiday in the first place. Let’s celebrate our freedom to call it what it really is. Let’s keep the “Christ” in “Christmas”, and not be so concerned about what someone else might think or say. I think it’s a small price to pay, not only for our sake and our children, but for those who worked so hard and those who died to allow us to do so.  

So Merry Christmas, Joyeux Noel, Happy Hanukkah, Happy Winter Solstice, Happy Rohatsu, Happy Id al-Adha, Happy Saturnalia, Happy Sabbat, Happy Zaratusht-no-diso, Happy Festivus, Merry Krismas, Happy Kwanzaa – whatever you call it in your own language or religion. And best wishes to all for a Happy and prosperous New Year!!!

And, yes Virginia, there is a CHRISTMAS…and a Santa Claus!

Saturday, November 29, 2014

Knowledge as Recognition

This is an assignment for a grade 12 philosophy course.


Most theories of knowledge depict knowledge as a type of belief. The idea, for example, of knowledge as 'justified true belief' dates back to Plato, who in Theaetetus argued that having a 'true opinion' about something is insufficient to say that we know about something.

In my view, knowledge isn't a type of belief or opinion at all, and knowledge isn't the sort of thing that needs to be justified at all. Instead, knowledge is a type of perception, which we call 'recognition', and knowledge serves as the justification for other things, including opinions and beliefs.

Philosophical Enquiry

One of the long-standing problems of philosophy concerns the justification of knowledge. Noam Chomsky called this problem Plato's Problem. It results from what he calls the "poverty of the stimulus." The evidence and information we receive from the senses, he argues, is insufficient to justify the knowledge we have.

A child, for example, can learn a language even though not explicitly instructed. The knowledge of a language is a way of knowing about universals, because we can generate an infinite number of different sentences in a language. But our experiences are always finite and limited. No matter how much we experience, we can always imagine, and express in language, something that goes beyond our experiences.
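The contrast between finite rules and unbounded expression can be made concrete with a toy generative grammar - a minimal sketch with invented rules, not a model of any real language:

```python
import random

# A handful of finite rewrite rules; the recursive NP rule
# ("the N that VP") licenses unboundedly many distinct sentences.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion here
    "VP": [["sleeps"], ["V", "NP"]],
    "N":  [["child"], ["dog"], ["idea"]],
    "V":  [["sees"], ["chases"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol into a list of words, capping recursion depth."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word
    rules = GRAMMAR[symbol]
    # Past the depth cap, always take the first (non-recursive) rule.
    rule = rules[0] if depth >= max_depth else random.choice(rules)
    words = []
    for part in rule:
        words.extend(generate(part, depth + 1, max_depth))
    return words

print(" ".join(generate()))  # e.g. "the dog that sees the child sleeps"
```

Raising max_depth admits ever longer sentences, so a finite stock of rules outruns any finite set of experienced examples.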

In the field of 'epistemology' - that is, the philosophy of knowledge - this is known as the problem of the justification of induction. How can we know about general properties, such as colours or shapes, when we only have limited experiences of them? How can we know universal truths, such as "2+2=4", when we have only finite experiences? We can't! This was the conclusion Descartes reached in Meditations, and he argued that experience is insufficient and unreliable. We must rely on our rationality, our innate knowledge pre-written in our mind like a "mark of God" to make sense of the world.


Cartesian scepticism, as it came to be called, marked the beginning of a tradition in philosophy called rationalism. The rationalists believed that knowledge comes first from the mind, and that through the application of the principles of rationality we can come to know about the world. That is not to discount the role of experience and perception. But these, argue the rationalists, are unreliable.

Perhaps the most important of the rationalists was Immanuel Kant. In the Critique of Pure Reason he described the "transcendental deduction" in which he established the "necessary conditions of the possibility of experience". Though direct experience of the external world is impossible, he argues, we can understand its fundamental structure because our experiences of it would be impossible without it. These fundamental structures are the principles of space and time, and these are governed by the principles of pure logic.

In the 20th century, the logical positivists attempted to realize Kant's vision by constructing a universal theory of knowledge based on fundamental data from experience - sense data - and logical inferences from that data. As described by A.J. Ayer in Language, Truth and Logic, knowledge begins with the inference of general principles from observation language, and then proceeds by means of verification of these principles through the process of making and testing predictions. The meaning of a sentence was equivalent to the conditions of its verification; a sentence that could not be verified by experience was, literally, meaningless.

Logic is, in essence, pure abstraction, produced by thought alone. Without the material of observation statements, it has no meaning on its own. Logic can be used to derive knowledge from experience, but not to produce knowledge by itself. Logical and mathematical truths are true only within the language of logic itself; they are then applied to statements about experience and used to infer new statements about experience. So the theory goes, at least.

Scientific Philosophy
In the 20th century the sciences flourished, greatly bolstered by the application of logic and mathematics to physical phenomena. Our understanding of language and meaning led to the development of computer science, which in turn led to the information revolution.

The scientific model outlined by Ayer was described in much more detail by philosophers such as Carl Hempel, who formalized the method of hypothesis formation and prediction into what he called the Deductive-Nomological Model. Another model was created by Karl Popper, who emphasized falsification rather than verification. Instead of proving that a scientific theory is true, argued Popper, we need to try to prove that theories are false.

Even our study of the mind was affected; based on logical positivist principles, thinkers like B.F. Skinner and Gilbert Ryle developed and popularized the science of behaviourism, which reduced all statements about mental phenomena (such as beliefs, desires and hopes) to statements about physical behaviours.

However, the science of logical positivism was based on a critical flaw, which was first described by W.V.O. Quine in his important paper 'Two Dogmas of Empiricism'. Logical positivism depends, he writes, on two principles that turn out to be false:
  • the analytic-synthetic distinction, which distinguishes between observation statements and pure logic
  • the principle of reduction, which argues that all knowledge can be reduced to observation and perception
Our observation of the world, our perceptions, our experiences - these are all theory laden. Scientists work in what Thomas Kuhn called paradigms and these define not only the problems that need to be solved and the principles we use to solve them, but also the meanings of the words we use and what counts (and doesn't count) as observation and data. 


Today, we don't know what exists and what is just an artifact of our mind or of our scientific theories. We are immersed in our world. The meanings of our words are not fixed and determined by observations and reality, but vary and change, as Ludwig Wittgenstein argues, with the way we use them. Our languages are not constructions we create from experience and reason, but games we play with each other in the day-to-day fact of existence.

The theoretical stance we adopt determines what we know (or at least, what we think we know) about the world. One major stance is called 'realism' - this is the idea that we can know that there is a real world, and that science is the process of studying that world. The best evidence of the reality of the world, according to this approach, is that it exists. "Here is a hand," says G.E. Moore, holding out a hand. What more proof could you have? What more proof could you need?

But realism has its sceptics. Not everything that we perceive is 'real'. Take, for example, the colour red. Is the colour red real? Plato thought it was, and that it existed on a plane of ideal forms (along with goodness and virtue, justice and beauty). But even as early as 800 years ago, philosophers like William of Ockham were questioning this doctrine. "Do not multiply entities beyond necessity," argued Ockham in the first formulation of what we now call Ockham's Razor.

Contrasting realism is the philosophy called phenomenology. Most completely described by Edmund Husserl, it is the study of the structure of human experience. This experience typically involves what Husserl calls intentionality, or the property of being directed outward toward the world. The idea is that experience represents or 'intends' external objects or properties. Experience, therefore, is something that is interpreted through a process of reason and reflection. This approach to phenomena can be illuminating; Jacques Derrida, for example, finds through the interpretation of language the essence of hidden meanings, and what he calls the difference in the meaning of a word according to the alternatives to that word imagined by the speaker.

In contemporary philosophy this has evolved into the idea that knowledge and reality are contained in representations, which are essentially mental models constructed as the result of experience. Thus, for example, when we say that a proposition P is 'true', what we mean is that 'P is true in M', where M is a model or representation of the world. Most science today is conducted through the creation and testing of models or representations as a whole. One example of this approach is described in Bas C. van Fraassen's The Scientific Image, which describes what he calls 'constructive empiricism'.
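The notion of truth relative to a model can be sketched with a toy evaluator - a hypothetical miniature propositional language invented for illustration, not van Fraassen's actual formalism:

```python
# Toy sketch of 'truth in a model': P is not true simpliciter,
# but true relative to a model M (here just a set of atomic facts).
def true_in(prop, model):
    """Evaluate a tiny propositional language against a model."""
    kind = prop[0]
    if kind == "atom":
        return prop[1] in model
    if kind == "not":
        return not true_in(prop[1], model)
    if kind == "and":
        return true_in(prop[1], model) and true_in(prop[2], model)
    if kind == "or":
        return true_in(prop[1], model) or true_in(prop[2], model)
    raise ValueError(f"unknown connective: {kind}")

M = {"snow_is_white", "grass_is_green"}   # one representation of the world
P = ("and", ("atom", "snow_is_white"),
            ("not", ("atom", "sky_is_green")))

print(true_in(P, M))  # True
```

The same proposition P comes out false in a model lacking the fact "snow_is_white" - which is the point: truth is evaluated against a representation, not against the world directly.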

Representationalism has also been advanced as a theory of mind. In his book Representations Jerry Fodor outlines the thesis that our mental states are composed of mental representations, which in turn are created out of what he calls the language of thought. Like Chomsky, Fodor believes that the capacity to build these representations is innate, and that we are born with a fully formed language of thought waiting to be realized. Knowledge, therefore, is a true and justified statement in this language, and a collection of such statements combines to form a representational state.

Toward a Theory of Knowledge as Recognition

The history of philosophy is the history of the attempt to justify knowledge through some mechanism of justifying statements describing states of affairs in the world. But this attempt has been thwarted by the fact that we do not have direct experience of the world, and hence are forced in one way or another to study ourselves in an attempt to study the world.

Ultimately, this is unsatisfying. Logic and language require that statements be true or false, or that we have what are called 'attitudes' toward propositions. If knowledge is formed of propositions, therefore, there will always be the question of what comes before knowledge, that will justify or otherwise lead us to forming these attitudes - that a proposition is believed, that it is probable, that it is true, that it is necessary, that it is intentional, and the like. But knowledge should be the foundation of these attitudes, and not the result of them. The idea, therefore, that knowledge is composed of statements in a language, or propositions in a representation, is inherently self-contradictory.

What if knowledge were something else? What if it were something that is subsymbolic? What if language were useful as a way to express knowledge, but not what knowledge actually is?

In the 1700s the Scottish philosopher David Hume conducted a sceptical enquiry into human reason and reached much the same conclusion. Among other aspects of knowledge, he examined the principle of causation. Without causation, we do not have any coherent concept of science, or of explanation, or of human action and morality, at all. So if anything is an element of knowledge, cause and effect is.

But cause and effect cannot be derived from experience, and it cannot be derived from pure reason. The idea that, because one event happens, another necessarily follows, cannot be derived from any form of inference at all. But, he observes, it is universally believed, and not only by lecturers and scientists, but by the common man, small children, and even animals! So we have knowledge, even if we don't have the language to express it.

In his Treatise of Human Nature, Hume argued that we arrive at principles like causation through the process of custom and habit. "Men will scarce ever be persuaded, that effects of such consequence can flow from principles, which are seemingly so inconsiderable, and that the far greatest part of our reasonings with all our actions and passions, can be derived from nothing but custom and habit." And "Thus it appears, that the belief or assent, which always attends the memory and senses, is nothing but the vivacity of those perceptions they present."

Today we call this form of learning 'associationism' and it forms the basis for theories of neural connectivity. Hume's basic principle of contiguity, where one idea or impression is commonly followed by another, is an instance of the principle of association described by Donald O. Hebb in what we today call Hebbian Associationism, the basic learning theory for neural networks.
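Hebb's rule - strengthen the connection between units that are active together - can be sketched in a few lines (a minimal illustration with invented patterns, not a model of any real brain):

```python
import numpy as np

def hebbian_train(patterns, lr=0.1, epochs=20):
    """Hebb's rule: dW_ij = lr * x_i * x_j for each presented pattern."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for x in patterns:
            W += lr * np.outer(x, x)  # co-active units wire together
    np.fill_diagonal(W, 0)            # no self-connections
    return W

# Two 'experiences' presented repeatedly, as in custom and habit.
patterns = np.array([[1, 1, -1, -1],
                     [-1, -1, 1, 1]])
W = hebbian_train(patterns)

# 'Recognition': a degraded cue settles onto the stored pattern,
# with no inference or symbolic description involved.
cue = np.array([1, -1, -1, -1])  # noisy version of the first pattern
recalled = np.sign(W @ cue)
print(recalled)                  # [ 1.  1. -1. -1.]
```

This is the associationist picture in miniature: repetition alone builds the weights, and 'knowing' the pattern is nothing over and above recognizing it from a partial cue.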

When we associate experiences in our mind, we aren't performing any sort of inference on them, and we don't even typically represent them in a language. We see our child's face every day, and we don't describe it to ourselves, we simply come to recognize this particular collection of features as it is presented to us every day. To 'know' that one sort of thing causes another is simply to recognize this circumstance each time we see it. To be able to read, to infer, and even to reason, is to recognize common word forms, syllogisms, or commonalities. The recognition, and the fact of recognition, is the knowledge and the justification for knowledge all rolled into one - a direct, non-inferential form of knowledge.


Ayer, A.J. Language, Truth and Logic. London: Victor Gollancz, 1936.

Chomsky, Noam. Modular Approaches to the Study of the Mind. San Diego: San Diego State University Press, 1984.

Descartes, René.  Meditations. Translated by John Veitch, 1901. 

Fodor, Jerry. The Language of Thought, Harvard University Press, 1975

Fodor, Jerry. Representations: Philosophical Essays on the Foundations of Cognitive Science, Harvard Press (UK) and MIT Press (US), 1979

Hempel, C. and P. Oppenheim. 'Studies in the Logic of Explanation.' Philosophy of Science 15 (1948): 135–175.

Hume, David. A Treatise of Human Nature. Project Gutenberg, 2010.

Husserl, E., 1963, Ideas: A General Introduction to Pure Phenomenology. Trans. W. R. Boyce Gibson. New York: Collier Books.

Kant, Immanuel. Critique of Pure Reason. Translated by Norman Kemp Smith, 1929.

Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: University of Chicago Press (1970, 2nd edition, with postscript).

Ockham's Razor. Encyclopedia Britannica. 

Plato (427 B.C. – 347 B.C.). Theaetetus.

Popper, Karl, 1959, The Logic of Scientific Discovery, London: Hutchinson.

Quine, W.V.O. 'Two Dogmas of Empiricism.' The Philosophical Review 60 (1951): 20-43.

Wittgenstein, Ludwig. Philosophical Investigations. 1958: Basil Blackwell.

van Fraassen, Bas C. The Scientific Image. Oxford: Clarendon Press, 1980.