Friday, May 23, 2014

Mass Collaboration Workshop, Day Three

Darren Gergle
Understanding and Bridging the Wikipedia Language Gap


There is an extensive literature on Wikipedia, covering everything from collaboration to participation to embedded bias to the use of Wikipedia data structures by AI and natural language processing applications.

So, in different places people speak different languages, but the presumption is that people are reporting on the same things, and have the same sorts of coverage biases. But as we consider the global nature of information we need to think about equitable information access, retaining diverse perspectives, and algorithmic biases and the role they play in information structure, representation, and consumption. A part of this talk is to look at tools that address this.

Gerhard Fischer presented a nice picture of how things are - this presentation is more about how things should be.

To what extent is there content diversity among Wikipedia language editions? I thought this would be a simple question - five years ago. There are two aspects to the implicit assumption that every language describes the same sorts of things, and so that there is a global consensus among the concepts. But there is diversity at both the concept and sub-concept levels.

- We have concept diversity - that is, the set of titles of articles.
- We have sub-concept diversity - that is, the set of topics within a given article.

First of all, the methodology: matching pages to concepts. We want (eg.) to find a core concept ('chocolate') and then match the pages in different languages to this concept. There are 'language links' on Wikipedia pages that will take you to the same page in another language. We take a hyperlingual approach, which gives us a cluster of pages (sometimes even if the link between two language editions is missing).

The 'conceptual drift' problem also occurs - the boundaries of concepts change across languages. So, eg., we have a page in English on 'river', which links to a German page, which links back to 'canal', which links to a French page, which links back to 'canyon'. So the algorithm has parameters to tune the tightness of concepts - first, it limits the number of edges from a given language, and second, it requires a minimum number of languages to allow an edge to remain.
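As a loose sketch of the idea (the function names, parameter placement, and use of networkx are my assumptions, not the authors' actual implementation - the real algorithm prunes edges, while this simplification filters whole clusters): treat pages as nodes, interlanguage links as edges, and take pruned connected components as concepts.

```python
import networkx as nx

def cluster_concepts(langlinks, max_pages_per_lang=1, min_langs=2):
    """langlinks: iterable of ((lang, title), (lang, title)) pairs
    taken from Wikipedia's interlanguage links."""
    g = nx.Graph()
    g.add_edges_from(langlinks)
    concepts = []
    for component in nx.connected_components(g):
        by_lang = {}
        for lang, title in component:
            by_lang.setdefault(lang, []).append(title)
        # Tightness parameter 1: too many pages from one language
        # in a single cluster is a sign of conceptual drift.
        if max(len(titles) for titles in by_lang.values()) > max_pages_per_lang:
            continue
        # Tightness parameter 2: require support from enough languages.
        if len(by_lang) >= min_langs:
            concepts.append(by_lang)
    return concepts
```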

So after examining the concepts we discovered that the archetypical concept looks nothing like 'chocolate'. They look more like a region or place or person (explore this?). We also find that the bulk of concepts are single-language concepts, with only a small number of concepts common to even three languages.

So - why? Maybe it's just that English, being the largest, is a superset, and everything else is just growing into it. But when we do pairwise comparisons this doesn't bear out - comparing English to German, for example, we find 40 percent of the concepts unique to each language.

So - analyzing the concepts - we can (Adafre and de Rijke, 2006) use the links on the page to create a conceptual representation of the page. Now we can compare these between languages. We look at which links they have in common, which defines an 'overlap coefficient'. The mean OC is 0.41 - that is, about 59 percent of the links in the shorter article (in one language) don't appear in the longer article (in another language).
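The overlap coefficient itself is simple - the intersection of the two link sets divided by the size of the smaller set. A minimal sketch (the toy link sets are mine):

```python
def overlap_coefficient(links_a, links_b):
    """|A intersect B| / min(|A|, |B|) over the concept links of two
    language versions of an article."""
    a, b = set(links_a), set(links_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# A mean OC of 0.41 means ~59% of the shorter article's links have no
# counterpart in the longer one. Toy example:
print(overlap_coefficient({"cocoa", "sugar", "milk"},
                          {"cocoa", "sugar", "aztec", "fermentation"}))  # ~0.67
```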

So, what accounts for this diversity?

We look at the concept of self-focus - the degree to which language-based cultures centre their descriptions around objects and entities of regional importance. So we used geo-tags to examine this, drawing out articles with a spatial location. We use a technique called 'spatial indegree sums' - so, for example, we take an area like 'Poland', find the entities within that region, and then find all the articles that link to those original entities. So we can give a particular weight to a given geographic area. This allows us to compare an indegree sum for different languages for a given geographic area, which allows us to measure self-focus.
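A rough sketch of the indegree-sum computation (the region assignment and data structures are my assumptions):

```python
from collections import defaultdict

def spatial_indegree_sums(region_of, links):
    """region_of: {title: region} for geo-tagged articles (the region
    derived from the article's spatial coordinates); links: (source,
    target) pairs within one language edition. Returns indegree per region."""
    sums = defaultdict(int)
    for source, target in links:
        if target in region_of:            # target is a geo-tagged entity
            sums[region_of[target]] += 1   # credit the target's region
    return dict(sums)

# Comparing, say, sums["Poland"] across the Polish, English and German
# editions gives a per-edition measure of self-focus.
```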

So - the global consensus hypothesis would suppose that each language has relatively equal indegree sums for each region, while the diversity hypothesis would predict that the Polish-language Wikipedia would favour Poland. The actual results show regional diversity - people regionalize the information around them. The English Wikipedia focuses on the US and Britain, the Russian Wikipedia on Russia, etc.

So: self-focus is a systemic bias in Wikipedia. People orient knowledge and language around themselves. This diversity is a serious concern for semantically based AI and NLP applications.

On the converse side: we could create applications that take advantage of this diversity. We built a system called Omnipedia that provides access to 85M+ concepts in 25 languages. Pick a concept - it displays the number of related links for each of the languages, as nodes: you click on a node, and we use a representative snippet, translated through Google Translate, to give the gist of the article. As you continue, you can find concepts that are listed in many languages - but you can still see how the same concept might be described differently.

We've done some evaluations on the system...

Some tools we've developed for researchers: an API, WikAPIdia and WikiBrain (get access from GitHub). These are tools that download and organize Wikipedia datasets. You can compute, eg., relatedness metrics (eg., how related 'engine' and 'car' are in different languages).
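As an illustration of the kind of relatedness metric such tools expose, here is a Milne-Witten-style link-based measure - a sketch of my own, not the libraries' actual API:

```python
import math

def link_relatedness(inlinks_a, inlinks_b, n_articles):
    """Milne-Witten-style relatedness: two articles are related to the
    extent that other articles link to both of them."""
    a, b = set(inlinks_a), set(inlinks_b)
    common = a & b
    if not common:
        return 0.0
    distance = ((math.log(max(len(a), len(b))) - math.log(len(common))) /
                (math.log(n_articles) - math.log(min(len(a), len(b)))))
    return max(0.0, 1.0 - distance)

# e.g. 'engine' and 'car' share many inlinking articles, so the score is high
print(link_relatedness(range(0, 500), range(300, 900), n_articles=4_000_000))
```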

Some comments & questions - there are huge differences in the practices around different language editions - first-mover advantages, management of articles, etc.

What we were doing (in part) was pushing back against a lot of data scientists and AI researchers who use the English version of Wikipedia as a baseline. The degree and extent to which there is unique content across languages is shocking - only 1 percent conceptual overlap across 25 languages.

Comment - if there are diverse concepts, then the philosophy of a 'neutral point of view' is contrary to fact. They (Wikipedians) believe there is a world-wide 'neutral point of view'.




Mass-Collaboration as a basis for procedures for e-participation
Thomas Herrmann


How can several people be supported in contributing to the solving of problems in a social context? In this situation, how is creativity encouraged? How can democratic principles be taken into account? How large a group can this support?

The concepts of social and collaborative creativity partially overlap; so do the concepts of mass collaboration, e-participation, and social/cultural creativity (but there is 0 overlap in Google Scholar for all three).

In Germany, there are laws supporting co-determination (Mitbestimmung) that determine the degree of co-management and employee contribution. There are special bodies called Betriebsrat (works councils) that manage this. So eg. there are discussions concerning working conditions and layoffs, which ultimately allow for co-determination among people.

There is a range of increasing degrees of participation, from informing people and observing people, to interaction, to co-management of outcomes. These have technical equivalents ranging from linear MOOCs to big data to brainstorming, through to discussion forums, open source, discussion, and e-voting.

So now: the case of the German Citizen dialogue on demographic trends. The goal was to propose improvements to society. It took place in  cities, with 80 people each. Three major themes:
- how will we live together
- life-long learning
- the influence of this on work environments

There were people sitting at individual tables (10 tables of 8?) which would each address one of these questions for the full day, and report back. There were two phases: first, comments on the current situation, and second, proposals for improvements.

Afterward, representatives from the cities were called to meet in Berlin, and a final report was produced.

Facilitation: at every table we had a facilitator and a minute-taker. Participants didn't see the notes being taken; from time to time a summary was read to the participants. Results were not visualized; the goals were not visible. Facilitators tried - with limited success - to encourage less active participants.

Role of the experts: experts were invited to give an opinion, but then citizens complained that they did not want to be the victims of expert participation, so in the next round the experts stayed in the background and waited for the citizens to ask for help, which never happened. Experts were not asked to support the findings or proposals, or to clarify whether an idea had been tested elsewhere. There was a strong focus on practical regional knowledge, but this was not compared at the global level with what has already been done. A lot of singular experiences were discussed but there was little attempt to discuss at a systemic level.

Overall, there was a low level of interaction between the tables - sometimes highlights were presented (by the opinion leaders) and participants had no input in the production of the final report. The experts made additional contributions which were included without any critical discussion.

Lessons learned: people who are highly interested and willing to be engaged are not necessarily well-prepared to contribute novel ideas. The process of converging a huge number of contributions and exploiting their potential for synergy is difficult. And the influence people have on real political decisions was unclear - was it really mass collaboration, or merely mass contribution?

So the questions are: how can people be encouraged to relate their ideas to each other? Eg., to escape the hidden-profile trap - people take, out of the whole set of mass contributions, those parts which sound familiar to them. The group does not base its decisions on the remaining information because it's not really shared. It's difficult for people to relate to what others have said.

Also, how can participants be motivated to take existing knowledge or expertise into account?

How can the dominator-follower relation be transformed into a symmetrical relation?

And how can research on small group creativity support be transformed to the level of large numbers of participants? How can the transition from mass contributing to mass collaboration be defined?

Which facilitation strategies are sufficient - visualization, prompting, etc.?

Why do people take part in mass collaboration? Many of them just like it - it's not because they believe they will create change, it's because they like the experience of participation.

There is a need for scaffolding and prompting, to support direct referencing of others' experiences - alternating between contributing and comparing contributions, detecting the most interesting similarities and contrasts, to create something 'new'. Also needed: the activation of more passive people.

We look for socio-technical approaches to maintain awareness of existing information - maybe we need several facilitators, to represent different positions or interest groups (eg., if you have a conservative facilitator, he or she will filter out more progressive contributions). The political directions of the attendees should be mirrored at the facilitator level.

In general, we may have:
- type A1 - not appropriate to be carried out in small groups
- type A2 - not appropriate to be carried out via mass collaboration
and
- type B1 - more efficient when carried out in small groups,
- type B2 - more efficient when carried out via mass collaboration

How can sociotechnical solutions support a shift from A to B? For example, production-blocking and fear of evaluation in small groups can be avoided by organizational and technical measures, eg., anonymous comments submitted electronically.

Claim: facilitation is needed!? Question: can facilitation develop spontaneously? The whole facilitation business was developed on the basis of the ineffectiveness and inability of small groups.

What is facilitation? 'any activity that makes tasks for others easy'.

A procedure is proposed:
- preparing participants (mess-finding, data-finding, etc.) - often we get brainstorming, etc., without this phase of preparation
- making contributions - online we typically think of written contributions; maybe there's a way to do oral contributions (it's easier for a lot of people) - while at the same time taking new content into account
- observing the process of merging

So we need to shift facilitation work to support mass participation.
- from one facilitator, to several facilitators (plus a metafacilitator)
- from qualitative comparison of viewpoints, to quantitative evaluations
- from giving every contribution equal weight, to representing the main perspectives
- from simple voting mechanisms, to complex voting mechanisms
- from contributions being visible to all participants, to extra support being needed to make the facilitator's work visible
- from interventions and prompts being delivered to all, to them being delivered selectively  

The transition from collaboration to mass collaboration is not clear. Mechanisms are needed to build the most promising subsets (7 +/- 2). Collaborative facilitation is needed for prompts and for representing diverging positions.

Comments: differences between small groups and masses - eg., the scientific community as an example of mass collaboration. Journal editors as facilitators? It seems odd - facilitators are mostly for co-located groups. In mass collaboration, facilitation is created by means of structures. Eg., Wikipedia has a particular set of structures and norms, Linux does, web communities do, but they typically don't have facilitators. (But - by contrast, go back to the field of business in the 1950s - that was mass collaboration - why shouldn't the same improvement be possible in mass collaboration - it is worth at least trying, to not rely only on the structures.)

Comment: comparing the role of curation and facilitation. Even in the OS community, people identify needs. In this context we have virtual ecologies of participation. The facilitator role is changing. Response: there is less research in the area of the effects of facilitation in the context of mass collaboration.

(SD - the presumption here seems to be that people are not participating correctly - but maybe that's backward)

Comment: people are trying to scale from small groups to masses; on the other side, people who study masses are less interested in impacting or steering or moving the masses. Also there is this concept of 'liquid democracy' by means of technology. Maybe there is research on this.



Summary Discussion


Major issues:
- collaboration vs cooperation
- democracy: governance vs democracy, collaboration vs rights of minorities
- how masses really operate
- can we design or influence how masses think or operate - can we make the masses more effective? the MOOC?
- (?) tools to study mass collaboration

Discussion: concept of the 'flipped classroom' - we do our readings ahead of time - some of us did extended abstracts - but these create resource constraints. We see this with MOOCs - they are not a big success, because they require additional resource commitments (on the part of students). And similarly with a book - it creates resource constraints.

Moving beyond the workshop? (Discussion of the idea of publication)

Thursday, May 22, 2014

Mass Collaboration Workshop, Day Two

(My presentation will appear separately)

Collective Knowledge in Social Tagging Environments
Joachim Kimmerle, KMRC


Even though it is hard to find good definitions of knowledge, most psychologists would agree that knowledge is an internal mental representation of external environments. This may seem to be contradictory to the idea of 'collective knowledge', but the point of this presentation will be that the concept makes sense. In collective knowledge, large groups of people externalize their representations into digital artifacts. An example is social tagging networks.

Background: there is a huge quantity of information on the web - this makes it hard for users to find the best resources for adequate navigation, but by the same token, the web can trigger learning. So in our work we examined the potential of social tagging, and the impact of individual and collective knowledge on social tagging systems.

Prior to the experiment, literature background: Information Foraging Theory (Fu and Pirolli, 2007) describes how individuals select links and forage for information. Users have to choose between different links and navigation paths. The 'information scent' is the perceived usefulness of the resource. This information scent is based on 'the semantic memory' (my quotes - SD). There are cognitive models of semantic memory - eg., Anderson, 1983; Collins and Loftus, 1975. Chunks are connected to other chunks; connections may have different strengths.

Also, some background on tagging: this is the practice of annotating resources with keywords (aka 'tags') on sites like Delicious, Flickr, etc. People use tags in order to structure, organize and refind resources. Social tagging aggregates the tags of all users (Trant, 2009; Vander Wal, 2005). The resulting collective tag structure represents the collective knowledge. Note that coordination here is not really needed, in contrast to other systems of mass collaboration. The tags establish a network of connections among the resources and the tags, among the resources themselves, and among the users who read them. These associations are represented in 'tag clouds', in which the font size represents the strength of the links among the tags.

In our experiments we were interested in independent variables (individual strength of association, collective strength of association) and dependent variables (navigation, incidental learning). The topic in question: wine from the country of Georgia (this topic was chosen in order to prevent people from having preconceptions about the topic). (Surprisingly, 10% indicated they had prior knowledge of this topic, so they were not used in the experiment.) We measured how people tagged, how they clicked on the tags, and what region of Georgia they would select if they wanted typical wine from Georgia. People tended to click on the larger tag (ie., the tag with the higher association strength). (Some discussion here of whether they were just clicking on the biggest links.)

Spreading activation theory: the activation of one chunk leads to the activation of associated chunks (Meyer & Schvaneveldt, 1971). This was the subject of a second study. Again people were recruited via Mechanical Turk, and again people who had prior knowledge of Georgian wine were eliminated. The secondary association was based on wine colour (specifically, 'white wine') and the questions were whether white was detected as 'typically Georgian', and which aromas were associated with white and non-white wines. Again, people selected the bigger link.
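A toy sketch of spreading activation over a chunk network (the network, weights, and decay values are invented for illustration):

```python
def spread_activation(network, sources, decay=0.5, steps=2):
    """network: {chunk: [(neighbour, strength), ...]}. Activation flows
    from the source chunks along weighted links, decaying per hop."""
    activation = {chunk: 1.0 for chunk in sources}
    for _ in range(steps):
        updated = dict(activation)
        for chunk, act in activation.items():
            for neighbour, strength in network.get(chunk, []):
                updated[neighbour] = updated.get(neighbour, 0.0) + act * strength * decay
        activation = updated
    return activation

net = {"wine": [("georgia", 0.8), ("white", 0.6)],
       "white": [("aroma", 0.7)]}
print(spread_activation(net, ["wine"]))  # 'aroma' gets activated indirectly
```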

So: individual and collective associations are both relevant. Navigation and learning are linear combinations of both types of associations. And (as a consequence?) people internalize collective knowledge - they do not use it only to select which links to click; they seem to acquire some knowledge about the topic.

Comment: but how is this different from just reading a text? And what is the role of agency? Response: I don't think it is completely different - when I read an article, I understand some, I don't understand some, I try to use it - I would label this a collaborative process. The 'collective' aspect of social tagging basically comes from what the technology does.



A socio-cognitive approach for studying and supporting interaction in the social web
Tobias Ley


We want to talk about how we can make massive social network data work for us. The focus here will be a tagging system, and in particular a system for recommending tags.

We need a good understanding of the cognitive mechanisms involved in producing and consuming the data, and you need system affordances that facilitate its use. These affordances describe the coupling points between humans and machines. They have typically been studied at the individual level (eg., the door handle) - it's a concept that sits between these things, ie., you have a door handle, but then you have a cognitive representation of it.

Affordances are socially constructed and can be created by aggregating social signals (eg., cowpaths). Similarly, in social systems, affordances can be the result of aggregated behaviour.

We want also to talk about what we can do to make these systems work better, eg., via recommendation systems.

This whole system can be viewed as a distributed cognitive system, an ecosystem of humans and artificial agents, where affordances co-evolve in the system. Cf. work on imitation in social tagging (eg., I see some tags, I decide to use them for myself - this is how some tags get popular and others don't). Also - what is the role of memory in producing social tags - how are tags represented and processed in memory?

So, from the perspective of models of imitation, how does consensus emerge? A typical imitation mechanism is preferential attachment - you just copy a tag that has been used by someone else. Another mechanism is semantic imitation - you don't copy the word directly, but the tag creates a certain representational context in your mind, and you use these other concepts (eg., a tag 'book' leads you to use a tag like 'read').
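A toy simulation contrasting the two mechanisms (the 13% semantic rate echoes the estimate reported later in this talk; everything else here is invented for illustration):

```python
import random

def next_tag(existing_tags, related, p_semantic=0.13):
    """existing_tags: earlier users' tags, with duplicates encoding
    popularity, so random.choice is frequency-proportional (preferential
    attachment). With probability p_semantic, imitate semantically
    instead: use a concept related to a seen tag rather than the tag."""
    if random.random() < p_semantic:
        seed = random.choice(existing_tags)
        return random.choice(related.get(seed, [seed]))
    return random.choice(existing_tags)

tags = ["book", "book", "novel"]   # 'book' is twice as likely to be copied
related = {"book": ["read", "library"], "novel": ["fiction"]}
print(next_tag(tags, related))
```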

A model for studying this is 'fuzzy trace theory' - you can be in different recall states when you have learned something. Sometimes you forget the words, eg., of a song, but you can remember the meaning. This can inform tag-based search - sometimes you learn the tag, sometimes you learn the gist (this is called the gist trace). See Brainerd et al., 2010.

So - the experimental study: what role do verbatim and semantic processes play when imitating tags? And can these processes be dissociated using practically significant variables? The experiment uses the RTTT procedure (basically a way for people to tag the same photo several times).

Here's the model then, to dissociate verbatim and semantic imitation: if you learn the tag, you have 'direct access' to it, and you imitate it. Or you may have no direct access, in which case you may either reconstruct the tag - possibly even the original tag, or another semantically related imitation - or finally, you may have no recall, in which case you are guessing. The model fits the data very well.

The results? The rate of semantic imitation was relatively constant at about 13%; verbatim imitation varied quite a bit (8% - 20%). Influencing factors included the semantic layout of the tags, the size of the tags, and the connectivity of the tags.

A tag recommender system was developed based on the principles of this model. It basically figures out the sorts of tags you would use, so it can recommend them to you. It is based on a connectionist network with a hidden layer, where resources are encoded in terms of topics or categories (eg., Wikipedia page categories). The recommender learns, for each person, all the tag associations they have made in the past, and then tries to match this pattern to all the different examples.
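A minimal sketch of such a network (the dimensions, random weights, and the absence of a training loop are my simplifications, not the actual system):

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, n_hidden, n_tags = 6, 4, 5     # toy sizes
W1 = rng.normal(0, 0.1, (n_topics, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_tags))

def recommend(topic_vec, w1, w2, k=3):
    """Forward pass: topic encoding -> hidden layer -> tag scores.
    Training (not shown) would fit w1 and w2 to one user's past tag
    assignments, personalizing the recommendations."""
    hidden = np.tanh(topic_vec @ w1)
    scores = hidden @ w2
    return np.argsort(scores)[::-1][:k]  # indices of the top-k tags

resource = np.array([1, 0, 0, 1, 0, 0], dtype=float)  # category encoding
print(recommend(resource, W1, W2))
```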

Can this algorithm guess which tags people will use? The algorithm was superior to semantic categories extracted from existing tags, to approaches where you just choose the most popular tags, and to a spreading-activation model. But we don't know the answer to the question yet - it would be interesting to apply it to a real system. So, eg., a 'tagging in the real world' project: eg., tagging real-world objects in construction and health care - some examples of people tagging machines (with warnings, instructions, etc). Another project - 'My learning episode' - a sensemaking interface. http://developer.learning-layers.eu/tools/bits-and-pieces/RunningDemo

Future work includes the study of tagging processes - is it an automatic process, done out of habit, or is it a deliberate process, where you look at other tags and decide whether to reuse them or not? Also, how strong is the affordance character of social recommendations? And what is the influence of the physical environment?


Network analysis of mass collaboration in Wikipedia and Wikiversity
Iassen Halatchliyski


We're looking at long-term self-organizing processes based on stigmergic methods of coordination, where the knowledge artifacts have a network structure.

What is important from the theoretical background is the focus on the link between individual and collective knowledge. (Reference to a bunch of theories by title - complex systems, socio-cultural construction, situated learning, etc.)

The approach is to use network analysis techniques, metrics and algorithms, and apply them to networks of knowledge artifacts. Three studies.

Study 1: based on the assumption that the internal logic of knowledge is reflected in the network structure of the artifacts. This leads to the exploration of the potential for modeling collaborative knowledge through its network structure. It was a cross-sectional analysis of hyperlinked articles in Wikipedia. It asked the question, "what is the editing experience of authors who contributed to pivotal articles?"

So, eg., we have a network of two combined domains - education and psychology - with about 8,000 articles and 2,000 articles respectively. We look at boundary-spanning articles using a 'betweenness' measure of centrality, as well as the articles that are 'central' in each of the two domains. The experience authors gain from working on different articles in Wikipedia is related to how pivotal the articles are that they work on - in the long run, experienced authors create pivotal articles. The explanation is that in the long run, experienced authors will write pivotal articles that set the stage for new knowledge.
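To illustrate the betweenness idea on a toy two-domain network (the articles and links are invented; the real networks had thousands of articles):

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Learning", "Memory"), ("Memory", "Cognition"),         # psychology
    ("Curriculum", "Pedagogy"), ("Pedagogy", "Assessment"),  # education
    ("Cognition", "Learning theory"), ("Learning theory", "Pedagogy"),
])

# Boundary-spanning (pivotal) articles score high on betweenness: many
# shortest paths between the two domains run through them.
centrality = nx.betweenness_centrality(g)
for article, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(article, round(score, 2))
```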

Study 2. How is the development of new knowledge related to the articles with a pivotal network position? The background here is based around preferential attachment (Barabasi and Albert, 1999) and the idea of a world of ideas with their own lifecycle. This study followed the same network as it developed in Wikipedia over 7 years. 'New knowledge' in Wikipedia may be new articles or edits to existing articles.

Study 3. How do we identify pivotal contributions and moments in a discourse process? Discourse happens continuously over time and builds on previous contributions. This study used a scientometric method for quantitative studies of scientific work, which is used to identify the main network flows in a scientific literature connected by citations. (Some diagrams shown illustrating the 'flow of ideas' through a research community.)



Olga Slivko
Is there a peer effect in knowledge generation in productive online communities? The case of German Wikipedia


From an economic perspective, we look at interactions between individuals sharing existing resources to produce a common socially valuable output. What processes drive contributions to online communities? There's pure altruism, there's social image, there's reciprocity to peers, etc. So the question is, is there any social reciprocity / social interaction in contributions to Wikipedia?

In Wikipedia there are differences from other social networks. There is a need for coordination on a single page. There are no explicit friendship structures on Wikipedia. Individuals do not get a high 'reputation' on Wikipedia (so there are no potential monetary gains). So, does the peers' activity affect knowledge generation?

In previous research on peer interaction, we find strong influence by peers on group behaviour (eg., health-related attributes such as smoking, GPA, and choice of major). And social ties matter for engagement in open source software development projects, online music, and video gaming networks.

Measurement: the utility of an individual contribution to Wikipedia.

Network of editors: editors are connected if they made a revision on the same page within a 4-week span (these links can expire as well). We can construct networks of editors out of these connections (a long 'standard' formula was used to describe this).
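A sketch of how such a co-revision network might be built (the data layout and expiry handling are my assumptions; the formal definition in the talk was more involved):

```python
from datetime import timedelta
from itertools import combinations

def editor_network(revisions, window=timedelta(days=28)):
    """revisions: list of (editor, page, timestamp) tuples with datetime
    timestamps. Two editors are linked if they revised the same page
    within the 4-week window; older co-revisions produce no edge, which
    is one way links can 'expire'."""
    by_page = {}
    for editor, page, ts in revisions:
        by_page.setdefault(page, []).append((editor, ts))
    edges = set()
    for edits in by_page.values():
        for (e1, t1), (e2, t2) in combinations(edits, 2):
            if e1 != e2 and abs(t1 - t2) <= window:
                edges.add(frozenset((e1, e2)))
    return edges
```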



Ben Shapiro
Where do we go from here? Teaching, mentoring, and learning in the games and crowdsourcing era


The point of philosophy is to change things (Marx), and maybe we should be looking at what we want mass collaboration (or cooperation?) to change things to.

We've heard of Wikipedia as a community of practice - but what does that mean? Reference to Lave and Wenger and legitimate peripheral participation. Bryant, Forte and Bruckman 2005 - role and goal changes from being on the outside to gradually becoming a core member.

So, videogames: Minecraft. It's very popular, but it's also very unstructured. A lot of people just use it to build things. Another game: Civ V - you create civilizations, with multiple ways to be victorious. People create multiple causal maps of how to be successful in a game.

Most of what's happening in a game community happens outside the game - people collaborate in ways the game environment doesn't allow you to. They create content collaboratively. World of Warcraft is another very popular game - with WoW you can't get very far in the game without working with other people. You have to apprentice with more serious players to get ahead.

It has also been explored how players playing Warcraft are engaging in scientific practices and discourses, eg., Steinkuehler & Duncan, 2007, 2008 - use of data and experiments in game play. But the hitch is they are engaging in these practices in make-believe worlds.

As designers we can do better: online environments that are as engaging as actual games, but embedded in real science.

There are some lightweight crowdsourcing games designed by scientists to collect data. Crowdsourcing: ways for a group of people to work together to solve a problem posed by an interrogator. Eg., Galaxy Zoo - players go to the site, they see a picture of a galaxy, and they are asked to label the picture - is it a spiral? Is it clockwise?

But this is an inauthentic scientific process - scientists themselves do not feel this work is worth doing as part of science (but they like getting other people to do it). Also, the players must be strictly independent - they cannot interact, because you don't want to taint the data.

Another game: Foldit. You create protein structures. You can work as teams. Questions: is this a collaboration (with scientists)? Is this a learning environment? Popovic (creator of Foldit) says they do learn, but the players themselves have no idea what the 'blue and orange thingies' do.

Learning is not the point. These are labour systems, commoditizing the work that the machines cannot yet do. But they don't help you learn, and they're not collaborative systems.

So, there's the Maker Movement. It's this convergence of DIY with open source software and electronics. We have to study this as a distributed activity system; the people have different goals. There's outreach to bring people in - eg., Make Magazine, Maker Faire, etc. Also, online communities and sites. It happens in communities, and also purpose-built places (eg. Island Asylum) - you pay a fee to access, eg., tools and such.

Right now it's the early phase where there's a lot of excitement and little study (this would be a good place to start studying).

So - what could the future of this thing look like? Eg., participatory discovery networks. We see pieces of this in things like citizen science projects - they look at distributions of bugs or count birds in trees. Or after Fukushima, when people built real-time radiation monitoring.

So - a hypothetical example - how you might revolutionize medical imaging in developing nations. They need better diagnostic information - they have nobody to read the images. So, how do you enable people to build the hardware, how do you enable them to share it, and how do you get people around the world to contribute?

We've been using a tool we developed called 'BlockyTalky' - build software by assembling blocks.

The online tools - eg., a Facebook plugin where you can access a CT scan, where people could look at it, argue about what the image shows, etc.

Imagine how the public could work together to improve public health. Could we get people working to make things and at the same time develop enough education to make devices that are useful? Could we build communities around this - people doing first-pass analyses of things, which can be passed to people with more experience?

Creating participatory discovery networks that address real problems is something we can explore.



Language Technology for Mass Collaboration in Education
DIPF Frankfurt - UKP Lab
Iryna Gurevych


The motivation for natural language processing in mass collaboration: there is evidence that information learned in collaborative learning is retained longer. There are instances of computer-supported collaborative learning, eg., discussion boards and wikis, computer-supported argumentation, and community-driven question answering.

These new forms of collaborative learning bring some challenges along with them. They result in massive amounts of unstructured textual content - people expressing their opinions, eg. - and for humans it is impossible to process all of this content. Especially for learners, it is difficult to process, and difficult to assess for quality.

The current issues can be summarized as:
- knowledge is scattered across multiple documents / locations
- difficulty having an overview
- abundance of non-relevant or low-quality content
- platforms for collaboration do not provide intelligent tools supporting users in their information needs

(The specific issue is that learners, because they are learners, do not have the background knowledge.)

Natural language processing is a key technology to address these issues - it enables users to find, extract, analyze, and utilize the information from textual data.

For example, one of the things that can be analyzed are 'edit-turn-pairs' - edits are fine-grained local modifications from a pair of adjacent revisions to a page, and include deletion, insertion, modification, or relocation (a more detailed taxonomy was created). Turns are single utterances in the associated discussion pages, and again can be given metadata.

We asked: what kind of correspondence can be identified between edits and turns? What are the distinguishing properties of corresponding edits and turns that can be used to show they are correlated? How much knowledge in the article was actually created in the discourse on the discussion page?

(Example of an edit-turn pair)

Ferschke et al (2012) propose explicit performative turns: 1. turn as explicit suggestion, 2. turn as explicit reference, 3. turn as explicit commitment to edit in the future, 4. report of a performed action. Other turns not part of this set are defined to be 'non-corresponding'. To find corresponding turns, Mechanical Turk was used. From 750 randomly selected turns, 128 corresponding turns were found.

Natural language processing methods were then used to classify the turns. We find we can detect non-corresponding turns with a rate of 0.9 and corresponding turns with a rate of 0.6.
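A bare-bones sketch of what such a classifier could look like (the features, toy examples, and [SEP] pairing convention are my assumptions, not the actual UKP pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example pairs an edit (summary) with a talk-page turn;
# label 1 = corresponding edit-turn pair, 0 = non-corresponding.
pairs = ["added citation [SEP] I will add a source for this claim",
         "fixed typo in lead [SEP] the weather section is off-topic"]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(pairs, labels)
print(model.predict(["reworded intro [SEP] I suggest rephrasing the intro"]))
```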

Ivan Habernal (continuation of the same talk)

Argumentation mining is a rich and growing field. It includes stances and reasons, how argumentation is put forward, and phenomena typical of argumentative discourse, eg., fallacies. Our research looked at controversial topics within the educational domain, mostly two-sided debates (eg., home-schooling, mainstreaming). The purpose was to enable people to support their personal decisions and to give reasons for these decisions.

(Diagram of the whole thing) - identification of topic, discovery of relevant posts, extraction of argumentative data, annotated argumentation.

So - we needed to create a corpus with which to feed our machine-learning algorithms - for example, to identify persuasive on-topic documents (as most documents are neither). This was a binary decision over 990 documents (comments to articles, forum posts), which obtained pretty good agreement (0.60).

Next, we go deeper into the structure of argumentation in the documents. There are different schools that describe arguments from different perspectives. Ours was inspired by a model proposed by Toulmin (1958) which uses five concepts in the logos dimension: claim, grounds, backing, rebuttal, and refutation. There's also the pathos dimension, an appeal to emotions. We wanted to find the corresponding text segments and associate them with the labels.

Challenges:
- the main message is often not stated
- granularity of the concepts
- the very general challenge of analyzing argumentation that the users are not even aware of using

Results from this study, of 350 documents (sampled from the first phase) with 90K tokens: we found agreement for claims and grounds, less for the others; for longer posts (eg., blog posts) we could not find agreement even for claims and grounds. The longer texts rely heavily on narratives, cite research articles, etc., and can hardly be captured by Toulmin's model.

Later this year: complex community-based question answering. Factoid questions can easily be answered by computer, but 'why' questions are much more difficult. We would like to solve this by combining user ratings, model answers, etc.

Conclusions: these are three examples of NLP technologies that can be used in mass collaboration in education. These could be used to summarize arguments, and help students form their own arguments.


Question on 'indicator words' (or 'discourse markers') - they play a role, especially in well-written text. But in social media discourse, these discourse markers were misused or missing.

Tool used: text classification framework (all open source tools).

Wednesday, May 21, 2014

Workshop on Mass Collaboration - Day One


Introduction to the Workshop - Ulrike Cress


Why a workshop on mass collaboration? Recent mass phenomena: Wikipedia, tagging, blogging, Scratch, massive open online courses and connectivism, citizen science, maker-spaces.

Who is creating these? Nature article on massively collaborative mathematics (see the wiki PolyMath).

How do we describe these phenomena? Is it just aggregation? What role does coordination play? Is it a mass phenomenon? Is it an emergent phenomenon? Is it collective intelligence? And what are the processes behind this?

In science we need new methods for this. Previously, we would passively observe - but now we want people interacting. We have to try and find what these methods can do.

Can we design mass collaboration? Is it just something that self-starts, or can we create this?

CSCL 2013, we brought together people to talk about this. This led to the larger workshop we are hosting today.


A Brief History of Mass Collaboration
Allan Collins, Northwestern University

Homo sapiens traded with others many miles away, while Neanderthals did not. Trading leads to specialization, which is the first basis for mass collaboration - it leads to people getting better at producing things, via division of labour. This altogether creates a virtuous cycle of increasing trade, specialization, and learning.

The next major development is the development of cities. Geoffrey West - when creativity is measured by patents, by researchers, etc., a city 10 times larger is 17 times as creative. Examples of hotbed communities include Cremona, Hollywood and Silicon Valley.

Marshall's theory of hot spots: these areas exist for three reasons:
- pooled resources - workers and firms are drawn to these places
- specialized products and services - for example, hairdressers and agents in Hollywood
- 'ideas in the air' - information and skills flow easily between people

Brown and Duguid on Silicon Valley - 'The Social Life of Information' - there are these 'networks of practice' across firms, eg., software engineers, LAN designers, etc. So knowledge flows across firms to find its most successful niche. Even where there are failures, that seeds different companies in the Valley. People can see who's doing what, what's doable, and what's being done.

The third major development was the invention of writing and printing. Writing allows you to share ideas at a great distance and hand down ideas to later generations. It is what makes 'study' possible (Walter Ong) which is critical to science, history and geography. Printing led to the spread of books and universal literacy.

The fourth major development: the world scientific community created a set of norms and a set of structures. For example, scientific meetings like this foster interaction among scientists. It produced scientific journals to spread findings and ideas. And it produced government support, because the more science you have the more invention you have. Scientific norms include: objectivity, replication, equal standing, and sharing of data.

Clay Shirky, in 'Here Comes Everybody', argues that the internet and the web are making it much more possible for all sorts of new ways to collaborate to occur. Here's the list:
- web communities: Xanga, fan fiction, Scratch
- collaboratories - share tools, data, designs
- digital libraries: videos, satellite data, models
- public repositories: Flickr, YouTube, Wikipedia
- collective action: Twitter, Facebook
- crowdfunding: ArtistShare, Kickstarter
- MOOCs: Coursera, Udacity, edX
- Games: Foldit
- Open source: Linux, Wikipedia

Shirky makes the point that weird and risky ideas have a much better chance of taking hold and that 'collaborative entrepreneurs' like Linus Torvalds and Jimmy Wales can succeed. This process, he writes, undermines existing hierarchy - eg., 'Voice of the Faithful' - which responded to the Church's cover-up of priests molesting young people - is only something that happens in this particular communications world.

I would like to think the ideas of Seymour Papert and Ivan Illich are now more likely to be brought to fruition - for example, the 'samba schools' in Brazil, where people teach each other. Schooling segregates us; it creates peer culture. Illich wrote of (a) resources everybody could get hold of, which is what collaboratories enable; (b) skill exchanges where people who have expertise can share it with others; (c) peer communities.

As you get more and more collaboration, you get a speed-up of innovation. This leads to what Toffler called Future Shock.



Gerhard Fischer, Center for LifeLong learning & Design, University of Colorado, Boulder


There are two types of problems transcending individual human minds or individual disciplines:
- problems of magnitude, which individuals and even large teams cannot solve and which require contributions from all stakeholders; for example, Scratch, Wikipedia, 3D-Warehouse
- problems of a systemic nature, requiring the collaboration of many different minds from different backgrounds; for example, approaches to lifelong learning, the aging population, etc.

For example, an article by Alex Pentland (MIT) in Der Spiegel asks the question: where does mass collaboration start? At six people? At 60 people? Units with more people than a mid-sized city are difficult to organize with today's technology.

So, what is the research methodology here: from how things are, to how things should be. Marx: philosophers interpret the world in different ways, but what matters is to change it. So, how do we study how things could be? This requires theoretical frameworks going beyond anecdotal examples, helping to interpret data in order to understand the context- and application-specific nature of mass collaboration.

People have employed new media in learning organizations as gift-wrapping around our existing understanding of learning and education. But we need a rethink: "distance education is not learning in a classroom at a distance." We evolve new forms of learning. For example, we define rich multidimensional landscapes of learning. Eg., 'how' - in a classroom the instructionist approach scales well, but problem-based learning does not scale.

Looking at the shift from consumer cultures to cultures of participation, where everyone can participate in personally meaningful problems. The Encyclopedia Britannica is an example of consumer culture; Wikipedia is an example of a culture of participation (of course, we should differentiate between different levels of participation). Other examples: iTunesU, YouTube, Encyclopedia, PatientsLikeMe, Scratch, Stepgreen, SketchUp and 3D Warehouse.

Two different models for knowledge construction and sharing:
- model-authoritative (you first filter, then you publish - the output filters are not needed because the content is authoritative);
- model-democratic (you publish, and then you filter - you need better output filters to (eg.) find the information).

We had a research project that analyzed the SAP Community Network (SCN). It was designed to help companies know what they know. What is the 'tea time' for mass collaboration networks? Some of the dimensions investigated included:
- responsiveness
- engagement intensity
- role distribution
- reward mechanisms

Another project: the CreativeIT wiki - in which we learned that most wikis are dead on arrival. You see this a lot - they set up a wiki where everyone can contribute, but you go back 6 months later, and there's nothing there. Our wiki - we put a lot of effort into it, we seeded it, and it did not take off. So we are studying why this is the case.

There are ecologies of participation - we can find clearly identified roles, from project leader and core members out to bug reporters, readers and passive users. (SD - and then there are a series of mechanisms designed to move participants up one level of participation.)

So we turn to MOOCs - the hype is that MOOCs will revolutionize higher education. There is both over-hype and under-estimation of MOOCs. So what did MOOCs contribute? They generated discussion transcending the narrow confines of academic circles, they represented an innovative model that shook up models of learning and institutions, and they might force residential research-based universities to focus on core competencies.

But we need frames of reference on MOOCs: many of them are looked at from economic (scale, productivity, cost) and technology perspectives. But another perspective: global versus local. MOOCs can reach out beyond national boundaries. In the US you have miles per gallon, while in Europe it is litres per 100km. Now if you think about mass collaboration and trying to create a common understanding among people, this is probably a common problem. (SD - yes!)

We worked on the Envisionment and Discovery Collaboratory (EDC) - we created 'reflection spaces' where people could act on what they were reflecting on. But we found there were only 12 people around the board - what about mass collaboration? Perhaps a virtual equivalent to the face-to-face meeting? This would fit the mass collaboration paradigm. Then you could have the local group having theirs, but you could collaborate with people in Costa Rica.

So - what are the open issues?
- can there be a lower limit to the number of participants? Is this number context-dependent?
- is there a difference between collaboration, cooperation, coordination, participation, etc.?
- in MOOCs (often with over 10K people) - does any mass collaboration take place among the participants?
- does any mass collaboration take place on Facebook and Twitter? If not, what are the future developments to create mass collaboration?
- is there 'participation overload' in the way there is information overload?
- is active participation, in whatever form, an absolute prerequisite for mass collaboration?
- are there problems society is facing that make mass collaboration a necessity?
- what is the role of personal idiosyncrasies?

In summary: mass collaboration and education is an important theme for further research.



Mass Collaboration as Co-Evolution of Cognitive and Social Systems
Ulrike Cress

Mass collaboration is typically presented as an artifact - for example, Wikipedia. So we see mass collaboration as a conflict of two systems, a cognitive system and a social system. The cognitive system is autopoietic - it exists through its own operations, it operates by a process of meaning, and is operationally closed; thoughts build on thoughts.

On the other side, the social system is also seen as a system, but it operates not through thinking but through communication, which entails reciprocal understanding. It happens between people, but it is stimulated or irritated by its environment. It, as a system, tries to make meaning - it processes information, some things become central, other things die out - it's the system that decides over time.

These systems interact - each system can be an environment for the other, and each can be irritated by the other. Each can stimulate the other's development. This is co-evolution - the systems develop each other. How do we study these systems?

1. Wikipedia - how it operates, how it builds meaning. For example, the coverage of Fukushima (Daiichi Nuclear Power Plant). Point of reference: the Wikipedia 'norms' - neutral point of view, citations from authority, etc. In the first 9 days: 1200 edits, 213 substantial. 194 had a reference, 19 did not. Some edits came with a reference, some had one added after the fact. There were only 4 deviations from neutral point of view, and these were deleted almost immediately. So the principles were followed.

The quality of the constructed knowledge: by day 9, according to experts, the Wikipedia article was a balanced and objective presentation of what happened (even though most authors had no formal knowledge of engineering or nuclear power). So, laypeople wrote the article, but as a collective the social system could make meaning.

2. What triggers co-evolution? The difference between the two systems - how they are able to irritate each other. There must be a difference between the personal and the social system. We ran a test where a person and the system had different 'pro' and 'con' arguments for an issue. What we found was that a middle level of incongruity created the most change (and the most learning).

3. Large-scale study of Wikipedia - to confirm this result. Eg., a domain: 'traditional and alternative medicine'. We found about 45,000 articles (via machine learning) - these were being modified by a large number of people. We created article profiles to determine whether articles were more or less in favour of alternative medicine. We could also do the same for the authors. We could thus calculate the incongruity between the author and the article.
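The incongruity measure itself can be thought of as a distance between two stance profiles (the scale and the numbers here are mine, for illustration):

```python
def profile(stance_scores):
    """Average the stance scores of an article's (or author's) edits,
    on a -1 (against) .. +1 (pro alternative medicine) scale; in the
    study these scores came from machine learning."""
    return sum(stance_scores) / len(stance_scores)

def incongruity(author_profile, article_profile):
    """Distance between an author's position and the article's."""
    return abs(author_profile - article_profile)

# The finding: activity peaks at a *middle* level of incongruity.
print(incongruity(profile([0.5, 0.7]), profile([-0.2, 0.0, 0.1])))  # ~0.63
```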

When authors started working on articles, they were at a middle level of incongruity. So the most productive activity took place at the middle level. It is the incongruity that triggers co-evolution. This created productive heterogeneity. But there's an optimal level of heterogeneity. If a person wants to change the system, he/she must adapt to the system. Hence mass collaboration is not free, and not democratic at all. If you bring in an idea that is not accepted by the system you will have no impact on the system.



Individual vs Collaborative Information Processing
Aileen Oeberst


There is a great deal of literature about the benefits of collaboration, but from the psychological literature we know that individual information processing is biased. Does this bias level out in collaboration? Does it translate into collective bias? Or does it become more extreme?

So, for a bit of research, we took as a question whether individual biases are mirrored in collaboratively authored Wikipedia articles. Wikipedia of course has strict rules such as verifiability and neutral point of view. These rules are intended to prevent bias, and they're pretty good, but there is a concrete counterexample.

The bias is hindsight bias. We say 'we could have foreseen this'. The bias is, your perceptions after the event are different from what they were before the event. In hindsight, the likelihood, inevitability and foreseeability of an event are always increased. Take Fukushima. Or the Turkish coal mine disaster. People try to explain these things, to make meaning out of them. So you selectively focus on information that is consistent with the outcome of the event, and ignore the information that would have spoken for a different outcome.

Hindsight bias has been repeatedly demonstrated. Once you know about it, you see it repeatedly in newspapers. It's widespread, difficult to avoid, and people usually are not aware of it. So it is reasonable to assume that hindsight bias is shared by Wikipedia authors, and that it enters into Wikipedia articles. So is there evidence for hindsight distortion in Wikipedia articles?

The method was to find cases where an article existed before the event. For example, there was an article about the Fukushima power plant. The articles were analyzed to ask 'to what extent does the article suggest the event was likely to happen?' The number would be the same before and after if there were no hindsight bias. But there was a significant increase. Of 17 events, 6 or 8 events did not show any tendency at all, while the others demonstrated increases by factors of between 1 and 7. Eg., before the accident, there was a small 'accidents' section; after the event, there was a large section on design issues and risks (mostly from data that existed before the event).

So why select these references and not others? First, because they added to the explanation. But also, there was a selection based on relevance.

Limits: we can't conclude that all of Wikipedia is biased, nor say we've found an overall pattern. There is a substantial number of articles without any tendency.

A second study looked at more events, including both unknown (disasters, etc.) and known (eg., elections) events. Categories included disasters, decisions, elections, personal decisions, etc. The same mechanism for evaluating the events was used. What we see is that only for disasters is there significant hindsight bias.

So: there was no hindsight bias based on whether the event was known in advance or not. Nor is there a general hindsight bias. But there was a hindsight bias for disasters. And - in hindsight - this makes sense. They have considerable impact. You would like to prevent them. So they elicit a particular need to be explained. And this creates a search for antecedents.

Future work: to examine whether collaboration increases bias, whether using biased resources increases bias, and to look at other biases (eg., ingroup biases, such as distorted representations of one's own group - is there a difference of representation in different language versions of Wikipedia? eg., the Manypedia analysis of the Six-Day War in different languages).



Wai-Tat Fu - Illinois

From Distributed Cognition to Collective Intelligence


Perspectives, from cognitive science, and from HCI/CSCW.

How do we define success in mass collaboration?

How do we define success for a cognitive system? Perhaps in a competition, eg., Deep Blue versus Kasparov. The outcome was controversial - Deep Blue did win. But also the process - did the program just do search? A human can evaluate 100 times fewer moves than the machine. But maybe the outcome is not what we like to consider success.

Physical Symbol System - cognitive computations involve operations on symbolic structures in a PSS. How about collaboration? Maybe we can expand from computations inside-the-head to those that involve multiple heads (cf. Andy Clark's 'extended mind' theory) (cf. also 'the Siemens answer' in connectivism - SD).

So, why does search become important? Cognitive computations *always* involve search. All computations are local, in the sense that there's no guarantee that what happens here will impact something else. When local computations need more information, they need to know where this information is and how to access it. And local symbol structures make heuristic search possible. What matters most is whether you have enough local symbol structures to make such a search possible.

(By putting all of this in computational terms, it means you can actually implement them.)

So - even though you have local computation, you always have to access symbol structures at a distance ('distal access'). The crux of the argument is this: local to distal access to information. This has to keep going until you've found the distal symbol structures that you need. This local-to-distal symbol processing is the key to intelligence.


Examples:

Chess: from 'Deep Blue' to 'modern' chess programs. They represent a shift from searching massive numbers of moves to 'intelligent search' - learning from a large number of patterns. 'Success' is defined by how much searching the computer does *not* have to do.

Web information search: based on representations of web data, so you don't have to search every listing. The agent needs 'knowledge' to choose the right action based on local information, and the local information has to have sufficient structure to enable this.

(Video - Aibo robots playing soccer - better, 'square robots' actually passing the ball (Stone and Veloso video))

How do the robots know how to do that? There must be some kind of rules telling them what the others are doing. So what sort of rules should we pay attention to? You have to have some kind of model of what the other agent is doing.

So - for collaborative systems - what kind of rules do we want people to have, and what kind of rules might we want to impose?

Successful collaboration involves local-to-distal heuristic search processes to achieve a set of goals. Ie., using local information, each agent needs to infer how to find the information it needs.

Eg. in mass collaboration - we look at how multiple minds collectively 'search' for information, where knowledge is a set of symbol structures that allow efficient search.

Eg. in animals, success depends on how well they can exploit the local-to-distal symbol structures.

Eg. in cognitive psychology / science - success depends on how well knowledge is structured.

Eg. in sociology - network analysis shows the importance of networks, of nodes and edges, where success depends on crucial network structures.

Another example: Wikipedia. Wikipedia has very good local structures. The problem with Wikipedia is that the structures seem very local - the method for linking is perhaps too stone-age - they don't provide much local information for assessing the right distal knowledge. So the challenge is: how to make local structures more coherent. Eg., maybe by having individuals provide semantic structures.

An example of such an approach is social tagging. Tags generate local structures, but not enough to help access distal information. And again, the challenge is how to improve them.

Hypotheses:
- local-to-distal heuristic search plays a central role in collaboration/cooperation
- social technologies that support collaboration will benefit from design features that facilitate the generation of local-distal structures to support distal search

Question: what is 'distal'? We have distal processes, search, structures, etc...
What we have locally is a symbol: the distal is whatever it stands for.
Question: the argument depends on the idea that success is how easy it is to access the thoughts of another person.

Interesting concept: tagging recommendations

Proposition (from Ulrike): intelligence is 'metaknowledge' (not in the sense of 'knowing how to know' but in the sense of having higher-order mental structures or representations (but does mass collaboration require some sort of collective intelligence?))

Friday, May 16, 2014

Foreign Workers and Employment in New Brunswick

Here's David W. Campbell:
This is not an academic question.  It happened this week.
Now, I suspect there would be folks lined up to tell us everything that was wrong with that firm – low wages, unappealing work, etc. – but the bottom line is that a manufacturing firm could not find 20 or so workers in an area with a 20% unemployment rate and an employment rate of around 40% (i.e. only two out of every five adults have a job).
As always seems to be the case, the real story has nothing to do with either immigration or low wages. Here’s the coverage before it started to be spun for political purposes:
“Cory Guimond, owner of Millennium Marine, says his new operation in the United States will help his company navigate through American red tape… He says federal rules in the United States restrict the entry of many types of boats made in Canada which is a concern, since he exports the bulk of his boats south of the border.” http://www.cbc.ca/news/canada/new-brunswick/escuminac-boat-maker-launches-business-in-maine-1.2639855
So, so much for the ‘real life’ example.

Now for the argument. Two issues are being discussed here, first, the temporary foreign workers program, and second, the issue of low wages.

Regarding the first: while I am a big supporter of immigration, and I believe it is key to building the province’s economy, the Temporary Foreign Workers program is not about immigration. It’s right in the name! “Temporary.”

Employers can hire workers for a maximum of two years under the program. They also have to pay a $275 processing fee for each employee. It's not surprising that a company paying minimum wage would balk at the fees. http://www.esdc.gc.ca/eng/jobs/foreign_workers/lower_skilled/index.shtml

The second issue is the size of the wage being offered. We hear over and over how a higher minimum wage results in lower employment. But the opposite is the case.

The boat factory in the example is a case in point. “Guimond also found few people willing to drive 30 minutes from Miramichi for full-time work,” says the article. http://www.cbc.ca/m/touch/canada/newbrunswick/story/1.2642404

The reason for this is that, if you are working for minimum wage, the cost of the transportation is greater than the wage being earned. The wages are so low that people are better off sitting at home – and this would be true even if there were no Employment Insurance or welfare. In Eastport, a city of 1,300 people on an island in Maine, people can walk to work. It makes a big difference.

And getting rid of the low wage jobs could be exactly what this province needs. People have more money to spend. Businesses look for projects where the business case *can* be made at $20/hour, instead of trying to compete with Bangladesh. http://www.cbc.ca/news/business/raising-minimum-wage-could-rescue-the-economy-don-pittis-1.2516796

The hallmark of a career in New Brunswick is lower wages, whether you work in the public sector or for one of the major provincial employers. Nobody wants to stay here when you can get paid that much more for the same work elsewhere (and don't say the cost of living is lower in New Brunswick – for almost everything, except maybe home ownership, it's higher).

Frankly, we in New Brunswick have tried the low-wage solution to our economic problems for decades. It's time to try a new approach, because the old approach has demonstrably been a failure.

Wednesday, May 14, 2014

Focus on the Words

I'll return to my other flow of posts in a bit, but in the meantime Dron has responded to my last four, and there are some things worth addressing.

In particular, he complains about my focusing sometimes on single paragraphs and even sentences - "out of context," he says. But from where I sit, you can't just use words as though they have any meaning you want - or no fixed meaning at all - without getting some of this analytical treatment in response. But I'll elaborate.

The Family of Ideas

There is a very large difference between his characterization of the 'family of ideas' this time and what we saw last time.

Last time: "about how to learn in a networked society, all of which adopt a systems view, all of which recognize the distributed nature of knowledge, all of which embrace the role of mediating artefacts, all of which recognize that more is different, all of which adopt a systems perspective, all of which describe or proscribe ways to engage in this new ecology."

This time: "family of theories aligned by a number of common themes (such as connectedness, networks, emergence, distributed intelligence, knowledge in non-human entities, etc) and sharing a common purpose (largely making sense of what that entails)"

These are very different. The first describes (as I pointed out in another post) a list of things very different from what I would consider to be connectivist. The second picks up on themes much more consistent with the concept. But I think this is incidental - for Dron, what seems to be more important is whether connectivism is one theory or a family of theories.

To me, it's an odd position to take. Dron writes, "I have no major problems with it if it (the Downes version of connectivism) is presented as one of a number of relevant theories in the family of connectivist ideas." But if I set as my target the articulation of what is actually the case in learning, and what actually works, then his position is very different.

"I have far greater problems accepting Downes's theory as a definitive account of what ‘connectivism' actually refers to," he says. Instead, he writes, "my (Dron's) intent is to keep the field open, to allow for multiple interpretations and acceptance of alternative perspectives around the central core."

My perspective is this: if Dron wants the freedom to assert a bunch of theories I consider wrong, then I think he is free to do that. But that's not what he wants. He wants me to present my theory in such a way that (a) it includes his theories, whatever they are, and (b) it doesn't make a judgment as to the rightness of what I am saying versus the wrongness of what he is saying.

My response to Dron is something like: why do you need this name? Why don't you go out and get your own name? Of course, he suggests that, since George coined the name, then I have no more right to it than he does. If so, then he and George can go fight it out, and leave me out of it. But from my perspective, the name George coined ten years ago described what he and I were doing at the time, which was (and is) largely the same sort of thing, and has not somehow since then become something else.

You might ask, what's in a name? But that's why we get back to precision and what words mean. In different theories words mean different things, and if he doesn't even agree with me about what words mean, it seems unreasonable for him to be saying he is proposing the same theory, or even 'more or less' the same theory.

From my perspective, it's like an advocate of intelligent design being asked to be called an evolutionist, on the basis that we talk about the same sort of thing. My response is, if it's up to me, I'm not going to call you an evolutionist, because you don't actually support evolution.

Embracing and Distorting

Dron says,  "As it has emerged in recent years, it looks as though the word 'connectivism' is acquiring a common usage that embraces but extends (and often distorts) the views of Siemens and Downes. In my opinion this is, in process terms, a good thing, though I do recognize that this is arguable and that this is fundamentally what the argument is about."

Quite so.

But the fact that (shall we say) people are "distorting" the theory I originally proposed (or that George originally proposed) does not entail that I should (a) accept that those are a version of what I originally proposed, (b) accept that they are just as true as what I originally proposed, and (c) call them by the same name as what I originally proposed.

Let's call them what they are: not similar to what was actually proposed, and in fact distortions of the meaning and intent of the term and theory.

Dron argues, "a broader, more inclusive definition means that it is easier to straddle boundaries, cross-pollinate ideas, exploit diversity, and find connection and commonality where there might otherwise be ignorance."

What that means to me is that it makes it easier for people who are not actually connectivists to claim that they are connectivists, and to assume for themselves whatever popularity and support the theory has obtained over the years.

Moreover, you don't need to call two theories by the same name in order to cross-pollinate ideas, exploit diversity, and find connection and commonality. For example, I would say that a lot of this has taken place between connectivists and constructivists. But that doesn't mean we should start calling them the same theory.

Dron makes the point, "The issues are not dissimilar to those surrounding, for example, the word ‘constructivism’, as it is used in an educational context."

Both George Siemens and I have made the point that this makes it pretty much impossible to talk about constructivism. Every time we identify something about constructivism we disagree with, someone comes along and says, "well, there's this version of constructivism that doesn't do it that way."

The term "constructivism" has become so broad as to be almost meaningless. There has been no clearly defined theory toward which investigators could get close and closer. Anyone who claimed to be a constructivist was counted as such. At a certain point, there is no theory with which people can say they agree or disagree. It becomes a fuzzy political movement, not science (which is what makes it so easy for wags like Kirschner and Willingham to assail).

Dron says he "posted a rough first-try at making sense of my own understanding of connectivism last week." I applaud him for the effort. But I reserve the right to say he got it wrong. Not because I am some kind of 'arbiter of meaning'. But because I believe that the theory he asserted (whatever he calls it) is wrong.

No argument that people have 'distorted' either the name of the theory or the theory itself compels me to change my stance on that. The only thing that would is a showing that Dron's characterization of the theory is empirically correct, and he has attempted no such defense.

Systems and Networks, Redux

To wit: his various descriptions of networks as 'systems'. Which is exactly the sort of thing I mean. It is, first, not what was ever intended by connectivism, and second, empirically wrong (ie., networks that learn are not systems).

Here's Dron's defense: "He (Downes) has a very different view of the definition of a system than the one that I hold, or the one that people who talk about weather systems, planetary systems, nervous systems and ecosystems hold."

He then attempts a positive account: "My view of systems is that they are concerned with the ways that networked entities (including other systems) interact with and affect one another, and the consequent emergent and/or designed behaviours that we can observe within them. They are concerned with connected parts that affect one or more other connected parts, be they molecules in a cloud, people in a social network, neurons, planets, stars, blood vessels or networked computers."

This is a better version but is again a very different story from the one he gave just a few days ago, using a very different vocabulary.

But there's a looseness - a sloppiness - that makes it impossible to characterize as connectivist. What does it mean to say that "systems are concerned with..."? I'm sure he doesn't mean that systems are sentient entities that have problems, thoughts and concerns. Maybe he means that the term 'system' is coextensive with the term 'networked entities that interact... (etc).' Or maybe not. We just don't know.

He says, "If that doesn't make them pretty firmly and squarely in the centre of a field concerned with how entities affect other entities in a network then it is hard to see what could."

The problem is that connectivism is not "a field concerned with how entities affect other entities in a network." That's a terrible statement of what connectivism is. It's like saying that, because "connectivism is concerned with learning" and "fascism is concerned with learning," therefore "connectivism is fascism." Sorry. It's not.

Dron is today saying he wasn't proposing a systems theory of learning, yet just last week, the words, terminology and concepts were all drawn from that theory. What this tells me is that he's coming from a very different perspective. He can deny it all he wants, but his words betray him.

That's why I focus on the words.

Sunday, May 11, 2014

Networks, Information, and Complex Adaptive Systems

Under the heading "A believable theory? Actually, yes it is. But..." Jon Dron offers a longish paragraph in two parts. In the first part, he outlines a view of my theory he thinks is OK, while in the second part he expands on the 'But...'

In this post I will focus only on the first part, and indeed, only on one sentence in the first part.

The first part is designed to be very broad, as a "view of the universe as a self-organizing information system that can, in some sense, be seen to have knowledge and to change through co-adaptation of its parts."

Let's look at what is included in this sentence:
  • It's a 'view of the universe'
  • that is self-organizing
  • that is an information system
  • that can be seen to...
    • have knowledge
    • change through co-adaptation of its parts
Viewed this way we can see how little the statement has to do with connectivism (at least as I see it).

First of all, connectivism isn't a 'view of the universe'. If we have to use this terminology at all, it is 'a way the universe can be viewed'. Connectivism, at least in my view, isn't committed to a particular ontology, isn't committed to a form of realism, isn't a view of something, but is a way of seeing. (George Siemens and I have actually had long debates on this point, and it was a frequent topic of discussion during the CCK online courses.)

The next statement is that 'it' (the universe) is self-organizing. Taken as a simple statement in itself, it is so sweeping as to be meaningless.

The key turn in this sentence occurs in the next phrase, the one depicting the universe as an information system. I'm not sure anything is an information system. I'm pretty sure that the universe is not.

We need to be clear about what we mean by an information system. We need to be clear about what we mean by information. It's easy to be muddled by the concept.

We typically think of information as mediating between data and knowledge; we've often read of the DIKW pyramid (or hierarchy, or chain - pick your metaphor). It is credited to Russell L. Ackoff, who is also a key figure in the study of systems.

According to Ackoff, a system must be either variety-increasing or variety-decreasing. A tuning knob on a radio reduces variety by allowing the user to select one station. The stereo system increases variety by producing sound no single speaker could have produced on its own. To learn, on this model, is to increase the efficiency of these choices relative to goals or outcomes.

On this model, then, information plays a key role as something that enables us to reduce the variety of possible states of affairs. Reducing the variety decreases the likelihood of making wrong choices, and hence improves learning. This function of information forms the basis of information theory. We think of information as being like 'facts' - but what is a 'fact' other than the resolution of something from anything?

We can read of the formal properties of information in authors like Fred Dretske (whom I've cited on numerous occasions before). Information is the reduction in the number of possible states of affairs relative to a person or observer. A signal ('data', on this model) presents a state of affairs: "Mary's dress is red." If you already knew Mary's dress was red, you received no information; if you knew it was either red or blue, you received some information; if you didn't know what colour it was, you received a lot of information.
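As a minimal sketch of this Dretske-style account (the function name and the numbers are mine, not Dretske's formalism verbatim, and it assumes equally likely states), information can be measured as the degree to which a signal narrows down the set of possible states of affairs:

```python
import math

def information_bits(states_before, states_after):
    """Bits carried by a signal = log2 of how much it narrows the set
    of possible states of affairs (assuming equally likely states)."""
    return math.log2(len(states_before) / len(states_after))

colours = ["red", "blue", "green", "yellow"]
print(information_bits(colours, ["red"]))          # 2.0 bits - colour unknown
print(information_bits(["red", "blue"], ["red"]))  # 1.0 bit  - red or blue
print(information_bits(["red"], ["red"]))          # 0.0 bits - already knew
```

The same signal ("Mary's dress is red") carries more, less, or no information depending entirely on what the receiver could already rule out.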

So what role does 'information' play in connectivism?

None.

First of all, connectivism does not require the concept of information in order to explain learning and the formation and breaking of connections that result in learning. Information is, at best, an overlay (what we would formally call an 'interpretation') of connectivism.(*) It's a way of talking about what we think the numbers and the processes 'really' represent. But from the perspective of connectivism (again, at least as I see it) the numbers and processes operate independently of representation.

Second, the story of knowledge and learning that draws on information theory in this way requires two separate 'black boxes' or bits of magic in order to work.

It requires, first, a sense of meaning or purpose that operates independently of the network of connected entities. We discussed this in the previous post, on death and dying.

And it requires, second, independent knowledge of the possible states of affairs in the world in order to identify, measure, and ascribe a causal role to information.

We can see why these are problematic. If the universe is one of these things, we would need a perspective from outside the universe in order to understand how the whole system works at all.

But more practically, it leaves terrible gaps in our understanding of how a person - or anything - learns at all. We need to know - somehow a priori - how our senses actually do present us with information, and we need to know, again a priori, what our meaning and purpose in life is. Otherwise we cannot explain why and how we learn.

I don't see how this story can be a part of connectivism. Connectivism is a story about how networks self-organize in a fashion that does not presume any prior conditions such as knowledge about states of affairs in the world or about the meaning and purpose of life and death. The introduction of these elements simply introduces the sort of intractable problem connectivism was intended to solve.

Finally, let's look at that last clause:  "be seen to have knowledge and to change through co-adaptation of its parts."

In addition to being explicitly an interpretation ('be seen as') this phrase introduces two other elements that are not a part of connectivism.

First, it says we 'have knowledge'. I don't agree (and I don't think George Siemens agrees either) that knowledge is something we 'have'. This is a throwback to the old 'banking theory' of education, where knowledge is something we accumulate, like a possession.

We have repeatedly said that in connectivism, to 'know' is to be organized in a certain way (and that to learn is to become organized in that way, by breaking and forming connections). Knowledge is not something we 'have'. Knowledge is something we are.

Second, it says we 'change through co-adaptation of its parts'. This is a reference back to systems theory, and in particular, adaptive systems (yes, just like the ones they're trying to build inside learning management systems).

In systems theory, adaptation is another one of those black boxes. Here's Ackoff: "A system is adaptive if, when there is a change in the environment and/or internal state which reduces its efficiency in pursuing one or more of its goals which define its function(s), it reacts or responds by changing its own state and/or that of its environment so as to increase its efficiency with respect to that goal or goals."

The two unknowns here are, first, how does a system change its own state, and second, how does this change align with a goal or goals? Again, if this is what we think connectivism is, we've introduced many more problems than we've solved.
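A toy sketch (my own construction, not Ackoff's) makes the two unknowns visible: in any runnable version of the definition, both the goal and the rule by which the system changes its own state have to be stipulated from outside the system - which is exactly the black-box worry.

```python
def adapt(state, environment, goal):
    """Ackoff-style adaptation, naively coded: when a change in the
    environment reduces efficiency at the goal, change state to recover.
    Note that the goal and the update rule are simply given a priori."""
    efficiency = -abs((state + environment) - goal)
    if efficiency < 0:
        state += 0.5 * (goal - (state + environment))
    return state

state, goal = 20.0, 22.0
for environment in [0.0, -3.0, 1.0]:  # eg., external conditions shifting
    state = adapt(state, environment, goal)
    print(round(state, 2))            # 21.0, 23.0, 22.0
```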

'Networks' and 'Complex adaptive systems' are two very different things. Saying that the one is the same as the other is, I think, a misrepresentation of both.

(*) I don't want to elide what is an important point. I've talked about the question of interpretations on numerous occasions - my most complete discussions can be found here and here.

Saturday, May 10, 2014

On Death and Dying: Evolution and Networks



As he makes clear in a comment on his recent post, part of Dron's motivation for depicting connectivism as a 'family of ideas' is so that he, too, can be considered a connectivist. In yet another paragraph from that post, he writes,
I can come up with a compelling theory of learning in social networks too, that equally explains learning in brains, and that I think fits with a connectivist account like a glove and that is incommensurate with other theories for much the same reasons. 

Now of course what's interesting is the phrasing 'learning in social networks', which is in one sense vague, and in another sense incomplete. It is vague because he doesn't distinguish (and I don't think he wants to distinguish) between learning by people in social networks, and learning by the social network (ie., by society) in social networks. And it is incomplete because, of course, connectivism isn't simply a theory about 'learning in social networks', it is a theory about learning in networks generally, including most importantly neural networks.

Of course, there is learning that happens by neural networks that happen to be in social networks. The 'connectivism as extended brain' idea speaks to that a bit (in what I recently called the 'Siemens answer'). But this is not the whole of connectivism nor even essential to it. And insofar as it does constitute a part of connectivism it is a result of the underlying theory of how networks learn, and not the explanation for it. But I digress.

Let's look at the 'compelling theory' Dron proposes (we don't know whether this is a serious proposal or something set up as an example, but it doesn't matter).
In 'Out of Control', Kelly makes the thought-provoking observation that, in any system that evolves, death is the best teacher. (I added the link).
Let's linger on the idea of death as a teacher for a moment.

Like most people, I suppose, I have thought long and hard on the prospect of death, mostly about how to avoid it, but at the same time living my life (as Becker suggests) in the denial of death. A big part of anyone's understanding of death is the need to ascribe some sort of purpose or meaning to it. A few years ago, I came to my own understanding of death and wrote a short post (understated so as not to alarm anyone - and now I can't find it again).

It was this: humans die so that humanity can evolve. If humans did not die, then humankind now would be the same as it was in some indeterminate past. Death does not have to happen - some forms of life do not die - but if entities do not die, they do not evolve. So in a certain sense, my own impending death is the sacrifice I make in order to ensure the continual advancement of the species. In the Darwinian world of tooth and claw, call me a war hero, if you like.

But here's the point: we don't die in order to enable the species to evolve. Quite the opposite. If we look at individual motivations and ambitions, what we observe is that most of us are trying not to die (or, at least, living as though we'll live forever). Death is, most crucially and importantly, involuntary. It's not something we do in order to achieve a result. It's not even something that the species does in order to evolve. It's just what we do. There is no purpose.

I can live with that.

Now let's read what Dron says about death and evolution:
From this perspective, species of organism (or connections in brains, or more complex entities like cities, or memes in social networks, or technologies) 'learn' through natural selection, either evolving to be fit to survive or dying, in a complex web of interactions in which other similar and dissimilar entities are in competition with them and also learning, which leads to higher levels of emergent behaviours, increased complexity and changes to the entire environment in which it all happens. 

This isn't usually what we mean by 'learn' so we're going to have to unravel what is in fact a very dense paragraph.
  • First, at least the following things 'learn': species of organism, connections in brains, complex entities, cities, memes
  • They 'learn' through 'natural selection'
  • 'Natural selection' is 'either evolving to be fit to survive or dying'
Then there is a whole sub-story describing the environment in which this happens, "a complex web of interactions in which":
  • other entities are in competition with them (where 'them' is the 'species that learn')
  • other entities are also learning
All of this leads to:
  • higher levels of emergent behaviours
  • increased complexity
  • changes to the entire environment in which this happens
To say that this is, in my opinion, somewhat muddled is an understatement. It incorporates what are, from my perspective, vast swaths of theory from a variety of perspectives - and in other senses it simply reveals a misunderstanding about what is going on.

To take, for example, the very first point: the list of things that learn: species of organism, connections in brains, complex entities, cities, memes. Now, the core of connectivism is that networks learn. If you don't agree with this minimal point it's hard to see that what you are advocating is a form of connectivism.

So, some of the things Dron lists are networks. Arguably, complex entities and cities are networks. But what about the rest? Is a species a network? We define a species in terms of its essential properties. A network may be composed of members of a single species, but the species itself isn't a network. It is a way we categorize (or, literally, create, through arbitrary definitions (unless you're Kripke)) a type of entity.

Is a connection a network? No, it is not (except maybe in the trivial sense of a one-connection network). A connection does not 'learn' - a network learns by means of forming connections. (And here we need to be careful - a connection could be a thing that learns - but the definition of a connection is not as a thing that learns, and probably the bulk of connections in the world are not things that learn).

Is a meme a network? A meme is a "contagious idea that replicates like a virus, passed on from mind to mind." I talk about them here. If we wanted to be technical, we could think of a meme as a type of content that is passed from entity to entity in a network. It is not the sort of thing that learns.

So - maybe Dron is just being loose with his text. Or maybe it's not clear to him what sort of entities are entities that learn. Networks learn. Species, connections and memes do not learn.

I think, maybe, he is also not clear on why networks learn. It has nothing to do with the properties of the entity in question at all, except that it is a network. Indeed, our characterization of any given network as an 'entity' is arbitrary and after the fact - which is what allows, for example, Siemens to talk about a 'person' as an entity that extends beyond a physical form, because what's important is the network we are talking about, not (say) the fact that it has livers and intestines (or beliefs and desires).

The next sentence: "they learn through natural selection." What can he mean by this?

Maybe he means that species and other types of entities learn through natural selection. But a 'type' is not a thing that learns. It's logically (though admittedly not conceptually) the same as saying 'the colour red is a thing that learns'. Abstract categorizations do not learn. Networks learn.

But what does it mean to say a network learns through natural selection? We can think of this in a macro sense or a micro sense. In the macro sense, we are to think of the network in competition with other (similar?) networks, all red in metaphorical tooth and claw. In the micro sense, we can think of the creation of and breaking of connections as a form of natural selection.

There is probably a story to be told at the micro level. But 'evolution' isn't that story; the story is the specific mechanisms associated with the making and breaking of connections, from the Hebbian 'use it or lose it' model to complex cascades of connection-forming and connection-breaking mechanisms.
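For instance, a Hebbian update can be sketched in a couple of lines (the rate constants here are arbitrary choices of mine, purely for illustration): a connection strengthens when the units on both ends are active together, and otherwise slowly decays - 'use it or lose it', with no appeal to fitness or selection anywhere in the rule.

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Strengthen the connection when pre- and post-synaptic units fire
    together; otherwise let the weight slowly decay toward zero."""
    return w + lr * pre * post - decay * w

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (0, 0)]:  # activity over time
    w = hebbian_update(w, pre, post)
    print(round(w, 3))  # rises with co-activity, drifts down without it
```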

Unfortunately, Dron appears to be talking about the macro sense, in which he is describing the network as a whole, as an entity, which is in competition with other entities. So the competition between these entities is what in some way causes (or requires?) the network to learn. And so some are "fit to survive" while others die.

My question is - why would we invoke some external phenomenon - 'evolving' - to explain why we learn? Is it to explain why we die? But the explanation for why we (as a species, as a network) die isn't that we were not "fit". It's the other way around. We evolve because we die; we do not die because we failed to evolve.

The same sort of logic applies to a discussion of learning. The things that learn (qua the things that evolve, in the micro sense) are the things that live. They do not learn in order to live, no more than a species kills off its members in order to evolve. Rather, we live (as a species, as an individual human) because we learn. Why do we survive? It's not because we were better or more fit to live. It is because we are the kind of things that live. The other kinds of things, mostly, that do not learn, that do not evolve, are the kind of things that die. That's why they're not around today.

The same logic is true of humans. We do not learn in order to compete with other humans, or in order to avoid dying, or any of the rest of it. We learn because we are things that learn. And what that means is, we are networks (not 'we have networks' or 'we use networks') (and yes, I'm being a bit fuzzy by what I mean by 'we' here, but we'll let it slide).

In a similar manner, the phrase "higher levels of emergent behaviours" reflects a similar confusion. What is a 'higher level of emergent behaviour'? No doubt there is a whole story here, but it will again belong to the same category as the story about 'fitness to survive' - a confusion of cause and effect. At best, an evaluation of a 'higher level of emergent behaviours' can be nothing more than an interpretation, an abstraction after the fact (a form, as I would say in other writings, of recognition of some entity or type of entity, embedded within an artificial counting or value system).

Dron pushes ahead:
Throw in a bit more evolutionary theory such as the need for parcellation between ecosystems along with some limited isthmuses and occasional disruptions, and we are heading towards a theory that sounds to me like a pretty plausible theory of networked learning in both social networks and brains, as well as other self-organizing systems. 
I'm not sure exactly what Dron means by 'parcellation between ecosystems', but from the few sources that associate the two terms it appears to refer to the development of modularization in complex entities. Modularity is described (p.325) as a mechanism necessary to avoid increasing pleiotropy in organisms - 'pleiotropy' occurs when one mutation impacts many unrelated traits.

Modularity is a trait found in many networks. The human neural system, for example, is modular - we can identify distinct parts of the human brain. But does modularity arise out of some sort of evolutionary 'need'? That would pretty much require us to rewrite network theory. In any case, modularity appears to arise from much more mundane circumstances - it's the result of the physical 'cost' of maintaining networks: the energy and resources required to create longer and greater numbers of connections between entities. It's a reflection of the fact that in nature (if not in mathematics) networks are not scale-free.

Once again, a dialogue in terms of 'needs' should actually be represented as a dialogue in terms of networks. Networks do not 'need' modularity, but as they grow, in non-scale-free conditions, they become modular. The result of increased modularity is reduced pleiotropy, which in turn limits the range and impact of mutations (all other things being equal).
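A small simulation (my own sketch; every number in it is an arbitrary choice) illustrates the claim: scatter nodes in space, make the probability of a connection fall off steeply with distance - the 'cost' of long links - and clustering simply falls out, with no evolutionary 'need' anywhere in the model.

```python
import random

random.seed(0)
nodes = [(random.random(), random.random()) for _ in range(200)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Connection probability decays sharply with distance: long links cost more.
edges = [(i, j)
         for i in range(len(nodes))
         for j in range(i + 1, len(nodes))
         if random.random() < 0.3 * max(0.0, 1 - dist(nodes[i], nodes[j])) ** 8]

# Crude proxy for modularity: how many edges stay inside one spatial
# quadrant ('module') versus crossing between quadrants?
def quadrant(p):
    return (p[0] > 0.5, p[1] > 0.5)

within = sum(quadrant(nodes[i]) == quadrant(nodes[j]) for i, j in edges)
print(f"{within} of {len(edges)} edges stay within a single quadrant")
```

The point of the sketch is the direction of explanation: the clusters are an effect of the connection cost, not a response to any 'need' for modules.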

Modularity is like death. The networks that are modular tend to survive more. Networks that are not modular tend to survive less. But they are not modular in order to survive; that is not the reason they are modular. Rather, physical constraints on the number and extent of connections provide the explanation, and the entire process can be explained (and predicted!) in network terms, without the 'invisible hand' of evolution to guide us.

It matters how you talk about this. It matters, because network theories explain things in one direction, and non-network theories (which invoke things like wants and needs and desires) explain things in the other direction. But more, it matters because we're really talking about two very different theories here. They are what Kuhn would call 'incommensurable' theories - they don't even agree on what words mean and on what entities exist.

I think that Dron doesn't think it matters. I think that he thinks that these other theories, first of all, are useful, and second, on the basis of that utility (and their occasional references to networks) should be considered to be a part of connectivism. Here's what he writes:
We could use it [evolutionary theory] to describe how ideas come and go, theories develop, arguments resolve and much much more. It works, I think. Others have run with this idea and explored it in greater depth, of course, such as in neuro-evolutionary methods of machine learning and accounts of brain development following a neural Darwinist model
There are two things going on in this paragraph. First is the pragmatist suggestion that it doesn't matter whether or not a theory is true or right, so long as it works, and second, the references in passing to other people who have linked evolution and learning. Of the two, the former point is more important.

Yet I'd like to linger on the latter for a moment, because what they both have in common is the idea that learning networks are not (entirely) self-organizing networks. In neuro-evolutionary methods of machine learning, a mechanism for adding or removing connections is directly encoded into the neural-network software. The second is a mechanism whereby the selection of neuronal groups to perform specific functions is instructed by the environment.

In both cases, essentially, these amount to changing the learning mechanism by changing the physical substrate in which learning occurs. If you change the physical properties of a neuron - if, say, you make it more or less likely to respond to electrical stimulation -  then you change how the network is going to learn. It will be (at a minimum) more or less difficult for a signal to propagate, for a connection to form, for a structure of network connectivity to result.

I think it is uncontroversial to agree that, if there is an intervention by a third party, whether that party be a computer programmer or the hand of nature, then there will be a change in the way something learns. This is also true with students - if we feed them better, they learn better. If we give them Seroquel, there will be other changes. And we can call this learning, if we want. But it's not clear what we gain from that. Not all change is an instance of learning. The connectivist perspective, as I understand it, describes network-based changes in connectivity to explain learning. True, there is a different sort of learning that is created through the use of a sledgehammer. But that is not the sort of learning we are talking about.

Let me now turn to the point that it does not matter whether or not a theory is true. Dron takes a hard pragmatist stance here:
While probably true at some level, and providing a pretty good and fairly full explanation that is consistent with a connectivist account, this is only of practical import if we can use it to actually help people to learn - a theory of how to learn, not of learning itself. It actually doesn't matter at all even whether it is a full and complete explanation of all learning in all systems or not. 
As we have already established, I think, the evolutionary explanation for learning being proposed in this paragraph is not consistent with the connectivist account.

But for Dron, the defining characteristic of 'connectivism' is (or should be) not whether it is consistent with what I take to be a connectivist account, but rather, whether it can be considered part of a 'family of ideas' that can be used to help people learn - as he says, a theory of how to learn.
 
I think that you're going to answer this very differently depending on your theory of learning. If you are actually a connectivist, and have a basic understanding of human physiology, then you will understand that humans are always learning, because they are always forming and reforming connections, they are always strengthening and weakening neural pathways.

Moreover, there is very little that can be said about how to do this, most of which is outside the domain of education, and usually classified under the heading of physical fitness or food and nutrition. I think it actually is a useful part of connectivism to point out how physical health impacts learning, precisely because learning takes place in a physical environment and the formation of connections (as noted previously) requires energy and nutrients.

Almost certainly, however, this is not what Dron means when he says connectivism needs to provide a practical account of 'how to learn'. What he wants is a story, probably told in terms of wants and needs and desires, that talks less about how to learn (that is, less about how to form connective networks) and more about what to learn.

And truth be told, I've played that game too.

And Dron, not without justification, says it begs the question: 
Downes's theory and 'mine' (I make no claims at all that this is novel) both beg the question of how it might help people to learn and make their lives better. These accounts only have legs if we can put them to use, and doing so invokes models and purposes that are prior to the accounts themselves, very notably connectivism as a theory of how to learn itself, so we have not really addressed the issue at all.
I've always been pretty clear that my work has a purpose. I have had for years a vision statement on my website that states, in part, "I want and visualize and aspire toward a system of society and learning where each person is able to rise to his or her fullest potential without social or financial encumbrance," and so on.

But let's address this. Is it true that I want people to have better lives? Yes. Is it true that I engage in (among other activities) learning in order to do this? Yes. Does this account tell me anything about how to learn? No. And does it explain how I learn, or why I learn at all? No.

My statement of purpose is an expression, in avowedly folk-psychological terms, of who I am. It is a description of the result of my education and experiences to date. It is an abstraction over some very complex mental states, and as such it helps people predict what things I might say or what actions I might undertake in the future. But it is not a cause of my mental states (and certainly not of the mental states that resulted in its production in the first place).

So - why does this matter? Why can't we describe these things in whatever language we want? Why can't we just clump all these theories that sort of talk in similar ways into a 'family of ideas' and call it connectivism (or whatever)?

Well - let's return to the subject of death.

To stay exactly parallel to Dron's point: we can't talk about 'how to die' without "models and purposes that are prior to the accounts themselves."

But above I stated that from the network perspective, there isn't a purpose to death. It just so happens that species whose members die evolve, and species that don't die, don't.

But we can (and do) talk of how to die a meaningful death. And if we just use whatever theory we want to explain these things, with no regard to whether it is right or true, then there will be cases in which it is useful (even if literally false) to speak of a meaningful death.

We might even begin to use this language to teach people (if you will) 'how to die'. Of course, they don't really need instruction in how to die - every person will accomplish this feat eventually. But we can with our theory talk about right ways and wrong ways to die. We can talk of a "fitting end" using the same language and metaphors of evolution. We can speak of a person's death being 'meaningful' if this, that or the other condition is attached to it.

This is what Dron is saying about learning. The very idea of teaching someone how to learn presupposes that there must be some purpose to learning. But really, what he means is something like 'how to meaningfully learn'. And meaningful learning presupposes a purpose.

As a theory of living, the theory of 'meaningful death' is internally inconsistent. Yes, you can teach people how to live by teaching them that their death is meaningful. But such teaching can (and often does) result in people seeking death, or risking death, resulting in the ending of their lives. A consistent theory of living would truthfully reflect that a person's death has no meaning, and that the purpose of life (if the concept has any meaning at all) has everything to do with what is done during a lifetime, and very little to do with the manner in which it ends.

The same is true with a theory of learning.

From where I sit, a theory of learning which tells people 'how to learn' is essentially telling them that the way to learn is to not learn. It is a way of telling them to subsume their own best interests under those stipulated by some third party (where the authority of this third party is inevitably an appeal to a black box or magical mechanism).

It is, in the end, a way of saying that learning is not actually a network phenomenon at all.

Now of course we know that actual evolutionary theory is nothing like what has been described in this post at all. Actual evolutionary theory doesn't trade in needs and wants and desires - it doesn't even presuppose on the part of the species a will to live or any such motivations at all. It says, simply, that mutations occur, and that some species that mutate continue to exist, and other species do not continue to exist, and this process is what explains the diversity of life today.

If you just think of 'evolution' as a 'family of ideas' that brings together every thought and theory related to selection and survival and the rest, it's not a leap to start describing things like 'survival of the fittest' as a part of evolution, and not far from that to describing things like eugenics as a socially worthwhile activity. It's the sort of thing that happened to Nietzsche, it's the sort of thing that happened to Darwin, and it shows that actually getting the theory right matters.