Connectivism

Abstract

Connectivism is the thesis that knowledge is constituted of the sets of connections between entities, such that a change in one entity may result in a change in another, and that learning is the growth, development, modification or strengthening of those connections. This paper presents an overview of connectivism, offering a connectivist account of learning and a detailed analysis of how learning occurs in networks. It then offers readers an interpretation of connectivism, that is, a set of mechanisms for talking about and implementing connectivism in learning networks and, finally, in pedagogy.

Keywords: connectivism, education, learning, development, networks, pedagogy, assessment

Note: This article appears in Chinese in the February 2022 issue of Distance Education in China. There is an English version published in the Asian Journal of Distance Education.

Introduction

Connectivism is sometimes depicted, as it was by George Siemens (2004), as “a learning theory for the digital age”. And though the rise of digital technology was influential in the formation of connectivism, the theory is not a response to digitization, but rather, a way to use the insights derived from digitization to address long-standing issues in the fields of learning and development.

In this field, students and practitioners are typically presented with a set of ‘learning theories’ including, variously, theories based on behaviour, methods of instruction, transactional distance and interaction, construction of knowledge and meaning, activity theory, motivational theory, and more. Often these theories are presented as tools or ‘lenses’ through which the phenomena being studied may be interpreted.

But throughout there’s no consensus on what constitutes knowledge and learning beyond a few superficial taxonomies or characterizations, much less a common account of what constitutes ‘successful’ learning. Meanwhile, there is widespread dissatisfaction with teaching programs, testing programs, and education in general. Despite this, people learn; they learn a lot, and this phenomenon remains fundamentally unexplained.

The digital age has revealed the artificiality of traditional theories of instruction. As Watters (2021) documents, despite pretensions of personalized learning and meaningful education, the best even the most advanced educational technology can produce is little better than a mechanized, impersonal, standardized process: teaching machines based on Skinner boxes and behaviourism. Connectivism is offered as a response, not to digitization, but to the paucity of contemporary theory in education, and offers, not black boxes and metaphors, but an account of knowledge and learning based on the most current understanding of natural and artificial intelligence possible.

In this regard, connectivism does not offer itself as an alternative ‘learning theory’ or ‘lens’ with which to interpret phenomena in the process of doing the same sort of instructional activities teachers and researchers have always done. It instead offers an empirical basis for an understanding of teaching and learning, redefining how we think of knowledge, how learning occurs, what we are trying to do when we learn, and how learning is delivered and assessed.

What is connectivism?

What is learning?

What is connectivism? Connectivism has to do with learning, so we begin by asking: what is learning? Numerous answers have been presented over the years.

Gagne (1977), for example, says learning is a change in human disposition or capability. This is a theory that reflects a behaviourist approach as characterized by, say, Gilbert Ryle (1949). From a more cognitive perspective, Mayer (1982) talks about learning being a change in a person's knowledge. The cornerstone of Bingham and Connor's (2010) argument is that learning is a transformative process of taking in information. And we also have the sense of learning as acquisition, or acquiring knowledge and skills, from both Smith (1982) and Brown et al. (2014).

I don't think learning is any of that, and I think these theories are incorrect in some important ways.

They are what I call “black box theories”. What I mean by a black box theory is that it doesn't tell us exactly what is happening when somebody says, say, “learning is a change in disposition.” What that means is that the learner behaves differently after learning than they did before learning. But how does that happen? What makes that happen? We don't know. If somebody says “somebody acquires information”, again, there is something happening inside a black box. Did they put information in their head? I don't really think that's the case.

For me, connectivism is the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks. I've talked about and used that definition many times and today I'll talk quite a bit about what I mean by that.

A connectivist account of learning

So what does it mean, then, for a connectivist to talk about learning, when I say that learning is the formation of connections in a network? I mean this quite literally. This is not a metaphor. There are a lot of theories of learning which are based on metaphors (we'll talk a bit about that), but this is not a metaphor. When a person learns, or when something learns, a connection is physically created between two nodes or two entities in a network.

And what do I mean by a connection? Again, this is an actual description of a physical event, not a metaphor, not a black box. I say a connection exists between two entities when a change of state in one entity can cause or result in a change of state in the second entity. Learning is a thing that networks do, it's a thing that all networks do, and arguably a thing that only networks do, and it consists of the following:

  • the addition or subtraction of nodes in the network (that is, the entities that are connected to the rest of the network),
  • the addition or subtraction, or strengthening or weakening, of the connections between those nodes,
  • and changes in the properties of the nodes or the connections.
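
To make this concrete, here is a minimal sketch in Python. All names and values are illustrative, my own rather than drawn from any particular implementation; the point is only that learning, on this account, is nothing over and above changes to this structure:

    import random

    # A network is just nodes plus weighted connections between them.
    # Learning, on this account, is any change to this structure.
    nodes = {"a", "b", "c"}
    connections = {("a", "b"): 0.5, ("b", "c"): 0.2}   # weight = strength

    # (1) Plasticity: nodes can be added or subtracted...
    nodes.add("d")
    nodes.discard("c")
    connections = {pair: w for pair, w in connections.items()
                   if pair[0] in nodes and pair[1] in nodes}

    # (2) ...and connections can be added, removed, strengthened or weakened.
    connections[("a", "d")] = 0.1    # a new connection forms
    connections[("a", "b")] += 0.3   # an existing connection strengthens

    # (3) Properties of the nodes themselves can also change,
    #     e.g. the threshold at which a node changes state.
    thresholds = {n: random.uniform(0.5, 1.5) for n in nodes}

    print(nodes)
    print(connections)
    print(thresholds)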

The first two are known collectively as “plasticity” and sometimes you'll hear people talk about “neuroplasticity”, and what they mean is that in the brain we sometimes lose neurons and gain neurons, and the connections form and break between neurons as well.

In the case of the third, this may mean that the strength of a connection can vary. For example, it might take more or less energy for a change of state in one node to result in a change of state in another node. And it may mean that there can be changes in activation functions inside neurons, such that different patterns of energy input may result in different sequences of signals being sent.
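
As a small illustration of this last point, the sketch below (with made-up values) shows how two different activation functions inside a node turn the same pattern of input energy into different patterns of output signals:

    import math

    def step(x, threshold=1.0):
        # Fires only when the total input crosses a threshold.
        return 1.0 if x >= threshold else 0.0

    def sigmoid(x):
        # Fires gradually: weak input still produces a weak signal.
        return 1.0 / (1.0 + math.exp(-x))

    inputs = [0.2, 0.8, 1.5, 3.0]
    print([step(x) for x in inputs])               # [0.0, 0.0, 1.0, 1.0]
    print([round(sigmoid(x), 2) for x in inputs])  # [0.55, 0.69, 0.82, 0.95]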

Self-Organization

How does ‘connection’ as described above become something we would recognize as learning? It takes place through a process of self-organization. Entities that are connected, and that change each other's states, can through that fact alone become synchronized or organized in some way.

There is an experiment you can do for yourself to see how this phenomenon may occur. Arrange a set of metronomes on a piece of wood and place the piece of wood on two cans. Start the metronomes randomly, so they are not ticking at the same time. Then, as you watch, they slowly become synchronized, until they are all ticking at the same time. (Bahraminasab, 2007)


Figure 1 - The Metronome Effect (UCLA, 2013)

And the question is, what's happening here? Each one of these metronomes is connected to the others, and the way they're connected is through that piece of wood sitting on those two cans. Each time a metronome swings back and forth it pushes a little bit on the piece of wood, moving it back and forth and influencing the other metronomes, speeding them up or slowing them down until they all tick together.

This is an example of what is called ‘self-organization’: the idea that independent things, like the metronomes in this case, that are connected can by virtue of that connection alone become organized or synchronized by themselves, without needing any other intervention. It doesn't need direction, you don't need to organize the synchronization, there isn't a ‘head metronome’, nothing like that. And that's the sort of thing that I have in mind.
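
The metronome effect can be reproduced in a few lines of code. The sketch below is a Kuramoto-style model of coupled oscillators, with the coupling strength and frequencies chosen purely for illustration; each oscillator is nudged only by the shared coupling, with no ‘head metronome’ directing the process:

    import math
    import random

    # Each "metronome" is an oscillator whose phase is nudged by every
    # other oscillator through a shared coupling (the piece of wood).
    N = 5       # number of metronomes
    K = 1.5     # coupling strength (how much the wood transmits)
    dt = 0.01   # simulation time step
    freqs = [2 * math.pi * random.uniform(0.9, 1.1) for _ in range(N)]
    phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

    for _ in range(20000):
        coupling = [(K / N) * sum(math.sin(pj - pi) for pj in phases)
                    for pi in phases]
        phases = [(p + (w + c) * dt) % (2 * math.pi)
                  for p, w, c in zip(phases, freqs, coupling)]

    # Order parameter r: 0 = fully unsynchronized, 1 = ticking together.
    r = abs(sum(complex(math.cos(p), math.sin(p)) for p in phases)) / N
    print(f"synchronization r = {r:.2f}")   # typically ends close to 1.0

Starting from random phases, r climbs toward 1 as the oscillators fall into step: organization arising from the connections alone.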

The Siemens Model of Connectivism

The term ‘connectivism’ is attributed to George Siemens, who made an important contribution with his paper ‘Connectivism: A Learning Theory for the Digital Age’ (Siemens, 2004).

Let's look at George's principles for a bit. George Siemens wrote his paper on connectivism in 2004 and came up with these eight principles:

  • Learning and knowledge rest in a diversity of opinions. And that of course is to state that learning and knowledge exist in networks, not just in one place.
  • Learning is a process of connecting specialized nodes or information sources.
  • Learning may reside in non-human appliances. That's the idea that personal learning and social learning are all one part of one large learning network.
  • [The] capacity to know more is more critical than what is currently known. That's an important point. I won't talk a whole lot about it, but we both agree that learning isn't just the acquisition of content. Learning is about developing a way of seeing and interacting with the world.
  • Nurturing and maintaining connections is needed to facilitate continual learning.
  • [The] ability to see connections between fields, ideas, and concepts is a core skill. Now, I ask, what does that mean? Because I always ask what does that mean? And we'll talk about that later.
  • Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning activities.
  • Decision-making is a learning process. And I'll talk a bit more later on in this presentation about that.

There are different things that can learn because there are different kinds of networks, for example, neural networks (Kasabov, 2014) and social networks (Oddone, 2018). Connectivism is about learning both in neural networks and in social networks.



Figure 2 - Personal Learning and Social Learning 

One of the big differences between George Siemens and myself is the way we interpret learning. The way George would say it is that neural networks and social networks form one big network, that our knowledge consists of all the connections inside our mind and the way this is connected to everything else that's in the world, and the way it's connected with itself. So, our knowledge is partially in ourselves and partially in Twitter or in WordPress or Pinterest or our network of friends, etc.

By contrast, I keep these two networks separate. I think personal learning (neural network) is one network and social learning (social network) is another network, and they're two separate networks, but that they interact with each other through the process of perception. To me, perception is the way a neural network interacts with the social network, and communication or conversation is the way the social network is able to interact with the neural network. To describe this interface we'll talk a little bit later about the processes of emergence and recognition.

Connectivism As Distinguished From Other Learning Theories

Connectivism can be distinguished from other learning theories in a few important ways:

As non-instructivist: One way is to distinguish it from theories that are based on content, for example, instructivism or transactional distance theory in distance education. Connectivism says that the brain is not a book or a library. It's not an accumulation of facts and sentences and propositions that we bring in and organize and store like a whole bunch of stuff. The brain does not fill up with too much information. There's nothing resembling the pages of a book or the books of a library. There's nothing resembling text and sentences inside the brain. If you cut open a brain, or if you analyze a brain, you will not see any of that. All you will see is a network, and signals that go back and forth between the different entities in the network.

As non-cognitivist: What I mean by this is that connectivism is ‘non-cognitivist’. You've probably seen a lot about knowledge and learning that is based on cognitivist theories of mind or cognitivist theories of learning, where they talk about sensory memory and working memory and long-term memory, about encoding and constructing schemas and cognitive load.

All of this is drawn from the metaphor of the computer. According to the metaphor, the brain is an information processing system, very much like a computer. But the brain is not like that. There are ways we could interpret some cognitive phenomena using the metaphors of working memory, for example, or cognitive load, but these are not descriptions of processes; these are not descriptions of actual learning that occurs.

Also, we have theories that tell us that learning is about constructing knowledge or representing reality. A lot of constructivist theories tell us this. But what does that mean? Again, we run into this black box problem.

As non-representational: For the brain to work like this, it would have to be some kind of representational system, like a language or a logical system, or even a graphical image system, or some other symbol set, some other system of signs where the signs represent objects out there in the world. And there would have to be rules or mechanisms for creating and manipulating entities in that representational system.

And if we look at actual cognition, there's nothing like that happening. And it's interesting, because people are using this computer metaphor to talk about human learning when even artificial intelligence doesn't use this model anymore. This model characterizes what we used to call expert systems or symbol-based systems or rule-based systems of artificial intelligence. But in fact, almost all artificial intelligence has moved away from this model and now uses the model of neural networks.

Connectivism to me is a non-representational theory. That makes it quite different from other theories. What I mean by that is that there's no real concept of transferring knowledge, making knowledge, or building knowledge. Rather, learning and knowing are descriptions of physical processes that happen in our brains.

As based on growth: When we learn, when we know, what we're doing is more like growing and developing ourselves the way we might build a muscle. And you don't tell a muscle, “Okay muscle, now you will get bigger.” You don't ‘acquire’ new physical strength. That's not how it works.

And I don't know why people, when they're talking about learning, would think that it's different. We're working with a physical system, the human body, composed of physical properties, and in particular, a neural net that grows and develops based on the experiences it has, the activities it undertakes, and the results of those activities.

How Does Learning Occur?

Overview

How does learning occur? There have been numerous theories over the years, and in fact, there's a whole domain of learning theories about processes: there's Kolb (1984), for example, and there's Dewey's model of experiential learning. They're doing something similar: describing processes that include elements such as concrete experience, observation, theory, and deductive inferences. See Kolb (1984, p. 21) for some examples.
 
Figure 3 - Experiential Learning. Kolb (1984)

What these theories have in common is that they're about the processes that create learning. We could go on forever talking about the processes that create learning, and about whether this process is better than that process, but what we're doing is describing the conditions around the person rather than the person themselves. We're talking about the sorts of activities, like Gagne's nine events of instruction (Gagne, 1977), that are set up and organized by an instructor or a teacher, rather than what learning is like from the perspective of the individual.

Here's what learning is like from the perspective of the individual (Lucy Reading-Ikkanda in Wolchover, 2017):



Figure 4 - Learning Using Neurons
 
Now this is just one of many kinds of neural network (we'll talk about that), but it gives you an idea. Look at the way this network works: we have what we might call an input layer of neurons; these are connected to a second layer, which identifies edges. These are connected to a third layer that combines edges. These are connected to a fourth layer that identifies features.

Now this is the sort of processing, if we can call it processing, that happens in the visual cortex, located at the back of the head. Input comes in through your eyes, and the visual cortex takes all the input impacting your eyes and detects what Marr (1982) described as the two-and-a-half-dimensional sketch: edge detection and all of that.

These are our interpretations of what these neurons are doing. These neurons aren't actually saying, “Oh, I'm looking for an edge.” That's not what's happening. These neurons are simply receiving input and then sending output. That's all they're doing. They're not ‘intended to’ or ‘created to’ detect edges. That's the interpretation that we put on it. That's what we say they are doing, from our perspective.
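
To underline the point, here is an illustrative sketch of that layered flow, with made-up weights and inputs. Every neuron below does exactly the same thing, sum weighted input and send output onward; ‘edge detection’ and ‘feature’ are labels we apply from the outside, not anything present in the computation itself:

    import math

    def layer(inputs, weights):
        # Each neuron sums its weighted inputs and passes the total
        # through a squashing function; that is all a neuron "does".
        return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
                for row in weights]

    pixels = [0.0, 0.9, 0.8, 0.1]              # input layer: raw intensities
    w_edges = [[1, -1, 0, 0], [0, 0, 1, -1]]   # the layer we *call* edge detection
    w_features = [[1.0, 1.0]]                  # the layer we *call* a feature detector

    edges = layer(pixels, w_edges)
    features = layer(edges, w_features)
    print(edges, features)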

Signals, Interaction, Structures

Siemens discusses network characteristics in several places, but the terminology he uses should be more precise. So, for example, when he talks about networks having ‘content’, it is better to talk about networks having ‘signals’, because not all network interactions are meaningful in the way that ‘content’ suggests meaningfulness.

Additionally, Siemens discusses the ‘data’ or the ‘information’ in the brain or in our mind. These are very technical terms. A datum represents a fact. Information, strictly speaking, if we follow Dretske (1981), is ‘the reduction of possible states of affairs in the world from the point of view of the receiver.’ These terms again say more than we should be saying, and so again it is better to use the word ‘signals’.

As Siemens says, there are ‘interactions’. That is, connections form, and signals are sent through these connections. That's how we get interactivity: one entity sends a signal to another entity, and that signal has the potential of changing the state of that entity. This should be distinguished from the term ‘interactivity’ as used in a broader educational context, where it refers widely to informational exchanges such as conversations or discussions.

Siemens also refers to static and dynamic knowledge structures. These terms are vague as well, as they may be thought of as referring to cognitive or representational structures, such as models or frameworks, or to physical structures such as social networks or neural networks. In connectivism as precisely understood, by ‘knowledge’ we are not alluding to cognitive or representational structures, but rather, specifically and only to patterns of connectivity between entities in such physical structures.

Similarly, when we talk about dynamic structures, we are thinking of the way the neural network receives new signals. Siemens talks about ‘new information’ and ‘new data’, but it is more accurate to say ‘new (or incoming) sensory perceptions.’ According to Siemens, the connections within a network are strengthened by emotion, motivation, exposure, patterning, logic, and experience, are influenced by socialization and technology (Melrose, Park and Perry, 2013), and are even influenced by ‘self-updating nodes’. Again, more precision is necessary: these may refer to internal sensations (as Dickens [1843, p. 24] famously suggested, a dream might be caused by “an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of underdone potato”), to vertigo, sickness or nutrition, or even to nodes in recurrent networks that send signals in a loop to themselves.

It is easier to talk in terms of thoughts, beliefs, intentions, and mental contents as though they exist as stand-alone cognitive entities and are not reducible to physical causes – as Dennett (1987) would say, to take an ‘intentional stance’. And there is nothing wrong with the use of such terminology, provided that the connotation of a term is not imported into the explanation of the phenomena the term is used to describe. Just as we don't really mean a computer ‘thinks’ when we say “the computer thinks it is out of memory”, we don't really mean people have acquired new facts, or even that they have received new information, when they receive and respond to perceptual signals.

Learning Theories

There are ways to talk about learning without taking the intentional stance. We can describe the factors that govern the formation and strength of connections between neurons; these, more properly than descriptions of pedagogical practices, may be understood as ‘learning theories.’ We can, for example, talk about four major types of connectivity:

-    Hebbian connectivity, which is basically the principle that “what fires together wires together” (Hebb, 1949). If you have two neurons, and they both fire at the same time, and they both stay silent at the same time, they will eventually grow a connection between each other. That's the simplest form of network formation (see the sketch following this list).

-    Contiguity, which describes cases where entities are in some way located together. One example might be the ‘pool’ of nodes in an Interactive Activation and Competition (IAC) network, which compete against each other and are interconnected with negative weights (McClelland, 1981); another is the way the different cells in the eye are arranged beside each other, which informs how they're connected to the different layers of the visual cortex.

-    Back propagation, in which feedback propagates back through a network and adjusts connection weights. It's not clear that back propagation works in human neural networks, although we do say people learn from feedback. Certainly, back propagation is used in artificial neural networks. It was developed by Rumelhart and colleagues in the 1980s (Rumelhart et al., 1985) and for a long time was the most promising form of neural network learning.

-    Boltzmann connectivity, in which a network tries to achieve the most thermodynamically stable state by adjusting connection weights (Hinton, 2007). Boltzmann processes are simulated by a process similar to annealing metal by heating it up and cooling it down. A useful metaphor is a rock thrown into a pond: the molecules of water slosh and jostle, but eventually they all settle into a stable flat surface.
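
Here is a minimal sketch of the first of these: a simple rate-based Hebbian update, with made-up numbers, in which the connection between two nodes grows only when they fire together:

    eta = 0.1      # learning rate (illustrative value)
    weight = 0.0   # initial strength of the connection

    # Paired activity traces for two nodes: 1 = firing, 0 = silent.
    pre  = [1, 0, 1, 1, 0, 1]
    post = [1, 0, 1, 0, 0, 1]

    for a, b in zip(pre, post):
        weight += eta * a * b   # grows only when both fire together

    print(round(weight, 2))     # 0.3: strengthened by three coincident firings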

These are rough generalizations, and today's artificial neural networks are not classified by these learning theories in particular. A range of factors defines the different types of neural networks extant today, including topology, activation functions, and the features detected.

These core ideas can nonetheless be found in both major categories of artificial intelligence using neural networks: machine learning and deep learning. Machine learning involves some human intervention to identify or classify the data. Deep learning uses layers of neurons to identify and classify data on their own.

Properties of networks

A key question is: what makes these artificial neural networks work? In the field of artificial intelligence, the answer is arguably trial and error. Researchers develop models to solve specific problems and compete against each other at international conferences. These models allow researchers to vary network parameters, so that the models can be adjusted to produce the sort of output that's desired; they are then trained using predefined training sets.

So here, for example, is what might happen in one single neuron (Gavrilova, 2020):
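
The same computation can also be expressed as a minimal code sketch (the values here are illustrative, not drawn from Gavrilova): each incoming signal is multiplied by a connection weight, the results are summed along with a bias, and the total passes through an activation function to become the neuron's outgoing signal.

    import math

    # Illustrative values only: three incoming signals and their weights.
    inputs  = [0.5, 0.3, 0.9]   # signals arriving on incoming connections
    weights = [0.8, -0.2, 0.4]  # strengths of those connections
    bias    = 0.1

    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    output = 1.0 / (1.0 + math.exp(-total))   # sigmoid activation
    print(round(output, 3))                   # the signal sent onward: 0.69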