Connectivism and Artificial Intelligence in Education

 
Edited transcript of a recording made by Google Audio Recorder. Slides and audio available here.

There will be audio and video of the presentation available for you later. Oh, this is the audio being recorded. What's neat about that is it's automatically generating a transcript. So, that's our first use of artificial intelligence.

So today, I'm speaking on the topic of connectivism and artificial intelligence in education. This will require some background about what I mean by connectivism. And of course it'll involve talking about artificial intelligence. And then we'll bring these two together and talk about how they apply to learning and especially online learning. If you want to get more background about connectivism, I have a long post under the title Connectivism and the link is on the first slide.

So, when people talk about artificial intelligence in education, they often talk about using artificial intelligence to deliver education as though it were a teacher. This is something they've been talking about for years. Here are some examples: Knewton from 2015, for example, or Pearson, with a personalized learning application, or Khanmigo, which exists now and came out of Khan Academy.

There's been a lot of criticism of these, and I think justifiably so. The robot tutor, as it is sometimes called, can do a few things. For example, artificial intelligence can today write fairly good instructional materials. It can also answer questions for you. It can be used to evaluate student work. It can be used to create adaptive instruction. And it can even administer learning, being used, for example, in learning analytics.

But for many applications, it's not really being used, and I link here to an article by Tony Bates. He points out that it is not being used to offer alternatives or get students to ask questions. It's not being used to foster critical thinking or personal reflection. And indeed, we have to ask, how could it be used to do these things? It wasn't really designed to do these things. Artificial intelligence isn't going to replace a conversation between two of us.

In my own work in the past, I've drawn a distinction between what are called personal and personalized learning. The picture people supporting artificial intelligence give us is one of personalized learning. It's almost like they want artificial intelligence to do the learning for you. The AI or the instructor or the school defines the knowledge that you need to have. They create learning requirements. And in the end, they test you against those requirements. That sounds very similar to the sort of instruction that might happen with the teacher at the front of the classroom.

But the kind of learning that creates reflection and creates critical thinking is different. It's a kind of learning each learner has to do for themselves. It's based on their own interests and needs. The role of the instructor, or in this case the artificial intelligence, is to support you. It's not there to teach you and definitely not there to do the learning for you.

This idea of personal learning begins with learners defining their own projects, needs or interests. You learn from practice, not by remembering what you were told. The role of the teacher is to help and support, not to evaluate. The tools, especially online, are designed to build capacity and help you, not to teach you.



My colleague, George Siemens, and I combined these ideas to create the first massive open online course. You may have heard of the massive open online course. The way we did it, the course was massive in the sense that it could support many people. It was also open. It was open to anyone who was interested, and there were no fees. It was online, not a blended course, not a course wrapped around in-person instruction. And of course, it was a course.

The idea of the MOOC came from the idea of self-organizing learning communities. This was the core idea of Connectivism. One of the major insights of connectivism is that a self-organizing network is a perceptual system and a reasoning system combined. I'll talk about that.

When we built the MOOC we adopted four design principles from connectivism. I don't have time to talk about these in detail. They are based on how a network functions best, how a network can learn, how a network can shape itself. The four principles, as you can see on the slide, are autonomy, openness, diversity, and interactivity. To me, these inform the core design requirements for an online learning system. This includes any system that uses artificial intelligence.

So, what is connectivism? It is the thesis that knowledge is distributed across a network of connections, and therefore, that learning consists of the ability to build and use those networks. 

You might think of it this way. To know something is to be organized a certain way, to have a certain pattern of neural connectivity in your brain. This is not a metaphor. I mean this literally: actual connections between neurons. Our knowledge is physically instantiated in our brain. These connections grow and develop through practice and learning. So, to learn is to become organized in this way.

Another insight of connectivism is that any network can learn. And it's important to distinguish between personal learning inside a human brain and social learning, which takes place in a society. Just as a human learns by creating connections in the brain, a society or a community learns by creating connections between people and objects.


The theory of learning is essentially the theory of how these connections are formed. We've talked about different theories of learning over the years, different ways of creating connections between neurons. We see examples of these both in psychology and computer science, not to mention philosophy and physiology.

A common rule is known as a 'Hebbian' rule, named after Donald Hebb. It is 'what fires together wires together.' Two neurons that fire together will tend to form a connection between them. Another famous example from computer science, back-propagation, is based on feedback, where 'corrections create connections.' Again, not enough time to talk about this in detail.
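To make the Hebbian idea a little more concrete, here is a minimal sketch, in Python, of what a 'fires together, wires together' update might look like. It is purely illustrative; the function and variable names are mine, not taken from any particular library or system.

```python
import numpy as np

def hebbian_update(weights, activations, learning_rate=0.01):
    """Strengthen the connection between any two units that are active together."""
    # Outer product: weight w[i, j] grows in proportion to the co-activation
    # of unit i and unit j ("what fires together wires together").
    return weights + learning_rate * np.outer(activations, activations)

# Three units; units 0 and 1 fire together, unit 2 stays quiet.
weights = np.zeros((3, 3))
activations = np.array([1.0, 1.0, 0.0])
weights = hebbian_update(weights, activations)
print(weights)  # connections involving units 0 and 1 strengthen; those involving unit 2 do not
```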

What's interesting about the way a network can have knowledge is that the knowledge is distributed. Look at this example on the slide. We see the picture of a tree. The concept of a tree is represented by the red lines connecting different neurons. It's not located in one particular neuron. And the concept of a dog also is represented in this case by green lines. It uses the very same network.


So, these concepts are basically patterns in the network like this. Cat is a pattern. Dog is a pattern. It's not a word. It's not a picture. Just a pattern. These patterns are made up of connections between neurons.

We can have artificial neurons in software or human neurons in brains. You should be thinking right now that the way a human learns can be the way a computer learns. And the way a computer learns can be the way a human learns. It's not going to be exactly the same. But we can learn a lot of lessons.

So, in computer science, there is a theory called connectionism. All the artificial intelligence we are reading about today is based on connectionism. In other words, it's based on artificial neural networks. These neural networks do the characteristic things that networks do: for example, regression, which is like finding a line in data; detection of features in complex presentations; organizing into categories, or clustering; and prediction. These are also core functions of humans; when we look at how we learn and how we think, these are the sorts of things we do.
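To give a sense of the simplest of these functions, regression, here is a minimal sketch of a single artificial 'neuron' (one weight and one bias) finding a line in noisy data by gradient descent. It is a toy illustration under my own assumptions, not the actual machinery of any system mentioned in this talk.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.5 * x + 1.0 + rng.normal(0, 0.5, 100)   # noisy points scattered around a line

# A single "neuron": one weight w and one bias b, adjusted step by step
# to reduce the mean squared error until the hidden line is recovered.
w, b = 0.0, 0.0
learning_rate = 0.01
for _ in range(2000):
    error = (w * x + b) - y
    w -= learning_rate * (2 * error * x).mean()
    b -= learning_rate * (2 * error).mean()

print(round(w, 2), round(b, 2))   # close to the 2.5 and 1.0 used to generate the data
```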

There is a lot of technical explanation about how exactly these neurons are organized and form connections. Obviously, there isn't time to cover all of that today. But the emphasis again is that the learning theory describes how connections are formed.

So, what does it mean, practically? When I say that knowledge is a set of connections in the brain, it's the same as saying knowledge is pattern recognition. It's not based on sentences. It's not based on pictures. It's based on this pattern of activation.

Imagine you see your mother at the bus station. There are a hundred people at the bus station. You're able to pick out your mother from this crowd correctly, every time. Not by some rule. Not because somebody gave you a good description. You immediately recognize her.

What that means is that the pattern of neurons, the network of neurons, representing your mother becomes activated when she appears in your view. So, in learning, we want to create these patterns, and we know we have learned when we successfully create these patterns and can recognize things. To know is to recognize.

What's important here is that recognition depends on the recipient. Only you recognize your mother, or maybe some of your close friends do. A stranger will not recognize your mother. Why would they? They have no experience of her. For something to be recognized, we have to be ready to recognize it. I discuss that much more in the link on this slide.

We can play a game. Think about it, and maybe if you have thoughts type them in the chat. It's a fun game. What comes next? 

Bacon and...? Most people say eggs. But not everyone. We have different experiences. The meaning of the word bacon is based on how it is embedded in our network. We all associate it with something. But it might be different. I associate it with eggs. Here's a harder one:

Wayne...? Okay, we got Bruce Wayne. Okay, good. Bruce. Is everybody going to say Bruce? A dubious Bruce. Okay, now that's because you're all Mexican! In Canada we would say Wayne Gretzky. Yeah, and Jose, I see you say nothing. There is nothing. Yeah, you don't have an experience of Wayne, so you don't associate Wayne with anything. So, the next one:

American...? We have one person who said dream. We have a Green Day fan saying idiot. We have dream, dream, football, continent, dream. I picked Idol. It's a TV show here in Canada and the US. Okay:

Justin...? Yeah, some people say Bieber, especially in Latin America. I said Trudeau. Justin Timberlake, yeah, very good. And finally, the colloquial expression in English:

Tried and...? And not. Yeah, see, it's an English colloquial expression. People aren't so likely to get it. Tried and win, tried and failed, no. In English, the expression is tried and true, and it refers to something that is reliable.

What we just did here is exactly what a large language model does. And when they created ChatGPT, they were able to make the sentences longer. In other words, the computer would pay attention (I'm pointing to my screen) to a longer phrase.
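To see the connection, here is a toy sketch of the association game as next-word prediction: it simply counts which word follows which in a tiny made-up corpus and predicts the most frequent continuation. A real large language model does something far richer, paying attention to a much longer phrase through billions of connections, but the basic move, predicting what comes next from learned associations, is the same. The corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus, split into words.
corpus = "bacon and eggs . tried and true . bacon and tomato . tried and true".split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict(word):
    """Return the word most often seen after 'word' in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("and"))    # 'true' (seen twice, versus 'eggs' and 'tomato' once each)
print(predict("bacon"))  # 'and'
```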

So, what does all that mean for us?

We are familiar with two types of traditional knowledge: quantitative, which is to count things, and qualitative, which is to describe things. The third type of knowledge is connective knowledge, which is knowledge created in these networks.

We say that connective knowledge is emergent. You see a pattern in the world and it's like that pattern emerges from the data. If you look at the pattern of the woman, it consists of round dots. And only humans have the experience of women, so only humans perceive the woman in that pattern. We perceive it by recognizing it, by having previously trained our neural network.


So our detailed knowledge of artificial intelligence can teach us a lot about how humans learn. Knowledge, which is the organizing of neural connections, comes from experience. We can learn when people tell us things, but not as much as we learn from practical experience. This is well known; Kolb talked about this in 1984.

The kind of learning we get from experience is exactly the kind of teaching artificial intelligence does not do: active learning, for example; hands-on participation in actual practical activities; our own reflection and thoughts about our experiences; our own evaluation of whether or not to believe something; our actual practice of trial and error in the world.

Artificial intelligence does none of these things for us. In order to learn, we have to do them ourselves. It's the only way to create a neural network. Often, we create this kind of learning through learning communities. This was described in the philosophy of science by people like Thomas Kuhn. And in the domain of education, Etienne Wenger-Trayner talked about communities of practice.

The aspects of communities of practice are the same sorts of things we find in experiential learning: engaging in activities, building relationships, creating resources. We are doing as a community the same thing a brain does as a collection of neurons. We are creating a social network. It's a core aspect of connectivism that participation in a social network helps create a neural network.

So, here are some aspects of communities of practice that Wenger-Trayner talked about. We also find our friend Tony Bates talking about these. These are some of the practical day-to-day things that we can do in learning to stimulate neural networks: sharing expertise, for example; working on projects together; participating in group discussion.

Notice that the role of artificial intelligence here cannot be to instruct us. We are looking for support, not instruction. We are thinking of empowering communities using artificial intelligence. Some examples: co-designing an AI system with local communities; collecting community data using artificial intelligence; or even things like automated translation. It's not ready for prime time just yet, but it will be.

Following from massive open online courses, and knowing what we know about experience and personal learning, people in the online learning community developed the concept of personal learning environments (PLE). The idea of a personal learning environment is to create a way for us to build our own network, both to organize our knowledge, our friends, and our tools, and also to connect to other people more widely, in a manner that supports autonomy and diversity in a way that social networks like Twitter do not.


This is the prototypical model of a personal learning environment developed by Scott Wilson. The individual learner is at the center and is connected to services and people. It's like the individual person is a neuron in a network. And what we're trying to do is create and navigate through these connections.


This is a model of my own approach. The logic moves from left to right. The input comes from social networks, blogs, articles, and other sources. There's a place for me to work and create and develop things. And then there are ways that I can share what I have created with people in the network.



So this is a screenshot of the application I have been building. And my use of artificial intelligence here is to support different functions of the personal learning environment.

For example, on the left hand side are links to social networks I belong to. I can also search, and I can use artificial intelligence to help me search. And artificial intelligence also automatically translates for me the contents of articles I read. So, I'm globally connected.

In the second screen, we can see full conversations. I have an artificial intelligence function that can summarize those conversations.

This is my writing area in the third pane. I have the option here to use artificial intelligence to create a content template based on my description. This helps me structure my thoughts and reflect critically on what I've been reading.

Then, finally, on the right hand side is where I share what I've created with friends in my network. All together, this personal learning environment is helping me make connections with people. I even have an option in my creation window to co-create with other people. So, it's helping me build; it's helping us work together to create practical things.

And this is just touching the surface of what's possible here. I'm sure you can think of different places I can get data on the left, and different things I could do to create new types of content in the middle to send to the right. For example, I could connect to open scientific data, my artificial intelligence could draw a graph of that data, I could insert that graph into an article I'm writing with my friends, and then we could publish it in a journal or magazine. Many, many more possibilities.
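As one illustration of that flow, here is a minimal sketch of pulling some open data, letting the tools draw a graph, and dropping it into a draft article to be shared with co-authors. The URL, column names, and file names are placeholders I made up for this example; they are not real sources and not part of my application.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Pull some open data (hypothetical URL and columns, purely for illustration).
data = pd.read_csv("https://example.org/open-data/temperatures.csv")

# Let the tools draw the graph for me.
data.plot(x="year", y="temperature")
plt.savefig("temperature_trend.png")

# Drop the graph into a draft article that could be shared back out for co-authoring.
with open("draft_article.md", "a") as draft:
    draft.write("\n![Temperature trend](temperature_trend.png)\n")
```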

So, when we're thinking about artificial intelligence in education, we should be thinking not about what artificial intelligence can do for us so much as about the new kinds of skills we need in order to learn from experience.

Some people have talked about data literacy and AI literacy, for example. This is other work that I've been doing: different models of data or AI literacy based on stewardship of data, for example, or information literacy (which would include critical thinking), or social engagement as a data literacy.

Again, there are many avenues of investigation here. So, as you go forward thinking about artificial intelligence and learning, what does connectivism teach us? It teaches things like considering the broader uses of artificial intelligence. It teaches things like having our own voice as we talk to each other. It gives us tools for play and for experimentation. And it gives us a way to try new things and test things out.

The main thing is that artificial intelligence is not a tool to teach people. It is a set of tools we can use in order to learn, and our role as educators in the future will shift from teaching to helping people learn for themselves.

And that's everything I have. Thank you.


 
