In this post I will focus only on the first part - and indeed, only on one sentence in the first part.
The first part is designed to be very broad, as a "view of the universe as a self-organizing information system that can, in some sense, be seen to have knowledge and to change through co-adaptation of its parts."
Let's look at what is included in this sentence:
- It's a 'view of the universe'
- that is self-organizing
- that is an information system
- that can be seen to...
- have knowledge
- change through co-adaptation of its parts
First of all, connectivism isn't a 'view of the universe'. If we have to use this terminology at all, it is 'a way the universe can be viewed'. Connectivism, at least in my view, isn't committed to a particular ontology, isn't committed to a form of realism, isn't a view of something, but is a way of seeing. (George Siemens and I have actually had long debates on this point, and it was a frequent topic of discussion during the CCK online courses.)
The next statement is that 'it' (the universe) is self-organizing. Taken as a simple statement in itself, it is so sweeping as to be meaningless.
The key turn in this sentence occurs in the next phrase, the one depicting the universe as an information system. I'm not sure anything is an information system. I'm pretty sure that the universe is not.
We need to be clear about what we mean by an information system. We need to be clear about what we mean by information. It's easy to be muddled by the concept.
We typically think of information as mediating between data and knowledge; we've often read of the DIKW pyramid (or hierarchy, or chain - pick your metaphor). It is credited to Russell L. Ackoff, who is also a key figure in the study of systems.
According to Ackoff, a system must be either variety-increasing or variety-decreasing. A tuning knob on a radio reduces variety by allowing the user to select one station. The stereo system increases variety by producing sound no single speaker could have produced on its own. To learn, on this model, is to increase the efficiency of these choices relative to goals or outcomes.
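Ackoff's distinction can be made concrete with a toy sketch. Here 'variety' is just the number of distinct states a system can present; the station names and speaker states below are invented purely for illustration:

```python
# Variety = number of distinct states a system can present to a user.
stations = {"CBC", "BBC", "NPR", "RFI"}

def tune(stations, choice):
    """A tuning knob is variety-DECREASING: it narrows many possible
    outputs down to the one the user selects."""
    return {choice} if choice in stations else set()

# A stereo pair is variety-INCREASING: the combined system presents
# state combinations that neither speaker can produce on its own.
left = {"silence", "vocals"}
right = {"silence", "guitar"}
combined = {(l, r) for l in left for r in right}

print(len(stations), len(tune(stations, "NPR")))  # 4 -> 1
print(len(left), len(right), len(combined))       # 2, 2 -> 4
```

The tuner collapses four states to one; the pair of two-state speakers yields four joint states. On Ackoff's model, learning would mean getting better at making these selections relative to a goal.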
On this model, then, information plays a key role as something that enables us to reduce the variety of possible states of affairs. Reducing the variety decreases the likelihood of making wrong choices, and hence, improves learning. This function of information forms the basis of information theory. We think of information as being like 'facts' - but what is a 'fact' other than the resolution of something from anything?
We can read of the formal properties of information in authors like Fred Dretske (whom I've cited on numerous occasions before). Information is the reduction in the number of possible states of affairs relative to a person or observer. A signal ('data', on this model) presents a state of affairs: "Mary's dress is red." If you already knew Mary's dress was red, you received no information; if you knew it was either red or blue, you received some information; if you didn't know what colour it was, you received a lot of information.
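This reduction-of-possibilities picture lines up with Shannon's measure of information, where the amount received depends on how many equally likely states the signal rules out. A minimal sketch (the colour counts are my illustrative assumptions, not Dretske's numbers):

```python
import math

def surprisal_bits(states_before: int, states_after: int) -> float:
    """Information received, in bits, when a signal narrows a set of
    equally likely states of affairs from states_before to states_after."""
    return math.log2(states_before / states_after)

# "Mary's dress is red", received against different prior knowledge:
print(surprisal_bits(1, 1))   # already knew it was red: 0.0 bits
print(surprisal_bits(2, 1))   # knew it was red or blue: 1.0 bit
print(surprisal_bits(16, 1))  # any of 16 colours possible: 4.0 bits
```

The same signal carries no information, some information, or a lot, depending entirely on the receiver's prior state - which is exactly the observer-relativity Dretske builds into the definition.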
So what role does 'information' play in connectivism?
First of all, connectivism does not require the concept of information in order to explain learning and the formation and breaking of connections that result in learning. Information is, at best, an overlay (what we would formally call an 'interpretation') of connectivism.(*) It's a way of talking about what we think the numbers and the processes 'really' represent. But from the perspective of connectivism (again, at least as I see it) the numbers and processes operate independently of representation.
Second, the story of knowledge and learning that draws on information theory in this way requires two separate 'black boxes' or bits of magic in order to work.
It requires, first, a sense of meaning or purpose that operates independently of the network of connected entities. We discussed this in the previous post, on death and dying.
And it requires, second, independent knowledge of the possible states of affairs in the world in order to identify, measure, and ascribe a causal role to information.
We can see why these are problematic. If the universe is one of these things, we would need a perspective from outside the universe in order to understand how the whole system works at all.
But more practically, it leaves terrible gaps in our understanding of how a person - or anything - learns at all. We need to know - somehow a priori - how our senses actually do present us with information, and we need to know, again a priori, what our meaning and purpose in life is. Otherwise we cannot explain why and how we learn.
I don't see how this story can be a part of connectivism. Connectivism is a story about how networks self-organize in a fashion that does not presume any prior conditions such as knowledge about states of affairs in the world or about the meaning and purpose of life and death. The introduction of these elements simply introduces the sort of intractable problem connectivism was intended to solve.
Finally, let's look at that last clause: "be seen to have knowledge and to change through co-adaptation of its parts."
In addition to being explicitly an interpretation ('be seen to'), this phrase introduces two other elements that are not a part of connectivism.
First, it says we 'have knowledge'. I don't agree (and I don't think George Siemens agrees either) that knowledge is something we 'have'. This is a throwback to the old 'banking theory' of education, where knowledge is something we accumulate, like a possession.
We have repeatedly said that in connectivism, to 'know' is to be organized in a certain way (and that learning is to become organized in that way, by breaking and forming connections). Knowledge is not something we 'have'. Knowledge is something we are.
Second, it says we 'change through co-adaptation of its parts'. This is a reference back to systems theory, and in particular, adaptive systems (yes, just like the ones they're trying to build inside learning management systems).
In systems theory, adaptation is another one of those black boxes. Here's Ackoff: "A system is adaptive if, when there is a change in the environment and/or internal state which reduces its efficiency in pursuing one or more of its goals which define its function(s), it reacts or responds by changing its own state and/or that of its environment so as to increase its efficiency with respect to that goal or goals."
The two unknowns here are, first, how does a system change its own state, and second, how does this change align with a goal or goals? Again, if this is what we think connectivism is, we've introduced many more problems than we've solved.
'Networks' and 'Complex adaptive systems' are two very different things. Saying that the one is the same as the other is, I think, a misrepresentation of both.
(*) I don't want to elide what is an important point. I've talked about the question of interpretations on numerous occasions - my most complete discussions can be found here and here.