On Death and Dying: Evolution and Networks
As he makes clear in a comment on his recent post, part of Dron's motivation for depicting connectivism as a 'family of ideas' is so that he, too, can be considered a connectivist. In yet another paragraph from that post, he writes,
I can come up with a compelling theory of learning in social networks too, that equally explains learning in brains, and that I think fits with a connectivist account like a glove and that is incommensurate with other theories for much the same reasons.
Now of course what's interesting is the phrasing 'learning in social networks', which is in one sense vague, and in another sense incomplete. It is vague because he doesn't distinguish (and I don't think he wants to distinguish) between learning by people in social networks, and learning by the social network (i.e., by society) in social networks. And it is incomplete because, of course, connectivism isn't simply a theory about 'learning in social networks', it is a theory about learning in networks generally, including most importantly neural networks.
Of course, there is learning that happens by neural networks that happen to be in social networks. The 'connectivism as extended brain' idea speaks to that a bit (in what I recently called the 'Siemens answer'). But this is not the whole of connectivism nor even essential to it. And insofar as it does constitute a part of connectivism it is a result of the underlying theory of how networks learn, and not the explanation for it. But I digress.
Let's look at the 'compelling theory' Dron proposes (we don't know whether this is a serious proposal or something set up as an example, but it doesn't matter).
In 'Out of Control', Kelly makes the thought-provoking observation that, in any system that evolves, death is the best teacher. (I added the link.)
Let's linger on the idea of death as a teacher for a moment.
Like most people, I suppose, I have thought long and hard on the prospect of death, mostly about how to avoid it, but at the same time living my life (as Becker suggests) in the denial of death. A big part of anyone's understanding of death is the need to ascribe some sort of purpose or meaning to it. A few years ago, I came to my own understanding of death and wrote a short post (understated so as not to alarm anyone - and now I can't find it again).
It was this: humans die so that humanity can evolve. If humans did not die, then humankind now would be the same as it was in some indeterminate past. Death does not have to happen - some forms of life do not die - but if entities do not die, they do not evolve. So in a certain sense, my own impending death is the sacrifice I make in order to ensure the continual advancement of the species. In the Darwinian world of tooth and claw, call me a war hero, if you like.
But here's the point: we don't die in order to enable the species to evolve. Quite the opposite. If we look at individual motivations and ambitions, what we observe is that most of us are trying not to die (or, at least, living as though we'll live forever). Death is, most crucially and importantly, involuntary. It's not something we do in order to achieve a result. It's not even something that the species does in order to evolve. It's just what we do. There is no purpose.
I can live with that.
Now let's read what Dron says about death and evolution:
From this perspective, species of organism (or connections in brains, or more complex entities like cities, or memes in social networks, or technologies) 'learn' through natural selection, either evolving to be fit to survive or dying, in a complex web of interactions in which other similar and dissimilar entities are in competition with them and also learning, which leads to higher levels of emergent behaviours, increased complexity and changes to the entire environment in which it all happens.
This isn't usually what we mean by 'learn' so we're going to have to unravel what is in fact a very dense paragraph.
- First, at least the following things 'learn': species of organism, connections in brains, complex entities, cities, memes
- They 'learn' through 'natural selection'
- 'Natural selection' is 'either evolving to be fit to survive or dying'
- other entities are in competition with them (where 'them' is the 'species that learn')
- other entities are also learning
- higher levels of emergent behaviours
- increased complexity
- changes to the entire environment in which this happens
Take, for example, the very first point, the list of things that learn: species of organism, connections in brains, complex entities, cities, memes. Now, the core of connectivism is that networks learn. If you don't agree with this minimal point it's hard to see that what you are advocating is a form of connectivism.
So, some of the things Dron lists are networks. Arguably, complex entities and cities are networks. But what about the rest? Is a species a network? We define a species in terms of its essential properties. A network may be composed of members of a single species, but the species itself isn't a network. It is a way we categorize (or, literally, create, through arbitrary definitions (unless you're Kripke)) a type of entity.
Is a connection a network? No, it is not (except maybe in the trivial sense of a one-connection network). A connection does not 'learn' - a network learns by means of forming connections. (And here we need to be careful - a connection could be a thing that learns - but the definition of a connection is not as a thing that learns, and probably the bulk of connections in the world are not things that learn).
Is a meme a network? A meme is a "contagious idea that replicates like a virus, passed on from mind to mind." I talk about them here. If we wanted to be technical, we could think of a meme as a type of content that is passed from entity to entity in a network. It is not the sort of thing that learns.
So - maybe Dron is just being loose with his text. Or maybe it's not clear to him what sort of entities are entities that learn. Networks learn. Species, connections and memes do not learn.
I think, maybe, he is also not clear on why networks learn. It has nothing to do with the properties of the entity in question at all, except that it is a network. Indeed, our characterization of any given network as an 'entity' is arbitrary and after the fact (which is what allows, for example, Siemens to talk about a 'person' as an entity that extends beyond a physical form - because what's important is the network we are talking about, not (say) the fact that it has livers and intestines (or beliefs and desires)).
The next sentence: "they learn through natural selection." What can he mean by this?
Maybe he means that species and other types of entities learn through natural selection. But a 'type' is not a thing that learns. It's logically (though admittedly not conceptually) the same as saying 'the colour red is a thing that learns'. Abstract categorizations do not learn. Networks learn.
But what does it mean to say a network learns through natural selection? We can think of this in a macro sense or a micro sense. In the macro sense, we are to think of the network in competition with other (similar?) networks, all red in metaphorical tooth and claw. In the micro sense, we can think of the creation of and breaking of connections as a form of natural selection.
There is probably a story to be told in the micro sense. But 'evolution' isn't that story; it's the specific mechanisms associated with the making and breaking of connections, from the Hebbian 'use it or lose it' model to complex cascades of connection-forming and breaking mechanisms.
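That micro story can be illustrated with a toy sketch. This is a deliberately minimal, hypothetical model (the learning rate, decay rate, and firing probabilities below are arbitrary illustrative choices, not any standard parameterization): a connection between units that fire together strengthens, while every connection decays a little each step, so an unexercised connection withers - 'use it or lose it' with no outside purpose required.

```python
import random

random.seed(0)

def hebbian_step(w, pre, post, eta=0.1, decay=0.01):
    # Strengthen the connection when the two units fire together,
    # and let every connection decay a little each step ('use it or lose it').
    return w + eta * pre * post - decay * w

w_used, w_unused = 0.5, 0.5
for _ in range(200):
    a = 1 if random.random() < 0.9 else 0   # two units that usually fire together
    b = a
    c = 1 if random.random() < 0.9 else 0   # two units that rarely coincide
    d = 1 if random.random() < 0.1 else 0
    w_used = hebbian_step(w_used, a, b)
    w_unused = hebbian_step(w_unused, c, d)

print(w_used > w_unused)  # the exercised connection ends up stronger
```

Nothing in the update rule refers to survival or fitness; the difference between the two connections falls out of local activity alone.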
Unfortunately, Dron appears to be talking about the macro sense, in which he is describing the network as a whole, as an entity, which is in competition with other entities. So the competition between these entities is what in some way causes (or requires?) the network to learn. And so some are 'fit to survive' while others die.
My question is - why would we invoke some external phenomenon - 'evolving' - to explain why we learn? Is it to explain why we die? But the explanation for why we (as a species, as a network) die isn't that we were not "fit". It's the other way around. We evolve because we die; we do not die because we failed to evolve.
The same sort of logic applies to a discussion of learning. The things that learn (qua the things that evolve, in the micro sense) are the things that live. They do not learn in order to live, any more than a species kills off its members in order to evolve. Rather, we live (as a species, as an individual human) because we learn. Why do we survive? It's not because we were better or more fit to live. It is because we are the kind of things that live. The other kinds of things - the things that, mostly, do not learn and do not evolve - are the kind of things that die. That's why they're not around today.
The same logic is true of humans. We do not learn in order to compete with other humans, or in order to avoid dying, or any of the rest of it. We learn because we are things that learn. And what that means is, we are networks (not 'we have networks' or 'we use networks') (and yes, I'm being a bit fuzzy by what I mean by 'we' here, but we'll let it slide).
In a similar manner, the phrase "higher levels of emergent behaviours" reflects the same confusion. What is a 'higher level of emergent behaviour'? No doubt there is a whole story here, but it will again belong to the same category as the story about 'fitness to survive' - a confusion of cause and effect. At best, at most, an evaluation of a 'higher level of emergent behaviours' can be nothing more than an interpretation, an abstraction after the fact (a form of what I would, in other writings, call recognition of some or another entity or type of entity, embedded within an artificial counting or value system).
Dron pushes ahead:
Throw in a bit more evolutionary theory such as the need for parcellation between ecosystems along with some limited isthmuses and occasional disruptions, and we are heading towards a theory that sounds to me like a pretty plausible theory of networked learning in both social networks and brains, as well as other self-organizing systems.
I'm not sure exactly what Dron means by 'parcellation between ecosystems' but from the few sources that can be found associating the two terms it appears to refer to the development of modularization in complex entities. Modularity is described (p.325) as a mechanism necessary to avoid increasing pleiotropy in organisms - 'pleiotropy' occurs when one mutation impacts many unrelated traits.
Modularity is a trait found in many networks. The human neural system, for example, is modular - we can identify distinct parts to the human brain. But does modularity arise out of some sort of evolutionary 'need'? That would pretty much require us to rewrite network theory. In any case, modularity appears to arise from much more mundane circumstances - it's the result of the physical 'cost' to maintaining networks: the energy and resources required to create longer and greater numbers of connections between entities. It's a reflection of the fact that in nature (if not in mathematics) networks are not scale-free.
Once again, a dialogue in terms of 'needs' should actually be represented as a dialogue in terms of networks. Networks do not 'need' modularity, but as they grow, in non-scale-free conditions, they become modular. The result of increased modularity is reduced pleiotropy, which in turn limits the range and impact of mutations (all other things being equal).
Modularity is like death. The networks that are modular tend to survive more. Networks that are not modular tend to survive less. But they are not modular in order to survive; that is not the reason they are modular. Rather, physical constraints on the number and extent of connections provide the explanation, and the entire process can be explained (and predicted!) in network terms, without the 'invisible hand' of evolution to guide us.
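A toy simulation can make the point, under the stated assumption that a connection costs more to maintain the longer it is (the node count, cost parameter, and 'local' threshold below are arbitrary illustrative values): when the probability of a connection surviving falls off with distance, the edges that remain are overwhelmingly short-range - the raw material of modular structure - without any 'need' for modularity being encoded anywhere.

```python
import math
import random

random.seed(1)
N = 60
# Nodes scattered at random positions on a line; a 'module' here is just a
# neighbourhood of nearby nodes.
pos = [random.random() for _ in range(N)]

def connect(i, j, cost=8.0):
    # A connection survives with probability that falls off with distance,
    # standing in for the energy cost of maintaining long connections.
    return random.random() < math.exp(-cost * abs(pos[i] - pos[j]))

edges = [(i, j) for i in range(N) for j in range(i + 1, N) if connect(i, j)]
local = sum(1 for i, j in edges if abs(pos[i] - pos[j]) < 0.2)
print(round(local / len(edges), 2))  # the large majority of edges are short-range
```

The clustering falls out of the cost term alone; remove the cost (set it to zero) and the same code produces a homogeneous, non-modular network.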
It matters how you talk about this. It matters, because network theories explain things in one direction, and non-network theories (which invoke things like wants and needs and desires) explain things in the other direction. But more, it matters because we're really talking about two very different theories here. They are what Kuhn would call 'incommensurable' theories - they don't even agree on what words mean and on what entities exist.
I think that Dron doesn't think it matters. I think that he thinks that these other theories, first of all, are useful, and second, on the basis of that utility (and their occasional references to networks) should be considered to be a part of connectivism. Here's what he writes:
We could use it [evolutionary theory] to describe how ideas come and go, theories develop, arguments resolve and much much more. It works, I think. Others have run with this idea and explored it in greater depth, of course, such as in neuro-evolutionary methods of machine learning and accounts of brain development following a neural Darwinist model.
There are two things going on in this paragraph. First is the pragmatist suggestion that it doesn't matter whether or not a theory is true or right, so long as it works, and second, the references in passing to other people who have linked evolution and learning. Of the two, the former point is more important.
Yet I'd like to linger on the latter for a moment, because what they both have in common is the idea that learning networks are not (entirely) self-organizing networks. In neuro-evolutionary methods of machine learning, a mechanism for adding or removing connections is directly encoded into the neural-network software. The second is a mechanism whereby the selection of neuronal groups to perform specific functions is instructed by the environment.
In both cases, essentially, these amount to changing the learning mechanism by changing the physical substrate in which learning occurs. If you change the physical properties of a neuron - if, say, you make it more or less likely to respond to electrical stimulation - then you change how the network is going to learn. It will be (at a minimum) more or less difficult for a signal to propagate, for a connection to form, for a structure of network connectivity to result.
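The first of those cases can be made concrete with a deliberately crude sketch (everything here is an illustrative assumption - a two-parameter 'network', a hand-written mutation rule, a toy target function - not any particular library's method). Notice where the evolutionary mechanism lives: in the program, written by the programmer, not in the network itself.

```python
import random

random.seed(2)

def target(x):
    return 2.0 * x + 1.0        # the function the toy 'network' should fit

def loss(w):
    # Mean squared error of a tiny one-weight, one-bias network: y = w[0]*x + w[1]
    xs = [i / 10 for i in range(11)]
    return sum((w[0] * x + w[1] - target(x)) ** 2 for x in xs) / len(xs)

def mutate(w, scale=0.3):
    # The change mechanism lives here, in the software, written by the
    # programmer - not in the network itself.
    return [wi + random.gauss(0, scale) for wi in w]

best = [0.0, 0.0]
for _ in range(500):
    child = mutate(best)
    if loss(child) < loss(best):    # selection: keep whichever network is fitter
        best = child

print(round(loss(best), 4))
```

The 'learning' here is driven entirely by an external loop of mutation and selection imposed from outside, which is exactly the sense in which such systems are not self-organizing.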
I think it is uncontroversial to agree that, if there is an intervention by a third party, whether that party be a computer programmer or the hand of nature, then there will be a change in the way something learns. This is also true with students - if we feed them better, they learn better. If we give them Seroquel, there will be other changes. And we can call this learning, if we want. But it's not clear what we gain from that. Not all change is an instance of learning. The connectivist perspective, as I understand it, describes network-based changes in connectivity to explain learning. True, there is a different sort of learning that is created through the use of a sledge hammer. But that is not the sort of learning we are talking about.
Let me now turn to the point that it does not matter whether or not a theory is true. Dron takes a hard pragmatist stance here:
While probably true at some level, and providing a pretty good and fairly full explanation that is consistent with a connectivist account, this is only of practical import if we can use it to actually help people to learn - a theory of how to learn, not of learning itself. It actually doesn't matter at all even whether it is a full and complete explanation of all learning in all systems or not.
As we have already established, I think, the evolutionary explanation for learning being proposed in this paragraph is not consistent with the connectivist account.
But for Dron, the defining characteristic of 'connectivism' is (or should be) not whether it is consistent with what I take to be a connectivist account, but rather, whether it can be considered part of a 'family of ideas' that can be used to help people learn - as he says, a theory of how to learn.
I think that you're going to answer this very differently depending on your theory of learning. If you are actually a connectivist, and have a basic understanding of human physiology, then you will understand that humans are always learning, because they are always forming and reforming connections, they are always strengthening and weakening neural pathways.
Moreover, there is very little that can be said about how to do this, most of which is outside the domain of education, and usually classified under the heading of physical fitness or food and nutrition. I think it actually is a useful part of connectivism to point out how physical health impacts learning, precisely because learning takes place in a physical environment and the formation of connections (as noted previously) requires energy and nutrients.
Almost certainly, however, this is not what Dron means when he says connectivism needs to provide a practical account of 'how to learn'. What he wants is a story, probably told in terms of wants and needs and desires, that talks less about how to learn (that is, less about how to form connective networks) and more about what to learn.
And truth be told, I've played that game too.
And Dron, not without justification, says it begs the question:
Downes's theory and 'mine' (I make not claims at all that this is novel) both beg the question of how it might help people to learn and make their lives better. These accounts only have legs if we can put them to use, and doing so invokes models and purposes that are prior to the accounts themselves, very notably connectivism as a theory of how to learn itself, so we have not really addressed the issue at all.
I've always been pretty clear that my work has a purpose. I have had for years a vision statement on my website that states, in part, "I want and visualize and aspire toward a system of society and learning where each person is able to rise to his or her fullest potential without social or financial encumberance," and so on.
But let's address this. Is it true that I want people to have better lives? Yes. Is it true that I engage in (among other activities) learning in order to do this? Yes. Does this account tell me anything about how to learn? No. And does it explain how I learn, or why I learn at all? No.
My statement of purpose is an expression, in avowedly folk psychological terms, of who I am. It is a description of the result of my education and experiences to date. It is an abstraction over some very complex mental states, and as such helps people be able to predict what things I might say or what actions I might undertake in the future. But it is not a cause of my mental states (and certainly not of the mental states that resulted in its production in the first place).
So - why does this matter? Why can't we describe these things in whatever language we want? Why can't we just clump all these theories that sort of talk in similar ways into a 'family of ideas' and call it connectivism (or whatever)?
Well - let's return to the subject of death.
To stay exactly parallel to Dron's point, we can't talk about 'how to die' without some models and purposes that are prior to the accounts themselves.
But above I stated that from the network perspective, there isn't a purpose to death. It just so happens that species whose members die evolve, and species that don't die, don't.
But we can (and do) talk of how to die a meaningful death. And if we just use whatever theory we want to explain these things, with no regard to whether they are right or true, then there will be cases in which it is useful (even if literally false) to speak of a meaningful death.
We might even begin to use this language to teach people (if you will) 'how to die'. Of course, they don't really need instruction in how to die - every person will accomplish this feat eventually. But we can with our theory talk about right ways and wrong ways to die. We can talk of a "fitting end" using the same language and metaphors of evolution. We can speak of a person's death being 'meaningful' if this, that or the other condition is attached to it.
This is what Dron is saying about learning. The very idea of teaching someone how to learn presupposes that there must be some purpose to learning. But really, what he means is something like 'how to meaningfully learn'. And meaningful learning presupposes a purpose.
As a theory of living, the theory of 'meaningful death' is internally inconsistent. Yes, you can teach people how to live by teaching them that their death is meaningful. But such teaching can (and often does) result in people seeking death, or risking death, resulting in the ending of their lives. A consistent theory of living would truthfully reflect that a person's death has no meaning, and that the purpose of life (if the concept has any meaning at all) has everything to do with what is done during a lifetime, and very little with the manner in which it ends.
The same is true with a theory of learning.
From where I sit, a theory of learning which tells people 'how to learn' is essentially telling them that the way to learn is to not learn. It is a way of telling them to subsume their own best interests under those stipulated by some third party (where the authority of this third party is inevitably an appeal to a black box or magical mechanism).
It is, in the end, a way of saying that learning is not actually a network phenomenon at all.
Now of course we know that actual evolutionary theory is nothing like what has been described in this post at all. Actual evolutionary theory doesn't trade in needs and wants and desires - it doesn't even presuppose on the part of the species a will to live or any such motivations at all. It says, simply, that mutations occur, and that some species that mutate continue to exist, and other species do not continue to exist, and this process is what explains the diversity of life today.
If you just think of 'evolution' as a 'family of ideas' that brings together every thought and theory related to selection and survival and the rest, it's not a leap to start describing things like 'survival of the fittest' as a part of evolution, and not far from that to describing things like eugenics as a socially worthwhile activity. It's the sort of thing that happened to Nietzsche, it's the sort of thing that happened to Darwin, and it shows that actually getting the theory right matters.