Wednesday, March 14, 2007

The Mind = Computer Myth

Responding to Norm Friesen:

If you were to read all of my work (not that I would wish that on anyone) you would find a sustained attack on two major concepts:

1. The 'information-theoretic' or 'communications theoretic' theory of learning, and

2. The cognitivist 'information processing' physical symbol system model of the mind

These are precisely the two 'myths' that you are attacking, so I am sympathetic.

That said, I think you have the cause-and-effect a bit backwards. You are depicting these as technological theories. And they are, indeed, models of technology.

However, as models, these both precede the technology.

Both of these concepts are aspects of the same general philosophy of mind and epistemology. The idea that the human mind receives content from the external world and processes this content in a linguistic, rule-based way is at least as old as Descartes, though I would say it has a more recent pedigree in the logical positivist theory of mind. Certainly, people like Russell, Carnap and even Quine would be very comfortable with the assumptions inherent in this approach.

Arguably - and I would argue - the design of computers followed from this theory. Computers began as binary processors - ways of manipulating ones and zeros. Little wonder that macro structures of these - logical statements - emulated the dominant theory of reasoning at the time. Computers were thought to emulate the black box of the human mind, because what else would that black box contain?
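The point about macro structures can be made concrete. Here is a minimal sketch (the function names are mine, purely illustrative): the logical connectives of the dominant theory of reasoning fall straight out of operations on ones and zeros, which is exactly why early computers looked like plausible models of the reasoning mind.

```python
# Propositional connectives built directly from binary operations.
# All names here are illustrative, not any particular machine's design.

def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def IMPLIES(a, b):
    # Material implication: (a -> b) is (not a) or b.
    return OR(NOT(a), b)

# Modus ponens, ((p -> q) and p) -> q, comes out true on every
# assignment - a 'law of thought' recovered from bit manipulation.
for p in (0, 1):
    for q in (0, 1):
        assert IMPLIES(AND(IMPLIES(p, q), p), q) == 1
```

Nothing in the sketch depends on hardware; the logic was there first, which is the point.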

Now that said, it seems to me that there can't really be any denying that there is at least some transmission and reception happening. We know that human sensations result from external stimuli - sight from photons, hearing from waves of compression, and so on. We know that, once the sensation occurs, there is a propagation of signals from one neural layer to the next. Some of these propagations have been mapped out in detail.

It is reasonable to say that these signals contain information. Not information in the propositional sense. But information in the sense that the sensations are 'something' rather than 'something else'. Blue, say, rather than red. High pitched, say, rather than low pitched. And it was a philosophical theory long before the advent of photography (it dates at least to people like Locke and Hume) that the impressions these perceptions create in the mind are reflections of the sensations that caused them - pictures, if you will, of the perception.

To say that 'the mind is like a photograph' is again an anticipation of the technology, rather than a reaction to it. We have the idea of creating photographs because it seems to us that we have similar sorts of entities in our mind. A picture of the experience we had.

In a similar manner, we will see future technologies increasingly modeled on newer theories of mind. The 'neural nets' of connectionist systems are exactly that. The presumption on the part of people like Minsky and Papert is that a computer network will in some sense be able to emulate some human cognition - and in particular things like pattern recognition. Even Quine was headed in that direction, realizing that, minimally, we embody a 'web of belief'.
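The connectionist idea can be illustrated with a toy example (this is my own sketch, not any particular system from the literature, and the parameter values are arbitrary): a single artificial neuron that learns to recognize a pattern from examples, rather than by applying stated rules.

```python
# A toy perceptron: it is shown input/output examples and adjusts its
# connection weights, so the 'rule' it ends up embodying was never
# written down anywhere. All names and parameters are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Adjust weights in proportion to the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The pattern to recognize - logical OR - is presented only as examples.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

assert [predict(*x) for x, _ in samples] == [t for _, t in samples]
```

The contrast with the previous picture is the interesting part: here the behaviour is carried in weighted connections, not in symbols or rules.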

For my own part, I was writing about networks and similarity and pattern recognition long before the internet was anything more than a gleam in my eye. The theory of technology that I have follows from my epistemology and philosophy of mind. This is why I got into trouble in my PhD years - because I was rejecting the cognitivism of Fodor, Dretske and Pylyshyn, and concordantly, rejecting the physical symbol system hypothesis advanced by people like Newell and Simon.

I am happy, therefore, to regard 'communication' as something other than 'transmission of information' - because, although a transmission of information does occur (we hear noises, we see marks on paper), the information transmitted does not map semantically onto the propositions encoded in those transmissions. The information we receive when somebody talks to us is not the same as that contained in the sentence they said (otherwise, we could never misinterpret what it was that they said).

That's why I also reject interpretations such as the idea of 'thought as dialogue', or communication as 'speech acts', or even communication as essentially 'social interaction'. When we communicate, I would venture to say, we are reacting, we are behaving, we may even think we are 'meaning' something - but this does not correspond to any (externally defined) propositional understanding of the action.

1 comment:

  1. Hi Stephen.

    If you really want to see what computer as mind will look like, check this out.

    http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=mazzagatti&s2=unisys&OS=mazzagatti+AND+unisys&RS=mazzagatti+AND+unisys


    There are other patents describing how this works, which you can get, if you are interested, through the EPO patent search site using this inventor's name.

    Basically it starts from the premises worked out by Charles Peirce (friend of Babbage - http://www.peirce.org/) a long time ago, and notices that you need to have a start node and an end node to make a thought complete. That way you can build a structure to support the triads that represent the universe of knowledge (Phaneron) in a computer memory. The follow-on patents show some of how to construct and manage it in efficient ways.


I welcome your comments - I'm really sorry about the moderation, but Google's filters are basically ineffective.