Thursday, June 28, 2007

You work in a community, not a company

Responding to Jay Cross.

"It doesn’t work to take one from column A and one from column B, e.g. secrecy and transparency are opposites. Competition and collaboration are the same deal."

Ah ha! I remember saying something like this on this very blog, not so long ago. :)

"What should a person do if they find themselves in a non-believing, ice-age organization?"

Make your own rules, make your own job. Work not just in your organization but in your sector, your community. Carve out the appropriate niche for yourself no matter where you are employed. Move on if your employers don't recognize your value.

Look at anybody who is a leader in this space, or any space. It is not a person who did their job. It is a person who *changed* their job by either redefining their existing responsibilities or creating a new position (or company) entirely.

"What’s the most enlightened thing to do here? I’ll post this issue to the Internet Time Community in case the discussion grows lengthy."

Again - understand that while you may work for a company, your work environment isn't defined by - or limited by - the company. You work in a community, not a company. You may be paid by the company but your job is defined by the community and, if you're doing it well, you're serving the community.

Remember that you don't work for the company, you work for yourself. The company is merely your largest (and perhaps only) client. Keep in mind that the company will not hesitate to terminate your position, redefine your role, or do any number of things that will not be in your best interest. You have to watch out for yourself.

In the meantime, the company will watch out for itself. It doesn't need a whole lot from you, beyond what you've promised to deliver to it. What the company does is up to the company. You aren't going to change the company - it will have to change itself (that is, the owners or executives will have to reach their own change of heart and attitude on their own).

The best you can do is to show what your (newly defined) work and (personally defined) attitude can bring to the company. Document and record it, as publicly as possible, should you ever need it for a promotion case (or job interview).

Wednesday, June 27, 2007

eBooks and Books Online

Responding to Richard Nantel:

Just to respond to the comments, I think there is a very big difference between eBooks and books that are online.

Most books that are online are plain text (as in the Gutenberg collection) or RTF (as are most of the Burgomeister books). These are, of course, free and open access books (whether legal or not).

What these types of books manifestly don't need is a 'reader'. They show up just fine in web browsers, and can even be stored and viewed on MP3 players (eg. Samsung or iPod).

The other types of books online are the scans. This is typically what you would see on sites like megaupload and rapidshare. We are told "All that anyone has to do is go to Google, and type www.rapidshare.com ....the search result will tell you where this file is located to be download" but what you get when you do that is what you get when you search for MP3s - a huge pile of spam sites.

But this leads me to my main point. Mostly, there's no reason to share scans or awkward file formats, like PDFs. Formats that are designed for reading on paper are notoriously harder to read on the web.

Look at the Adobe Max sample, in the image above. Only about a fifth of the screen is devoted to actual text. Were that displayed on my screen (a very nice MacBook Pro, no cheap display) the text would be about 5 point in size, almost impossible to read. Sure, you can magnify it - but at the cost of having lines that extend off the screen to the left or right.

Online, text needs to flow into the available space. This is something PDF and related file formats (not to mention scans) simply don't do. This means you must have very large screens or learn to live with shifting your 'page' up and down, back and forth, a lot.

Text in plain text, RTF and HTML, by contrast, flows into the display window. Narrow the window, and the lines narrow. If there is overflow, it is always at the bottom of the page, allowing a nice smooth scroll (or occasional PgDn buttons).

Above, I read, "For one, it manages all your e-books, sort of iTunes-like, allowing you to easily switch between reading one e-book and another. (Wouldn’t it be cool to have all the learning materials related to a subject accessible through one easy interface?)"

I already have that. It's called the web. I access it using Firefox. I don't even need to store the files if I don't want to (if I want to, I can - and there are numerous content management systems that allow me to manage my text files, or I could simply use the Windows or Mac operating system, which is designed for precisely that).

What we are seeing online is not an increase in the popularity of eBooks per se. We are seeing an increased abundance of free and open content. This is something very different from what the publishers envision.

Book publishers would like readers to use an iTunes-like system - proprietary interface, non-portable content formats, no free or noncommercial content. It's not going to happen, except in some very captive markets. And even in these markets (university and government libraries, for example) the questions are being asked. Why are we paying for this?

This should be noted: "the cost of copying a book and having it put on the internet now runs to about $20 - $30 per." That's not per copy per reader. That's per book. Which makes the per reader cost about 10 cents.
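
To make that arithmetic explicit - this is a rough sketch only, and the reader count is my own illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope cost of putting one book online.
# The $20-$30 digitization figure comes from the quote above;
# the number of readers is a hypothetical assumption for illustration.
digitization_cost = 25.00   # dollars, one-time cost per book (midpoint of the quoted range)
assumed_readers = 250       # hypothetical number of people who read the online copy

cost_per_reader = digitization_cost / assumed_readers
print(f"Cost per reader: ${cost_per_reader:.2f}")   # Cost per reader: $0.10
```

At a few hundred readers, the per-reader cost drops to pennies, which is the point being made.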

From where I sit, the future of books - properly so-called - online lies not with eBooks, which do badly what HTML does very well, but rather with specialty sites like Lulu.com.

The book - properly so-called - will remain a print publication, with the bulk of the expense (and the price) being for the paper, ink, binding and distribution. Books will be printed for paper libraries (which are still a very convenient place to store information, especially if you have a searchable text version of each printed book), keepsakes and souvenirs.

Monday, June 25, 2007

The Authorities Speaking

Responding to Robert McHenry:

If this is the authorities speaking, bring on the amateurs.

I mean, really. What sort of understanding of Web 2.0 is this column intended to represent?

Perhaps the understanding represented in the Britannica article? "In particular, many of the most vocal advocates of the Web 2.0 concept have an almost messianic view of harnessing social networking for business goals."

Um.... what?

The article reviews Web 2.0 without managing to touch on any of its essential features, things such as AJAX, APIs and REST. Neither does it manage to address things like tagging and folksonomy. And it barely mentions things like social networking, user generated content and the wisdom of crowds.

Perhaps if McHenry had some idea *what* he was looking for he would be in a position to talk about it. Instead we get a criticism so generic it can be applied to things as diverse as Gleem and the 60s.

If that were all, I would have just left this. But we are treated in addition to an offensive and ignorant stereotyping of people in South America and New Guinea. Perhaps McHenry is relying on Britannica's 1890 edition for information about the dark continents.

And we are finally treated to pseudo-sociology. "Most people seem to behave most of the time as though they are confident that someone else is in charge."

I would like to see the study that supports this (of course there is none; he made it up).

It is likely that they think someone else is in charge, because - empirically - someone else *is* in charge. Very few of us are Prime Ministers or CEOs, which means that very few of us are actually in charge.

Whether they have confidence in those people is a different matter. Mostly these people are not interested so much in leading us as in looting us.

I sincerely doubt that Adler was "a more thoroughgoing democrat" than I, or at least, this argument fails completely to convince me of that.

I am one of those who "gabble" about the gatekeepers. I rail against them precisely because I believe everybody is educable. But where I differ is, by 'educable' I do not mean 'can be like Adler'. I believe people can choose their own way and their own culture, without having some self-proclaimed experts telling them what is literature and what is crap.

Which returns us to Web 2.0.

On the basis of, well, nothing, McHenry has decided that Web 2.0 belongs to the category of 'crap' rather than 'literature'.

As though - what? Literature could only be produced through things that are not Web 2.0? That literature could only be produced by the elite, not the "educable" masses?

McHenry huffs about "Western ideals... especially the ones about liberty and democracy and consent of the governed and all that sort of thing."

If Web 2.0 is genuinely a means of allowing the "educable" masses to express themselves, or even to govern themselves, then it is the *instantiation* of those ideals. Hardly the antithesis.

I think that McHenry would quiver in terror at the thought of those "educable" masses actually seizing the reins of power without first being subject to an appropriate indoctrination program.

For otherwise, he might face the terrible prospect of being governed by people with a current and detailed knowledge of culture, technology, politics, law and sociology (ie.: who’s on “American Idol,” what’s the deal with the iPhone, will Fred Thompson declare, should Scooter Libby be pardoned, and, yes, whither Web 2.0).

The horror! The horror!

Update (June 26): this response, which remained on the Britannica site for most of yesterday, has been removed from the site today.

Update (July 26): I received this in my email yesterday: "We just noticed this post on your blog (as a result of the post by Rita Kop that mentioned it). I've talked to all of our blog administrators, and none of them recalls seeing your comment at any point, so I don't know what happened here. We almost never take down a comment once it's gone through moderation, and only then if there's found to be something highly objectional about it. If the text of your comment was the same as the post on your blog, there's no reason why we would have blocked it. If you'd like to resend it please do, and I'll make sure it gets posted promptly. Best, Tom Panelas
Encyclopaedia Britannica, Inc."

Update (August 3):
Okay, it's up there:
By the way, by any chance did you compose the original comment in Word and paste it in? We've discovered one or two other instances of disappearing comments, and we're trying to troubleshoot it. Corrupted characters orginating in MS Word *may* be at the root of those, though like evolution, at the moment that's only a theory.
We of course want to make it so you can compose in anything you want and paste it in, so we're working on that.
Thanks,
Tom

Friday, June 22, 2007

How Do You Know? (2)

These are hard questions.

What I described in my paper is my best answer to the question 'how do I know', that is, it tries to explain how I (in fact) know things. It is therefore not a description of the criteria I should use to distinguish truth from falsity, nor how one person can convince another person of something.

Indeed, viewed as a system for determining the truth of something, the paper seems pretty ridiculous. Wealth of experience? Why should anyone trust that! Why is my wealth of experience any better than anyone else's?

The problem is, the description of how we in fact learn things does not carry with it any sort of guarantee that what we've learned is true. But without such a guarantee, there can be no telling for ourselves what to believe or not to believe, no way to convince other people. It's like we're leaves blown about by the breeze, with no way to sway the natural forces that affect us.

Moreover, the problem is:
- there is no guarantee
- yet we do distinguish between true and false (and believe we have a method for doing so)
- and we do want to be able to sway other people

What complicates the matter - and the point where I deliberately stopped in the other paper - is that not everybody is honest about what they know and what they don't know. Sometimes there genuinely are charlatans, and they want to fool us. Sometimes they are simply mistaken.

There's not going to be a simple way to step through this.

I went immediately from the British Council talk, where I was trying to foster a point of view, to a session inside Second Life, where I played the role of the sceptic. Not that I think that the people promoting Second Life are charlatans. But I do think they are mistaken, and I do think some of the statements they make are false.

The fact is, even though there are no guarantees, we will nonetheless make judgments about truth and falsehood. It is these judgments - and the manner of making these judgments - that will sway the opinions of other people.

You can't tell people things, you can only show them.

Now even this statement needs to be understood carefully. It is true that we can tell people things - eg., that 'Paris is the capital of France' - that they will remember, but it does not follow that they know this; they will need to see independent evidence (such as, say, newscasts from 'the capital of France'). Telling produces memory. Showing (and experience-producing processes in general) produces belief.

But now - even this needs to be qualified. Because if you tell something to somebody enough times, it becomes a type of proxy experience. So - strictly speaking - you can produce belief by telling, though not by 'telling' as we ordinarily think of it, but rather by a repeated and constant telling.

Additionally, we can make 'telling' seem more like experience when we isolate the person from other experiences. When the 'telling' is the only experience a person has, it becomes the proxy for experience.

It is worth noting that we consider these to be illicit means of persuasion. The former is propaganda, the latter is indoctrination. Neither (admittedly) is a guaranteed way of changing a person's mind. But it is reliable enough, as a causal process, that it has been identified and described as an illicit means of persuasion.

Let me return now to how we distinguish truth from falsehood.

This is not the same as the process of coming to know, because this process has no such mechanism built into it. The way we come to know things is distinct from the way we distinguish between truth and falsehood.

This may seem counter-intuitive, but I've seen it a lot. I may be arguing with someone, for example, and they follow my argument. "I agree," they say, "Um hm, um hm." But then I get to the conclusion, and they look at me and say, "But no..." It's this phenomenon that gives people the feeling they've been tricked, that I've played some sort of semantic game.

So there are processes through which we distinguish truth and falsity. Processes through which we (if you will) construct the knowledge we have. We see the qualities of things. We count things. We recognize patterns in complex phenomena. These all lead us, through a cognitive process, to assert that this or that is true or false.

Usually, this cognitive process accords with our experience. For example, we say that the ball is red because we saw that the ball was red. We say that there were four lights because we saw four lights. It is close enough to say that we came to know that the statement was true because of the experience. But - again - the process of knowing is separate from the process of distinguishing truth from falsehood.

There are general principles of cognition. These are well known - propositional logic, mathematics, categorical logic, and the rest. (And if you want to see the separation between 'knowing' and 'distinguishing truth from falsity', look at some of the more advanced forms of logic - deontic logic, for example. We can use some such process to say that a statement is true, but because the process is so arcane to us, the statement never becomes something we 'know' - we would certainly hesitate before acting on it, for example.)

There are also well known fallacies of cognition. I have documented (many of) these on my fallacies site. It is interesting to note that these are for the most part fallacies of distraction. What they do is focus your attention on something that has nothing to do with the proposition in question while suggesting that there is a cognitive link between the two. You come to 'know' something that isn't true, because you have had the experience.

Consider the fallacy: "If the plant was polluting the river, we would see the pollution. And look - we can see the pollution." We look, and we see the pollution. It becomes part of our experience. It becomes the reason we 'know' that the plant is polluting the river. No amount of argument - no amount of 'telling' (except, say, indoctrination) will convince us otherwise. We have to actually go to the plant and see that it is not polluting the river in order to understand - to know - that we were the victim of a fallacy.
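
For what it's worth, the underlying form here - my gloss, in standard propositional notation, not terminology used above - is the classic fallacy of affirming the consequent:

$$P \rightarrow Q, \quad Q \;\; \therefore \;\; P$$

Seeing the pollution (Q) is consistent with the plant being the cause (P), but it does not establish it - and yet, once seen, the experience does the 'knowing' for us.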

There is a constant back-and-forth being waged in all of us, between what we 'know' and the things we say are 'true and false'.

That is why I say you can't 'tell' a person something. Merely convincing them (even if you can) to agree that 'this is true' is a long way from getting them to know it - getting them to believe it, to act on it, to make wagers on it.

So - convincing a person comes down to showing them something.

Often this 'showing' will be accompanied with a line of reasoning - a patter - designed to lead them to the 'truth' of what they are being shown. But the knowledge comes from the showing, not the patter.

Even with showing, there are no guarantees. 'You can lead a horse to water...' Even the experience may not be sufficient to convince a person. Any experience is being balanced against the combined weight of other experiences (perhaps the 'patter' is sufficient to sway people in some touch-and-go cases, by offering a coherence with other experiences - an easy path for belief to follow).

A great deal depends on the nature of the experience. Experiences can be vivid, can force themselves on us. They can be shocking or disturbing. Images of violence capture our attention; images having to do with sex capture our attention. Our attention, even, can be swayed by prior experiences - a person who has spent a lifetime around tame tigers will react very differently on seeing a tiger than a person who has only known them to be dangerous carnivores.

'Convincing' becomes a process of pointing, a process of showing. Sometimes what a person is told can direct a person where to look (in this piece of writing I am encouraging you to look at how you come to have your own knowledge, to see how it is the result of a separate track from how you come to see things as true and false). Sometimes the experiences can be contrived - as, say, in a simulation - or the senses fooled. Some media - especially visual media - can stand as substitutes for experience.

We can have experiences of abstract things - the weight of experience just is a way of accomplishing this. The logical fallacies, for example - on being shown a sufficient number of fallacies, and on seeing the fallaciousness of them, we can come to have a knowledge of the fallacies - such that, when we experience a similar phenomenon in the future, we experience it as fallacious.

Convincing becomes a matter of showing, showing not just states of affairs in the world, but processes of reason and inference. If I can show actual instances of inference, how a person comes to believe, comes to know, this or that, then it becomes known, and not merely believed, by the viewer. If I can show my reasoning process, then this process can be known (after being experienced and practiced any number of times) by the learner.

'Expert knowledge' is when a person not only remembers something, but when a person has come to know it, has come to know the processes surrounding a discipline.

Such knowledge is often ineffable - the knower can't even enumerate the (true or false) statements that constitute the knowledge, or that led to the knowledge. What a person knows is distinct from what a person says is true or false.

It is not truth that guarantees knowledge. It is knowledge that guarantees truth.

Tuesday, June 19, 2007

How Do You Know?

I have just finished a presentation to the British Council consisting of a video and a short discussion. I'm not happy with the result - partially because the process of producing the video seemed to be cursed (including one crash that wiped out hours of work - Camtasia has no autosave! who knew?) and partially because I didn't feel comfortable with the discourse.

The video production is one thing, and I can live with a more or less proficient video because it's part of the ongoing process of learning a new way to communicate. But I'm less sanguine about the discourse. I have a sense of what went wrong with it - I even talked about that a bit during the session - but it still nags at me, with deeper issues unresolved.

We weren't very far into the discussion when I made the comment that "if you're just presenting information, online is better than the traditional classroom." The point I was trying to make was that the unique advantage of the classroom is that it enables face-to-face interaction, and that it should be used for that, leaving other things to other people.

And so, of course, someone asked me, "How do you know?" Which stopped me - not because I don't know - but because of the utter impossibility of answering the question.

There are so many differences in community - the different vocabularies we use and the different assumptions we share, for example. For me to express point A in such a way that it will be understood the way I understand it, I need to work through a fair amount of background. But in a session like this - a 20 minute video and a few seconds of discussion - there was no way I was going to be able to accomplish that.

And this carries over to differences in epistemology. The question of 'How do you know' means different things to different people. In some cases, it's not even appropriate - if a football coach instructs a player, the player doesn't say "How do you know" because he knows that the coach isn't set up to answer questions of that sort (he'll say, "I depend on my experience" or some such thing, offering a statement that has no more credibility than the original assertion). In other cases, some sort of process or set of conditions is assumed - and this varies from discipline to discipline, community to community.

In this particular instance I was speaking at a conference on blended learning. So there's a certain perspective that has already been adopted, one that already says that the classroom should not be abandoned. Indeed, the classroom is like the baseline reference, and the role of ICT is to support by being what the classroom cannot be - being available at home, for example, or at midnight, or around the world. ICT is about enhancing learning, in the blended learning model. And this picture couldn't be further from my own model if it tried. For me, it felt like going to a prayer meeting and talking about the role atheism could play in the devotee's life.

You see, from where I sit, blended learning is a bit like intelligent design. It's a way for people to keep hold of their traditional beliefs, to maintain the primacy of the classroom, the primacy of authority in education, the primacy of the information-transfer model of learning, and at the same time (because it's blended, you see) to appear as advocates of new learning technologies, including (as was the subject of the conference) Web 2.0. It's faith pretending to be science. While in my world, there is basically no role for the classroom at all. It's irrelevant.

To their credit, they were willing to let me have that, giving me room to reinvent the face-to-face interaction (which I do believe in) to allow full and proper play for Web 2.0 and ICT in general. But I am still faced with the fundamental questions: how do I explain what I mean, and how do I know (or show I know) it is true?

To take a case in point: I said "if you're just presenting information, online is better than the traditional classroom." What I thought I was making was a straight-forward assertion about the properties of the traditional classroom and the online presentation of information. I wanted to bring this out but found that I didn't have the words.

For example, information is transmitted online at much greater bandwidth than in a classroom. This is partially because a person standing at the front of the room can only speak at a certain speed. The words only come out so fast - and at a fraction of the speed they can be read (at least by most people). And in a classroom the instructor must attend to the needs of all students, which means there will be periods of 'dead air', where one student is being addressed at the expense of everyone else, who must sit and wait.

I wanted to say this, but I couldn't say this, because the audience must already know this - and yet, despite this knowledge, will still favour classroom delivery. That is why what I thought was a statement of fact - that "if you're just presenting information, online is better than the traditional classroom" - became a statement of opinion, one that needed some sort of evidence. From my perspective, it was as though I had said "the sky is blue" and someone (who apparently believed there was no sky) asked me how I knew. How do you explain? How do you argue?

What could 'better' even mean in such a context?

Because my own statement - that "if you're just presenting information, online is better than the traditional classroom" - doesn't even make sense in the context of my own theory, because I do not support an information-transfer theory of education. I'm in the position where I'm trying to discuss the relative advantages of online and in-class learning, and trying to place myself into the context of the existing discussion, which works to a certain point, but which vaporizes when pressed in certain ways.

How do I know it is better? Well in this world there are certain outcomes to be expected, and means of measuring those outcomes, so that the relative efficacy of classroom instruction and online instruction could be compared, by conducting pretests and post-tests against standardized evaluations, using standardized curricula. And the best I could say, under such conditions, is that there is no difference, based on 40 years of studies. Which they must know about, right?

All this is going through my mind as I seek to answer the question.

I consider the possibility that by 'better' he means 'more efficient'. Because here I could argue (with some caveats about production methods and delivery, the sort of things I outline in Learning Objects) that the use of online delivery methods is much cheaper than the very labour-intensive methodology of the classroom. That we are paying, for example, research professors (who don't even want to teach) very high salaries to accomplish something that could be as well done using multimedia.

So I concluded that he was looking for evidence of the usual sort - studies that showed knowledge was more reliably transferred (or at the very least, implanted) using ICTs than in classroom instruction. Probably such studies exist (you can find a study to support almost anything these days). But I am again hitting the two-fold dilemma.

First, our conception of the task is different. I had just come from reading and writing about associative learning. "The result in the brain is strengthening or weakening of a set of neural connections, a relatively slow process." It's not about content transfer, it's about repeated exposure (preferably where it is highly salient, as this impacts the strength of the neural connection). The classroom plays almost no role in this; at best it focuses the student's attention, so that subsequent exposure to a phenomenon will be more salient.

This is (as so often happens) abutted directly against corporate or institutional objectives. The fact that trainers and teachers have certain things that they need to teach their students, and that this is generally non-negotiable (to me, this is a lot like the Senate legislating that the value of Pi is 3, but I digress). That evidently, and by all evidence, these objectives can be accomplished using classroom instruction, and that moreover, they might not be using ICTs.

The evidence, of course, is the set of successful exam results. One would think, with the experience of No Child Left Behind behind us, that we would be sensitive to the numerous and multifarious means of manipulating such results. I have written before about how such tests can't be trusted. About how the proposition that there can be (so-called) evidence-based policy should not be believed. And I've linked to the misconceptions people carry with them about this. But I can't shake in people the belief that there is, after all is said and done, some way to measure whether one or the other is better.

The thing is, there is no definition of 'better' for which we could define the parameters of such a measurement, and even if there were, the determinants of 'better' are multiple and complex. A person's score on a test, for example, is subject to multiple and mutually dependent factors, such that you cannot control for one variable while testing for the others. Any such measurement will build into its methodology the outcome it is looking for.

The problem is - according to everything we seem to know - unless there is some way of measuring the difference, there is no way to know the difference. Even if we don't believe that "if it can't be measured, it doesn't exist," it must be that measurements give us some sense of what is better and what is not - that they can at least approximate reality, if not nail it down precisely. I don't agree - the wrong measurement can suggest that you are succeeding, when you are failing. Sometimes these wrong measurements are deliberately constructed - the phenomenon of greenhouse gas intensity is a case in point.

At a minimum, this position takes a good deal of background and analysis to establish. At worst, attempting to maintain such a position leaves open the charge of 'charlatan'. Responses like this: "Each time I read a student's paper containing 'I think, I feel, I believe,' I am aggravated, acerbically critical, and given to outbursts of invective: 'Why do I care what you feel?' I write, roaring with claw-like red pen. 'This is not an emotional experience. Believe? Why would you think you can base an argument on unsubstantiated belief? You don't know enough to believe much of anything. Think? You don't think at all. This is mental masturbation. Without evidence you have said exactly nothing!'"

Am I a charlatan when I say things like "if you're just presenting information, online is better than the traditional classroom?" Even if I have nothing to personally gain from such statements, am I leading people down the garden path? It is very difficult, in the face of things like the British Council presentation, to suppose people are thinking anything else. "It's a nice line," they think to themselves as I stumble in front of them, attempting lamely to justify my lack of evidence, "but there's no reason I should believe it."

Which raises the question - why do I believe it?

I have made decisions in my own life. I have chosen this way of studying over that. I have chosen this way of communicating over that. I didn't conduct a study of which way to learn and which way to communicate. I operated by feel. There's no way of knowing whether I might not have been more successful if, say, I had stayed in the academic mainstream, published books and papers, assigned my copyrights to publishers, learned through classes and conferences and papers and lectures.

But, of course, that was never the decision I made. At no point did I sit down and say, I will eschew traditional academia, I will learn informally, through RSS and God-knows-what Web 2.0 technology, and (while I'm at it) I will embrace Creative Commons and lock publishers out of the loop. Indeed, I don't think I could have imagined all of that, were we to suppose some fateful day when such a decision would have been made. I made the decision one small step at a time, one small adjustment at a time, as though I were surfing a wave, cutting, chipping, driving forward, each decision a minute adjustment, each characterized not by measurement, not by adherence to principle, but by feel, by reaction, by recognition.

This is important. George Siemens says that knowledge is distributed across the network, and it is, but how we know is irreducibly personal.

What does that mean? Well, part of what it means is that when we are actually making decisions, we do not in fact consult principles, best practices, statistics or measurements. Indeed, it is even with some effort that we refrain from playing the hunch, in cases where we (cognitively) know that it's a bad bet (and we walk away (and I've had this feeling) saying, "I know the horse lost, but I still should have bet on the gray," as if that would have made the difference).

Malcolm Gladwell says, make snap decisions, trust your instincts. What this means is very precisely an abandonment of principle, an abandonment of measurement, in the making of decisions. It's the same sort of thing. My 'knowing' is the culmination of a lifetime of such decisions. I have come to 'know' that "if you're just presenting information, online is better than the traditional classroom" in this way - even though the statement is, in the context of my own theories, counterfactual. I know it in the same way I know that 'brakeless trains are dangerous' - not by any principle, not by any evaluations of actual brakeless trains, but because I have come to know, to recognize, the nature (and danger) of brakeless trains.

We sometimes call this 'the weight of experience'. And this is why my 'knowledge' differs from yours. Not because one of us, or the other, has failed to take into account the evidence. But because the weight of our respective experiences differs.

This gets back to the question of why 'presenting information' will not be 'successful' (let alone 'better') in my view. Recall that I said that the wrong measurement can suggest that you are succeeding, when you are failing. We can present information, and then test students to see if they remember that information. If they are successful on the test, then we say that they 'know' that information.

My experience with my presentations is different. I can make a presentation - such as, say, today to the British Council - and walk away feeling that while the audience heard me, and while they could probably pass a test (I am a good presenter, after all, even on my bad days, and they are smart people, with exceptional memories), I would not say that they 'know' what I taught them. Wittgenstein says, "Somebody demonstrates that he knows that the ice is safe, by walking on it." These participants may leave the conference being able to repeat the words, but scarcely any of them will change their practice, eschew the classroom, embrace the world of Web 2.0.

How can I say that they know my position, if all they do (all they can do?) is repeat the words? If they 'knew' my position, they would change their practice - wouldn't they? If they had the same knowledge I had - which would carry the same weight of experience I had - they would naturally, without the need for convincing (or even training), make the same decisions I did. Without needing even to think about it. That's what Dreyfus and Dreyfus call 'expert knowledge'. "He does not solve problems. He does not even think. He just does what normally works and, of course, it normally works." And it can't be obtained by measurement, it can't be expressed in principles, it can't be taught as a body of knowledge, and it can't be measured by answers on a test.

A presentation such as the one I gave at British Council this morning (or at CADE a month ago) isn't a transfer of information. People may acquire some words and expressions from me, but they won't acquire knowledge, because even if my presentation were perfect, it could not perform the repetition of instances required in order to create a weight of experience on a certain subject. The best I could do is to repeat a word or phrase over and over, in different ways and slightly different contexts, the way advertising does, or the comedian that kept repeating 'Via' ("Veeeeeee.... ahhhhhhh").

A presentation is a performance. It is a demonstration of the presenter's expertise. The idea is that, through this modeling - through facility with the terminology, through demonstration of a methodology, through the definition of a domain of discourse (which will be reinforced by many other presentations on the same subject - if you hear Wittgenstein's name often enough, you come to believe he's a genius) - you learn what it is to be 'expert'.

A lecture won't impart new knowledge on older, more experienced listeners at all - it acquires the status of gossip, serving mainly to fill people in on who has been saying what recently, what are the latest 'in' theories or terms. The point of a talk on 'Web 2.0' is to allow people to talk about it, not to result in their 'knowing' it. With younger participants (interestingly the least represented at academic conferences, lest they be swayed by people other than their own professors) the inspiring demonstration of academic expertise serves as a point of departure for a lifetime of similar practices that will, in a generation, result in similar expertise (people did not become disciples of Wittgenstein because they believed him - it is very unlikely that they even understood him - but by the fact that he could (with a glance, it seemed) utterly demolish the giants in the field of mathematical philosophy).

I have spoken elsewhere about what sort of knowledge this is. It is - as I have characterized it elsewhere - emergent knowledge, which may be known by the fact that it is not perceived (ie., it is not sensory, the way 'red' or 'salty' are sensory) and it is not measured, but by the fact that it is recognized. It is a 'snapping to' of awareness, the way we see a duck (or a rabbit) or suddenly discover Waldo.

'Recognition', in turn, amounts to the exciting of a set of connections, one that is (relevantly) similar to the current content of perception. It is a network phenomenon - the activation of a 'concept' (and its related and attendant expectations) given a certain (set of) input condition(s). When we present certain phenomena to the network, in the form of a set of activations at an 'input layer' of neurons, then based on the set of existing connections in the network, some neurons (and corresponding connections) are activated, while others remain silent; this present experience (sometimes) produces a response, and (in every case) contributes to the set of future connections (one connection is subtly strengthened, another subtly weakened).
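
A minimal sketch of the kind of process being described - the layer sizes, the numbers, and the simple Hebbian-style update rule are my own illustrative assumptions, not a model specified in the text:

```python
import numpy as np

# 'Recognition' as activation spreading through a small network: an input
# pattern (the current content of perception) activates some units and not
# others, and each presentation nudges the connection weights.
rng = np.random.default_rng(0)

n_input, n_concept = 4, 3
weights = rng.normal(scale=0.5, size=(n_input, n_concept))  # the set of existing connections

def present(pattern, weights, rate=0.05):
    """Activate the network with an input pattern and apply a Hebbian-style update."""
    activation = 1 / (1 + np.exp(-pattern @ weights))   # some units fire strongly, others stay quiet
    weights = weights + rate * np.outer(pattern, activation)  # co-active connections are strengthened
    weights = weights * (1 - rate * 0.5)                      # all weights decay slightly, so unreinforced ones fade
    return activation, weights

pattern = np.array([1.0, 0.0, 1.0, 0.0])   # a certain set of input conditions
for _ in range(20):                        # repeated exposure subtly reshapes the connections
    activation, weights = present(pattern, weights)

print(activation)  # the most strongly activated unit plays the role of the recognized 'concept'
```

The point of the sketch is only that a response and a change of connections happen together: presenting the pattern produces an activation now, and leaves the network slightly more disposed to produce it again.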

When presented with a certain set of input phenomena, you can remember - to a certain degree. If given sufficient motivation, you can associate certain noises (or certain shapes) with each other. On being told, I can remember that 'Paris' is the 'capital' of 'France', and even repeat that information on a test (and moreover, remember who said it to me, and when, and under what circumstances), but I cannot be said to know unless I demonstrate (a disposition?) that if I want to see the President of France, then I go to Paris. And this is not the sort of thing that is on a test - it is the sort of thing that allows a person to have 'learned' that Sydney is in Australia, and even how to book an airline ticket to Sydney, and not notice that they are traveling to Canada.

How do I know? Because - by virtue of my experiences with traditional and online settings - if I were trying to support knowledge in a person, I would not turn to the classroom, but rather, some sort of practice, and even if I were (because of policy or the demands of corporate managers) trying to support remembering in a person, I would contrive to have it presented to them, over and over, by the most efficient and ubiquitous means possible, which today is via ICTs.

How do you know whether to believe me?

You don't. Or, more accurately, there is nothing I can provide you that will convince you to believe me if you are not already predisposed to believe me. The best I can do is to suggest a course of action (ie., a set of experiences that you can give yourself) such that, after these experiences, you will come to see the world in the same way I do. That is why my talk to the British Council (and to many other audiences) described just that, a set of practices, and not a set of theorems, or experimental results, or the like.

The practices I presented constitute (one way of describing) the practices I undertake in my own learning and development. The evidence, then, of whether these practices work lies in whether you believe that I have demonstrated my expertise. This, in turn, depends on your own sense of recognition - some people will recognize that I have achieved a certain degree of expertise, while others will leave the room with the verdict of 'charlatan'.

And what follows is a subtle dance - the connectivism George Siemens talks about - where you demonstrate your expertise and I demonstrate mine - and where each of us adopts some of the practices of the other (or rejects them, as the case may be) and where the connections between people with similar practices are reinforced, and knowledge is demonstrated in such a community not by what it says (hence the fate of critical theory) but by what it does. This is the process (and I have explained elsewhere the properties of the network that will grant the process some degree of reliability).

Concepts and the Brain

Posted to IFETS, June 19, 2007.

From: Gary Woodill

One reference that supports the contention that concepts are instantiated in the brain is Manfred Spitzer's book The Mind within the Net: Models of Learning, Thinking, and Acting. Spitzer spells out how this takes place. For a brief review of this book see my April 10, 2007 blog entry entitled The Importance of Learning Slowly.

The Synaptic Self: How our brains become who we are by Joseph LeDoux covers much of the same ground. Nobel laureate Eric Kandel outlines a model of how learning is recorded in the brain in his easy to read In Search of Memory: the Emergence of a New Science of Man.

I second these points and especially the recommendation of The Synaptic Self, which is a heady yet cogent description of the mind as a (partially structured) neural network. Readers interested in the computational theory behind neural networks are referred to Rumelhart and McClelland's two-volume Parallel Distributed Processing.

That said, the statement 'concepts are instantiated in the brain' depends crucially on what we take concepts to be. Typically we think of a concept as the idea expressed by a sentence, phrase, or proposition. But if so, then there are some concepts (argue opponents of connectionism) that cannot be instantiated in the brain (at least, not in a brain thought of as essentially (and only) neural networks).

For example, consider concepts expressing universal principles, such as 2+2=4. While we can represent the individual elements of this concept, and even the statement that expresses it, in a neural network, what we cannot express is what we know about this statement, that it is universally true, that it is true not only now and in the past and the future, but in all possible worlds, that it is a logical necessity. Neural networks acquire concepts through the mechanisms of association, but association only produces contingent, and not necessary, propositional knowledge.

There are two responses to this position. Either we can say that associationist mechanisms do enable the knowledge of universals, or the concepts that we traditionally depict as universals are not in fact as we depict them. The former response runs up against the problem of induction, and is (I would say) generally thought to be not solvable.

The latter response, and the response that I would mostly endorse, is that what we call 'universals' (and, indeed, a class of related concepts) are most properly thought of as fictions, that is to say, the sentences expressing the proposition are shorthand for masses of empirical data, and do not actually represent what their words connote, do not actually represent universal or necessary truths. Such is the approach taken by David Hume, in his account of custom and habit, by John Stuart Mill, in his treatment of universals, even by Nelson Goodman, in his 'dissolution' of the problem of induction by means of 'projectability'.

If we regard the meanings of words as fixed and accurate, therefore, and if we regard concepts to be the idea expressed by those words, then concepts cannot be instantiated in the brain, at least, not in a brain thought of as a neural network. If we allow, however, that some words do not mean what we take them to mean, that they are in fact 'fictions' (even if sometimes taken to be 'fact') then concepts can be instantiated in neural networks.

Saturday, June 16, 2007

Learning and Ownership

Responding to Tony Karrer:

Good post and a useful summary.

First...

You write, "Clearly a corporation has a reasonable expectation that work done while they are paying you should be done on their behalf. They should have rights to the end work product."

It's not so simple as this.

Someone pays me to produce x, and they expect to obtain the rights to x. OK. But when I pay somebody to produce a newspaper, then why don't I get the rights when I buy it?

Mere payment does not confer transfer of rights, and therefore, the fact of such payment does not denote a certain type of ownership.

Moreover...

Like that of many employees, the work I produce belongs to my employer whether or not it is produced at the office. If I have an idea in the shower, and it relates to my work, then my employer owns it.

Yet my employer requires that I fill out time cards (AP Sigma time recording, actually, one of the most useless applications ever deployed inside a corporate firewall). Thus, my employer is very explicitly not paying me while I am at home taking a shower.

Hence, non-payment does not denote non-ownership either.

'Ownership' is a legal construct, not a causal one. People can come into, and out of, ownership of various things - including their own ideas - for a variety of reasons. Payment is only one factor, and is neither a necessary nor a sufficient condition.

Second...

There are different things that can be owned, in the context of our current discussion.

We can talk about the ownership of specific entities, such as blog posts.

Or we could talk about the ownership of the ideas contained in those blog posts.

Some things, no matter how produced, are not owned by the employer.

I state, for example, on my blog that "I am a socialist." This thought is not owned by the employer (though the post in which it is expressed may be owned, even though my employer may wish it wasn't).

I built bookshelves for my dining room. Even if some of the work was done on employer time, or with employer tools, the employer does not own my bookshelves, because they are the wrong type of thing (note this changes if I am employed as a carpenter, not a researcher).

Third...

Some things cannot be owned.

Owning humans, for example, is illegal. The ownership of a human is called 'slavery', and even if a substantial sum of money is paid, no ownership can be exerted in this way (the closest you can come is the 'personal services contract', of Wayne Gretzky fame).

Can parts of humans, therefore, be owned? In some cases, they evidently can. If a person steals a kidney from a hospital, it is considered 'theft', which can exist only if the kidney is owned.

Can one person 'own' another person's learning? That is the crux of the debate here.

It does not follow, simply because the employer is paying the employee, that the employer has a claim to 'own' the person's learning.

Indeed, if a person's learning is 'personal', an aspect of the self, then there is a strong argument, rooted in the argument against slavery, that an employer cannot own a person's learning, for a person's learning is inseparable from the self.

But against that, it may be that a person's learning is more like a kidney, part of the self, but separable and transferable.

Fourth...

What is it to 'own learning'? There are several distinct possibilities...

On the one hand, it may be to own the products of learning - the notebooks and tests and other artifacts.

On the other hand, it may be to own the knowledge or IP that was (if you will) 'transferred' (if you are a constructivist you need to depict the learning as the product of 'work for hire').

Moreover, it may be to own the 'process' of learning - that is, to dictate and determine the manner in which learning will take place, whether it be by reading, taking a class, watching a video, and so forth.

It may also be to own the 'content' of learning, that is to say, to be able to determine what will be, and what will not be, learned. My employer, for example, may require that I study Adam Smith and not Karl Marx (lest I become an employer-owned socialist).

And finally, it may be to own the learning environment - the classroom, the books, and the rest (though, manifestly, not the teachers).

Fifth...

Let me ask: if the self is produced via (the content of and the process of) learning, then isn't the ownership of (the content of and the process of) learning the same as ownership of the self?

I am not talking about the ownership of the artifacts - of the output of learning, or the materials used for learning. I am talking about the learning itself.

By analogy: if 'you are what you eat', then isn't the employer attempting to own what you 'are' by controlling what you eat?

We (mostly) wouldn't tolerate this, would we? If McDonalds required that its staff eat Big Macs for lunch, we would consider this abuse, would we not?

If my employer attempts to force me to learn Adam Smith, and not Karl Marx, isn't that the same as my employer forcing me to eat Big Macs, and not whole wheat bread? Isn't this my employer trying to own who I am?

The mere fact that my employer is paying me does not (automatically) entitle my employer to ownership over my learning.

There are (to my knowledge) no legal constructs granting ownership over 'learning' (in the sense of 'what is learned' and 'how it is learned'). That's why I can keep reading Karl Marx, even though my employer doesn't like it.

Sixth...

The 'personal learning environment' (as a concept) is an explicit assertion that learning (as opposed to the artifacts of learning) is owned by the person, not the employer.

To define learning is to define the self, which is why learning must be personal, and manifestly, must never be owned by the employer.

This is why the attempt to define the personal learning environment as something provided by, and owned by, the employer, is contrary to the concept of the PLE.

It is an attempt to create a legal construct in which a new type of ownership is created, ownership over one's learning.

The very tool - the PLE - that is intended to liberate us, could be used instead to enslave us.

There is to me a very clear line of demarcation here.

On the one hand, there is a perspective that is essentially supportive of personal learning, that supports learning, that supports personal development.

And on the other hand, there is a perspective that is essentially supportive of employer ownership, one that is essentially opposed to personal learning, one that views persons as employees to be shaped and molded according to corporate objectives.

It's a question of ownership; there isn't a middle ground. A person's learning can be owned by the person, or the employer, but not both, for should there ever occur a dispute - whether or not to study Karl Marx, say - one, or the other, must prevail.

Seventh...

Can we remain silent on the question of ownership? Can we not describe the PLE as a list of features only, the way we could (say) describe a word processor?

No. Because the list of features that characterizes a PLE is inseparable from the question of ownership.

For example: one feature of the PLE is that 'the person can choose which learning materials (or learning feeds) to subscribe to'.

If the person cannot choose - if ownership over this function is instead vested in the employer, then it is not a PLE.

An analogy: we cannot describe a set of behaviours as 'driving a car' if the function of 'steering' is controlled by some other person.

(It is worth noting that these considerations apply equally in the world of formal learning. If a person is not allowed (by the college or by the school board) to access certain learning materials, then the tool they are using is not a PLE (it is an LMS)).

Virtual and Physical

I wrote,

The difference between the physical and the virtual is illusory - it is a distinction that has been marketed hard by companies that want to keep selling you paper. But the virtual is the physical - the people online are real, the computers are real, the impact of your words is real, and it all happens in the physical world to people with physical bodies.

to which Dave Snowden replied,

Now there is a very basic (and so basic its surprising) error here. Yes the participants in a virtual community are real (well I suspect one exception). However the environment and the nature of their interactions is radically different from a physical environment. All sensory stimulation is more limited in nature, but as a counter it is also possible to have anonymity, and asynchronous interaction in the virtual. I could go on but there are multiple differences some good some bad. The basic point is that they are different and its perverse to argue otherwise.

and I commented as follows:

You may well make the case that the two realms are different (if not contrary, which is what you would need to make the case for 'balancing'), but you haven't done so in this post.

You argue...

1. "All sensory stimulation is more limited in nature....", presumably in the online, but this isn't true. We pay partial attention to people and events in the physical as well, and our perception of (say) an instructor in a 300 person classroom is arguably no more detailed than one opnline.

2. "...but as a counter it is also possible to have anonymity..." Interestingly, anonymity is the hallmark of the physical, not the virtual, which is why you can purchase a newspaper at a 7-Eleven without registering.

3. "...and asynchronous interaction in the virtual." This is also a hallmark of the physical, being characterized by books and magazines, letters, bulletin boards, telephone answering machines, and more.

I don't aim the 'punditry' remark at anyone in particular, but rather, as a catch-all to cover what has amounted to a somewhat less than thorough style of analysis. It's pretty easy to fall into lazy coverage - I do it myself - and it is this that I target.

The 'balancing the virtual and the physical' (at least you don't say 'real') metaphor is a cliche, one that is not rooted in a thoughtful analysis of the phenomenon, and it does play into the hands of those who wish to protect and preserve traditional content publishers.

I think we can make the case that the use of digital media engenders greater capacity, and hence empowerment, in those who use it, but from this it does not follow that any sort of 'balancing' with media that disempower us is required.

Thursday, June 07, 2007

A Simple Definition of Knowledge?

Responding to George Siemens's A Simple Definition of Knowledge. It is currently pending approval over there.

Um... no.

I don't want to be antagonistic, but this account is not satisfactory.

> information is a node which can be connected

So what, then, a neuron is information? No, that makes no sense - because then we would have the same information, unchanged, day in and day out, in our brains.

At the very least - information has to refer to a neural *state*. A nodal *state*. At its simplest, a neuron can be 'off' or 'on' (actual neurons have more complex states, of course). A given neural state might be a bit of information; a sequence of neural states, or a collection of neural states, might be 'information'.

Even then, we may want to restrict our attention to certain states, and not all states. Taking an information-theoretic approach, for example (cf. Dretske), we might want to limit our attention to neural states that are reflective of (caused by, representative of) states of affairs in the world. This is the distinction between 'signal' and 'noise'.

There's a lot more to be said here, because now we might want to say that the information isn't the actual state, but rather the (description of, or proposition describing the) state of affairs represented by the neural state. Because the actual neuron doesn't matter, does it? If we switched the current neuron out for a different one, it would still be the same information, wouldn't it?
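Perhaps a toy sketch makes the point clearer (the names here are my own invention, purely for illustration): the information is the state or pattern of the nodes, not the nodes themselves, which is why swapping out a node leaves the information untouched.

    # Toy sketch: 'information' as the state of a set of nodes, not the nodes.
    # 'Node' and 'network_state' are invented names, for illustration only.

    class Node:
        """A stand-in for a neuron: simply 'on' (True) or 'off' (False)."""
        def __init__(self, active=False):
            self.active = active

    def network_state(nodes):
        """The pattern of activation - a candidate for 'information'."""
        return tuple(n.active for n in nodes)

    # Two physically distinct collections of nodes...
    network_a = [Node(True), Node(False), Node(True)]
    network_b = [Node(True), Node(False), Node(True)]

    # ...are different objects, yet carry the same 'information'.
    assert network_a[0] is not network_b[0]
    assert network_state(network_a) == network_state(network_b)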

> When connected, it becomes knowledge (i.e. it possesses some type of context and is situated in relation to other elements).

The traditional definition of knowledge is 'justified true belief'. There are many problems with that definition, but it does point to the fact that we think of 'knowledge' as being something broadly mental and propositional. Knowledge, in other words, is a macro phenomenon, like an entire set of connections, and not a micro phenomenon, like a single connection.

But there's also more at work here. Is knowledge the actual physical set of connections? Is it the pattern represented by the connections, which could be instantiated physically by any number of systems? Is it tantamount to the state of affairs that caused the set of connections to exist? Is the connective state representational? Referential?

Simply saying 'knowledge is a connection' answers none of these. It offers no account of the relation between the brain and the world, if any. It doesn't account for the relation between, say, 'knowledge' and 'belief'. I am sympathetic to the non-representational picture of knowledge suggested by the definition - but if knowledge is non-representational, then what is it? Saying that it's some physical thing, like a connection, is about as useful as saying that it is a brick.

> understanding is an emergent property of the network

Which means... what?

To put this bluntly: is understanding an epistemological state - that is, is it some kind of super-knowledge, perhaps context-aware knowledge? Kind of like wisdom?

Or is it a perceptual state? Is 'understanding' what it *feels* like to know?

Is *any* emergent property of a network an 'understanding'? We could imagine a digital video camera that records the 'face on Mars'. So we have some emergent property of the network of sensors. Is this emergent property 'understanding'?

One would assume that there would be, at a minimum, some requirement of recognition. That is, it doesn't get to be 'understanding' unless it is 'recognized' as being the face of Jesus on Mars. But this means it's not just the emergent property - it's a relation between some emergent property and some perceptual system.

Additionally - there is not really a face of Jesus on Mars. It's just an illusion. Does it count as 'understanding' if it's an illusion, a mirage, or some other misperception? If not, what process distinguishes some recognitions of emergent properties from others?

I don't mean to be antagonistic here. I am sympathetic with the intent of this post. But it is so far from being an adequate account of these terms that I felt it almost a duty, a responsibility, to post this correction.

I understand that I owe an alternative account of these phenomena. I have attempted a beginning of such an account in my Connective Knowledge paper. But it is clear to me that I need to offer something that is both significantly clearer and significantly more detailed.

Wednesday, June 06, 2007

Open Source Assessment

This has come up in a couple of places lately, and I'd like to get the concept down on paper (as it were) so people have a sense of what I mean when I talk about 'open source assessment'.

The conversation comes up in the context of open educational resources (OERs). When posed the question in Winnipeg regarding what I thought the ideal open online course would look like, my eventual response was that it would not look like a course at all, just the assessment.

The reasoning was this: were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or other proprietary forms of learning, then they would create their own learning resources.

Certainly, educational institutions could continue to offer guidance and support - professors, for example, could post guides and resources - but these would not constitute any sort of required reading, and could indeed be taken by students and incorporated into their own support materials.

This is the sort of system I have been talking about when I talk about open educational resources. Instead of envisioning a system that focuses on producers (such as universities and publishers) who produce resources that consumers (students and other learners) consume, we think of a system where communities produce and consume their own resources.

So far so good. But where does this leave assessment? It remains a barrier for students. Even where assessment-only processes are in place, it costs quite a bit to access them, in the form of examination fees. So should knowledge be available to everyone, and credentials only to those who can afford them? That doesn't sound like a very good solution.

In Holland I encountered a person from an organization that does nothing but test students. This is the sort of thing I long ago predicted (in my 1998 Future of Online Learning) so I wasn't that surprised. But when I pressed the discussion the gulf between different models of assessment became apparent.

Designers of learning resources, for example, have only the vaguest indication of what will be on the test. They have a general idea of the subject area and recommendations for reading resources. Why not list the exact questions, I asked? Because they would just memorize the answers, I was told. I was unsure how this varied from the current system, except for the amount of stuff that must be memorized.

As I think about it, I realize that what we have in assessment is now an exact analogy to what we have in software or learning content. We have proprietary tests or examinations, the content of which is held to be secret by the publishers. You cannot share the contents of these tests (at least, not openly). Only specially licensed institutions can offer the tests. The tests cost money.

There is a range here. Because a widespread test like the SAT is hard to keep secret, various training materials and programs exist. The commercial packages give students who can afford them an advantage. Other tests, which are more specialized, are much more jealously guarded.

There are several things at work here:

- first, the openness of the tests. Without a public examination of the questions, how can we be sure they are reliable? We are forced to rely on 'peer reviews' or similar closed and expert-based evaluation mechanisms.

- second, the fact that they are tests. It is not clear that offering tests is the best way to evaluate learning. Just as teaching has for generations depended on the lecture, so assessment has for generations depended on the test. If the system were opened up, would we see better post-industrial mechanisms of assessment?

- third, there is the question of who is doing the assessing. Again, the people (or machines) that grade the assessments work in secret. It is expert-based, which creates a resource bottleneck. The criteria they use are not always apparent (and there is no shortage of literature pointing to the randomness of the grading). There is an analogy here with peer-review processes (as compared to recommender system processes).

- fourth, the testing industry is a closed market. Universities and colleges have a virtual monopoly over degrees. Other certifications are similarly based on a closed network of providers. This creates what might be considered an artificial scarcity, driving up the cost.

The proposition here is that, if the assessment of learning becomes an open, community enterprise, rather than a closed and proprietary one, then the cost of assessment would be reduced and the quality (and fairness) of assessment would be increased, thus making credentialing accessible.

We now turn to the question of what such a system would look like. Here I want to point to a line of demarcation that will characterize future debate in the field.

What constitutes achievement in a field? What constitutes, for example, 'being a physicist'? As I discussed a few days ago, it is not reducible to a set of necessary and sufficient conditions (we can't find a list of competences, for example, or course outcomes, etc., that will define a physicist).

This is important, of course, because there is a whole movement in development today around the question of competences. The idea here is that accomplishment in specific disciplines - first-year math, say - can be characterized as mastery of a set of competences.

This is a reductive theory of assessment. It is the theory that the assessment of a big thing can be reduced to the assessment of a set of (necessary and sufficient) little things. It is a standards-based theory of assessment. It suggests that we can measure accomplishment by testing for accomplishment of a predefined set of learning objectives.
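To make the reductive picture concrete, here is a deliberately crude sketch (the competence names and the pass rule are invented, not drawn from any real curriculum): mastery is just the conjunction of items on a predefined checklist.

    # Sketch of reductive, standards-based assessment: mastery of the whole
    # is treated as nothing more than ticking off a predefined list.
    # Competence names are invented for illustration.

    FIRST_YEAR_MATH = {
        "solve_linear_equations",
        "differentiate_polynomials",
        "apply_chain_rule",
    }

    def has_mastered(required, demonstrated):
        """Pass if and only if every listed competence has been demonstrated."""
        return required <= set(demonstrated)

    print(has_mastered(FIRST_YEAR_MATH,
                       ["solve_linear_equations", "differentiate_polynomials"]))
    # False - on this model, nothing else about the learner counts.

Whatever is not on the list simply does not count for the assessor, which is precisely the limitation at issue below.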

Left to its own devices, though, an open system of assessment is more likely to become non-reductive and non-standards based. Even if we consider the mastery of a subject or field of study to consist of the accomplishment of smaller components, there will be no widespread agreement on what those components are, much less how to measure them or how to test for them.

Consequently, instead of very specific forms of evaluation, intended to measure particular competences, a wide variety of assessment methods will be devised. Assessment in such an environment might not even be subject-related. We won't think of, say, a person who has mastered 'physics'. Rather, we might say that they 'know how to use a scanning electron microscope' or 'developed a foundational idea'.

While assessment in a standards-based system depends on measurement, in a non-reductive system accomplishment in a discipline is recognized. The process is not one of counting achievements but rather of seeing that a person has mastered a discipline.

We are certainly familiar with the use of recognition, rather than measurement, as a means of evaluating achievement. Ludwig Wittgenstein is 'recognized' as a great philosopher, for example. He didn't pass a series of tests to prove this. Mahatma Gandhi is 'recognized' as a great leader. We didn't count successful election results or measure his economic output to determine his stature.

In a more mundane manner, professors typically 'recognize' an A paper. They don't measure the number of salient points made, nor do they count spelling errors. This is the purpose of an oral exam at the end of a higher degree program. Everything else is used to create hurdles for the student to pass. But this final process is one of 'seeing' that a person is the master of the field they profess to be.

What we can expect in an open system of assessment is that achievement will be in some way 'recognized' by a community. This removes assessment from the hands of 'experts' who continue to 'measure' achievement. And it places assessment into the hands of the wider community. Individuals will be accorded credentials as they are recognized, by the community, to deserve them.

How does this happen? It breaks down into two parts:

- first, a mechanism whereby a person's accomplishments may be displayed and observed.

- second, a mechanism which constitutes the actual recognition of those accomplishments.

We have already seen quite a bit of work devoted to the first part. We have seen, for example, the creation of e-portfolios, intended as a place where a person can showcase their best work.

The concept of the portfolio is drawn from the artistic community and will typically be applied in cases where the accomplishments are creative and content-based. In other disciplines, where the accomplishments consist more in the development of skills than in the creation of works, they will resemble the completion of tasks - like 'quests' or 'levels' in online games, say.

Eventually, over time, a person will accumulate a 'profile' (much as described in 'Resource Profiles'). We can see this already in systems like Yahoo Games, where an individual's profile lists the games they play and the tournaments they've won.
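As a rough sketch (the structure and field names are my own, purely illustrative), such a profile is little more than an accumulating, publicly viewable record of accomplishments with pointers to the evidence:

    # Sketch of an accumulating profile of accomplishments.
    # Field names and structure are illustrative only.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Accomplishment:
        description: str     # e.g. "won the weekly chess tournament"
        evidence_url: str    # a pointer to the work itself, open for anyone to view
        earned: date

    @dataclass
    class Profile:
        person: str
        accomplishments: list = field(default_factory=list)

        def add(self, item):
            self.accomplishments.append(item)

    profile = Profile("example learner")
    profile.add(Accomplishment("won the weekly chess tournament",
                               "http://example.org/evidence/123",
                               date(2007, 6, 1)))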

For the most part, recognition will be informal rather than formal. People can look at the individual's profile and make a direct assessment of the person's credentials. This direct assessment may well replace the short-hand we use today, in the form of degrees.

In other cases, the evaluation of achievement will resemble more a reputation system. Through some combination of inputs, from a more or less defined community, a person may achieve a composite score called a 'reputation'. This will vary from community to community. The score will never be the final word (especially so long as such systems can be gamed) but can be used to identify leaders in a field. Technorati's 'authority' system is a very crude and overly global attempt to accomplish such a thing.
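At its simplest, such a score might be no more than a weighted sum of community inputs (the inputs and weights below are invented; any real system would be more elaborate, and, as noted, gameable):

    # Sketch of a composite 'reputation' score as a weighted sum of
    # community inputs. Input names and weights are invented.

    def reputation(endorsements, citations, flags,
                   w_endorse=1.0, w_cite=2.0, w_flag=-3.0):
        """Combine a few community signals into a single rough score."""
        return (w_endorse * endorsements
                + w_cite * citations
                + w_flag * flags)

    print(reputation(endorsements=40, citations=12, flags=2))  # 58.0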

In still other cases, organizations - such as universities, professional associations, governments and companies - may grant specific credentials. In such cases, the person may put forward their portfolios and profiles for consideration for the credential. This will be a public process, with everyone able to view the presentation. Institutions will be called to account for what the public may view to be fair or unfair assessments. Institutions will, over time, accumulate their own reputations. The value of a degree will not be based on its cost, as is the case currently, but on the level of achievement required.

Most of the latter part of this post consists of speculation, based on models we have already seen implemented on the web. But the speculation nonetheless points to a credible alternative to proprietary testing systems. Specifically:

- credentials are not reduced to necessary and sufficient conditions (competences). Any body of achievement may be posited as evidence for a credential.

- these bodies of achievement - profiles and portfolios - result from interactions with a wide range of agencies and represent a person's creative and skill-based capacities

- considerations of these achievements for credentials are open, that is, the public at large may view the profiles and portfolios being accepted, and rejected, for given credentials

- there is no monopoly on the offering of credentials; any agency may offer credentials, and credibility of the agency will be based on the fairness of the process and the difficulty of the achievement

Yes, this is a very different picture of assessment than we have today. It replaces a system in which a single set of standards was applied to the population as a whole. This was an appropriate system when it was not possible for people to view, and assess, a person's accomplishments directly. No such limitation will exist in the future, and hence, there is no need to continue to judge humans as 'grade A', 'grade B' and 'grade C'.

Monday, June 04, 2007

Recognizing Learning

Responding to Rob Wall, who says:

"Literacy, of any type, is about pattern recognition, about seeing how art is like physics is like literature is like dance is like architecture is like …Literacy is not about knowing where the dots are. Literacy is not about finding dots about which you may not know. Literacy is about connecting the dots and seeing the big picture that emerges."

Yes. Exactly. This is a very key point.

Put this in context (this came up in a discussion in Den Bosch a few days ago)...

When we think about 'what being a physicist is' or 'how we know a person is a qualified physicist':

- these are (crucially) *not* reducible to a set of necessary and sufficient conditions (we can't find a list of competencies, for example, or course outcomes, etc., that will define a physicist).

- the way an examiner knows whether a student is a qualified physicist is not by *measuring* whether they have succeeded, but rather by *recognizing* that they have succeeded.

... and the reason for this is that the measurement is an inaccurate abstraction - it consists in identifying a few (salient) features of 'being a physicist' and elevating these to the position of *defining* being a physicist.

But this abstraction:
- is not the same as 'being a physicist' - it will typically include things that (in certain contexts) are unimportant, and leave out things that are important
- is not an *objective* account of 'being a physicist' - it reflects a skewed perspective, shaped by the biases and prejudices of the person doing the defining (this is especially apparent in a rapidly changing field, where a person may be 'recognized' as being an authority even though he/she does not satisfy the traditional 'criteria' (competences, outcomes) defining an 'authority')

(That's why we do not want to collapse the individual data points in a 'team' - why we don't want to define a 'common goal' - because this obscures the pattern in the team membership, and prevents us from *recognizing* important things that result from the interactions of the members).

Saturday, June 02, 2007

The Team

Sent to the OER Mailing List

The problem I see with 'the team' is that it subsumes the ideas and opinions of its members under some fiction, which is represented as the ideas and opinions of the whole. This fiction in fact represents the ideas and opinions of a member or subset of the team. It is held out as a condition for being a member of the team; affection, affiliation, acknowledgment and recognition, exchange of ideas and personal self-worth are granted to putative members only on the condition that they declare their allegiance to these ideas and opinions, whatever their own state of mind.