Friday, October 26, 2007

Disinterested

Responding to Matthew K. Tabor

Honestly, from a leftist’s perspective, I can’t get over how much whining and caterwauling there is from the right about leftist teachers, professors, whatever.

If people want more right-wing teachers, there’s a really simple way to do it: pay them more.

That way, you’ll get teachers who are motivated by the money passing on capitalist values rather than people who are motivated by social service talking about cooperating and sharing, about rights and diversity.

Honestly, the right-wing lot is so tiresome. If you don’t like what you’re getting - teachers, professors, whatever - go out, spend your money, and buy some. That’s how the market works, isn’t it?

Fergoodness sakes, I just wish the right would quit whining about how hard done by they are, how the left is discriminating against them, about how they never get a break. Sheesh. It rings so hollow after, you know, eight years of Republicans running the country into the ground…


---

You say “I want disinterested teachers and scholars who don’t inject their personal politics into subjects like history… The point is that there’s a dearth of intellectual diversity or, at the least, an imbalance that’s keeping us from providing the very best education possible.”

Same point holds… if that’s what you want, don’t complain about it, pay for it.

You can’t force teachers into being (what you call) ‘disinterested’ or ‘intellectually diverse’ (what I read as ‘right wing’ - the words you use are just euphemisms). You have to hire people who are already right wing - and since they are motivated by money, not service, it costs more.

---

I know what the word 'disinterested' means.

But don't forget, I come from a political tradition that includes things like Marx saying, "everything is politics."

So it is neither unusual nor surprising for someone like me to say that, when you take the politics *out* of something, you are left with (what might be called) conservatism or right wing thought.

But another way of saying the same thing is to say that the 'disinterested' stance is just another political stance (think, by analogy, of the way people who are religious argue that atheism is just another religion, or that it is, at the very least, a religious stance).

From my perspective, to take the personal interpretation and personal interest out of a discipline - whether it be history or mathematics or computer science - is to change that discipline, to present it as a sterile, unreal abstraction.

The fact is, facts - even scientific and mathematical facts, much less historical facts - are based in and founded on interest. Why would we care whether 2+2=4 were we not grounded in a system of counting and measurement? Why would counting and measurement be important were we in a society ruled by abundance rather than scarcity?

Or, philosophically, consider what numbers mean, what things like probability mean. Is probability, as F. P. Ramsey argued, a matter of subjective interpretation, based on (say) how much we would *bet* on a proposition?

In the same way, telling students where, say, milk comes from, carries with it just as much in the way of interpretation and value.

To take out the interpretation is to take out the left-wing. And all that remains is the right-wing. Which is why I think that 'disinterested' is a euphemism for very very political.

---

Let me also address this:

“If we’ve got a problem with our professoriate, there’s no reason we can’t take different approaches to hiring, firing and tenure. It’s not as if professors are born into a role - someone hired them, oversees them and continues to pay them.”

What sort of ‘different approach’ are you suggesting? Some sort of political screening?

See, the problem is, publicly funded institutions have evolved according to the determinations of elected representatives, over time, regarding how they should be run.

And one of the fundamental determinations they have made is that professors get to choose their own political affiliations.

Let’s say (for the sake of argument) that this has resulted in a generally left-wing skew (keep in mind that, compared to me, and from my frame of reference, they’re more right wing than left wing - especially in view of survey results like this).

This simply is the result of highly educated professionals electing to support one political perspective over another.

What sort of mechanism will you use to change this?

Well - what you *could* do is simply get the public out of the university business. Then you could go about hiring whomever you want.

But that would cost a LOT of money - and the public (who actually pays for the system) would never support it. Yes, you can create a few elite right-wing institutions, but for the general run of educational institutions, you’re stuck with what the public wants - and what the public wants is a politically independent professoriate.

I’m not sure there’s enough money in the world to convince left-wing professors to become right-wingers, or to get the public to support a policy of hiring for politics over merit.

But - it seems to me - if capitalism were right, then the money would be there to hire a more diverse lot of professors, IF that’s what the people who pay for it want.

But the right won’t pay - and simply wants to force the people who DO pay to pay for the things the RIGHT wants.


Thursday, October 18, 2007

How the Net Works

Originally published in CEGSA RAMpage Magazine, and as a summary of my talk given to CEGSA, 'How the Net Works'.


The purpose of this paper is to describe how network learning works and to show how an understanding of network learning can inform the design and evaluation of online learning applications.


1. Models

The title of this paper does not refer to the Internet or Internet technologies specifically, but rather to the use of networks and network theory generally in support of teaching and learning.

The network approach to learning is perhaps best contrasted with what might be called the transmission model of learning. According to the transmission model, teaching consists essentially of the transfer of educational content from experts to learners. This creates a distance that must be bridged by pedagogical practices. Such a model informs, for example, Moore's transactional distance theory.

Most educators conform to the transmission model. In a startling study, Melissa Engleman and Mary Schmidt found that 85 percent of teachers surveyed fall into the 'SJ' category of the Myers-Briggs Type Indicator. While there is certainly room to question both the measure and the measurement, it is nonetheless illustrative that the great majority of teachers would select responses that indicate a preference for learning through identifying and memorizing facts and procedures, step-by-step presentation of material, and consistent, clearly defined procedures, order and structure.

It is the transmission model that has informed much development of learning technologies to date. As Norm Friesen illustrates, the existing paradigm is to assemble units of learning, called 'Sharable Content Objects' (and later, 'Learning Objects') in a learning management system into sequences of learning. These would then be broadcast by various means into students' minds. "The end result of this approach," writes Friesen, "is to understand training and the technologies that support it as a means of 'engineering' and maximizing the performance of the human components of a larger system."

But learning is not accomplished merely by transferring information from sender to receiver. Learning is not merely the remembering of information. We can see this clearly by reflecting on cases where something has been remembered, but not learned:

- in language, for example, people can remember nonsense terms (such as a line from a Lewis Carroll poem, "Twas brillig..."), and people can remember (and attempt to use) words without knowing what they mean.

- in mathematics, for example, people can learn how to add and multiply, and yet fail to appreciate quantities; consequently, the retail industry has developed a skill, 'counting change', to prevent simple mathematical errors.
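The 'counting change' skill mentioned above can be made concrete. A minimal sketch in Python (the denominations and the narration are my illustrative assumptions, not part of the original point):

    # Make change by counting upward from the price to the amount tendered,
    # announcing a running total, so no subtraction is ever performed.
    def count_up_change(price, tendered,
                        denoms=(1, 5, 10, 25, 100, 500, 1000, 2000)):
        remaining = tendered - price
        coins = []
        for d in sorted(denoms, reverse=True):   # choose coins greedily
            while remaining >= d:
                coins.append(d)
                remaining -= d
        total = price
        for c in sorted(coins):                  # hand over smallest first
            total += c
            print(f"...and {c} makes {total}")
        return coins

    count_up_change(734, 1000)   # 1, 5, 10, 25, 25, 100, 100 -> '...makes 1000'

The clerk never computes 1000 - 734; the running total is verified by both parties as the coins are handed over.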

Rather than being a process of acquiring something, as commonly depicted, learning is in fact a process of becoming something. Learners do not 'receive' information which they then 'store'; they gain experiences which, over time, result in the formation of neural structures. To learn is to instantiate patterns of connectivity in the brain. These connections form as a result of practice and experience. They are not constructed; a student does not 'make meaning' or 'construct meaning', as sometimes depicted in the literature. Connections are grown, not created; meaning is, therefore, grown, not constructed. (Some quick examples; I also recommend Joseph LeDoux, The Synaptic Self, for a detailed discussion of this point.)

Knowing how we learn is important because it tells us a lot about what we learn. And this, again, gives us evidence showing that learning is not merely the acquisition of knowledge and information. It is not, because there isn't anything that can stand on its own as an instance of 'knowledge' or 'information' to begin with. We sometimes think of knowledge as structured, ordered, and sentential. 'Paris is the capital of France,' for example, might be an instance of knowledge. But this is not in fact what we learn. We may use the same sentence to communicate, but what is in your mind and what is in my mind are very different.

Specifically:

- a great deal of knowledge - possibly most of what we know - is 'tacit'. That means it is 'ineffable'. It cannot be expressed in words at all. As Michael Polanyi describes in Personal Knowledge, our knowing how to ride a bicycle cannot be expressed in words.

- knowledge is also irreducibly personal. What something means depends on the context in which it is understood. Context infuses all levels of language and communication, from the meaning of a given word to scientific explanations and attributions of cause. What something means depends crucially on what else it *could* be, and this is not a matter of fact, but rather of one's beliefs and opinions. A good way to see this is to think of the 'meaning' of a painting. The meaning of words works in a similar way.


2. Learning

To understand what learning is, it is necessary first to understand what knowledge is. As stated above, knowledge is *not* the accumulation of a set of propositions. Rather, it is the development of a pattern of connectivity in the brain. These patterns of connectivity correspond to the skills, abilities, intuitions and habits that we develop over time. A good example - and a good way to understand how knowledge characteristically works - is the process of *recognition*. When we see something, we say we 'know what it is' when we recognize it. What has happened is that a phenomenon in front of us, a tiger, say, has stimulated an appropriate pattern of connectivity in the brain - a different pattern for each person, depending on what their previous experiences of tigers (and things related to tigers) have been.

Learning, on this model, is perception. It is the having of the experiences that lead to the formation of a certain pattern of connections in the mind. It is the growing of new patterns of connectivity through repeated exposure to certain phenomena or the repeated performance of certain activities. Learning is thus very similar to exercise. At first it's awkward and you don't know it very well. But with repeated use and practice, it becomes instinct. Habitual. Expert, as described by the Dreyfus model (see, e.g., Dreyfus, H. (2001) On the Internet, and elsewhere).

The 'knowledge' we have is, in essence, the patterns of connectivity we have in our mind. Or, we might say, the knowledge *is* the network. What does this mean? It means that what we think of as 'knowledge' has changed:

- we used to think of knowledge as governed by rules, principles and universals - statements like 'all ducks are animals' or 'rain is caused by evaporation'

- but knowledge actually consists of - and should be described in terms of - patterns and similarities. Knowledge consists of being able to recognize ducks, for example, or to be able to recognize when it is likely to rain. (To really get this, compare section 4.1.2, 'The semantics of similarity' with Tarski semantics.)
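To make the contrast concrete, here is a toy sketch in Python (the features, exemplars and threshold are invented for illustration): the first function treats 'duck' as a rule that either holds or fails; the second treats it as recognition, a matter of similarity to remembered cases.

    # Knowledge as rules versus knowledge as recognition.

    # Rule-based: a universal statement.
    def is_duck_by_rule(animal):
        return animal["has_feathers"] and animal["has_bill"] and animal["swims"]

    # Similarity-based: compare the new case against remembered exemplars.
    def similarity(a, b):
        return sum(1 for k in a if a[k] == b.get(k)) / len(a)

    DUCK_EXEMPLARS = [
        {"has_feathers": True, "has_bill": True, "swims": True, "quacks": True},
        {"has_feathers": True, "has_bill": True, "swims": True, "quacks": False},
    ]

    def is_duck_by_similarity(animal, threshold=0.75):
        # 'Recognition': close enough to patterns formed by past experience.
        return max(similarity(animal, e) for e in DUCK_EXEMPLARS) >= threshold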

When we think of knowledge as 'recognition', we can think of numerous cases where we've seen it in operation before. 'Knowing' is like 'snapping to attention'. Like when you find 'Waldo' in Where's Waldo, for example. Or when you recognize a duck-rabbit image as a duck or a rabbit (again, notice how context and personal variability play a role here). Or any of the numerous 'out of the blue' experiences described by Tom Haskins.

The way networks learn is the way people learn. Network learning is the same thing as personal learning.


3. Personal Learning

By 'personal learning' we mean learning conducted by oneself, for oneself, what Jay Cross means by 'informal learning'. Probably the best indicator of what works in informal e-learning is what works on the web in general. After all, this is where much informal learning is already taking place. And the web is a medium that supports informal, random-access on-the-job training.

Looking at successful websites in general (and looking at usability, information architecture, and other design documents) we can identify three major criteria: interaction, usability and relevance.

By 'interaction' what we mean is the capacity to communicate with other people interested in the same topic or using the same online resource. In a learning environment, interaction means the capacity to speak with your fellow students or your instructor. Of course, online, such roles are not so distinct - your student at one moment may be your instructor the next, depending on the subject.

Interaction is important for two major reasons. First, it helps us understand that there are people out there, that we aren't merely communicating with a machine - what Terry Anderson would call 'presence'. We need presence to help develop cognitive skills and to feel the supportive environment that fosters growth. As any user of one of those automatic telephone answering services can attest, when you want to be heard there is little else more frustrating than speaking to a device that cannot understand you.

But more than the human contact, interaction fosters the development of human content. A bundled training program can give a learner a lay of the land. But even the best designers cannot create lessons for every contingency (and even the best learners are unlikely to sit through them all). This is why stories are so important in learning and so frequently found on internet bulletin boards.

By 'usability' we mean the ease with which desired objectives may be satisfied using an application or appliance. For example, if a site is a search site, 'usability' refers to how easy it is to successfully locate a desired search result. Probably the most usable websites on the internet are Google and Yahoo. And between the two sites, designers have hit on what are probably the two essential elements of usability: consistency and simplicity.

Simplicity is the feature that strikes the user first. Many of us probably recall Google's debut on the web. At that time, it was little more than a text form and a submit button. Results listings were unadorned and easy to follow. Simplicity has long been the path to online success. Amazon made buying books online simple. eBay made hosting an online auction simple. Blogger made authoring your own website simple. Bloglines made reading RSS simple. The web itself is actually the simplification of earlier, more arcane technologies like Gopher, Archie and Veronica.

Consistency is less well understood, but we can get an idea by looking at the links on both Yahoo!'s and Google's current sites. What you won't find are things like dropdown menus, fancy icons, image maps and the other arcana of the typical website. Links on both Yahoo! and Google are not only simple, they are consistent: they are the same colour and the same type throughout the site, for the most part unadorned. They use the ultimate standard of consistency: words - a system of reference with which readers are already familiar.

By 'relevance' we mean the principle that learners should get what they want, when they want it, and where they want it. What learners want is typically the answer to a current problem or enquiry. This is what drives the use of search engines forward, as web users attempt to specify and work through results lists in an effort to state precisely what it is they are looking for. This is what drives the users of community and hobby groups on Yahoo! Groups and other discussion boards to pose increasingly detailed statements of exactly what it is they are trying to learn.

Placing relevant content into exactly the right context at the right time is an art. It involves both aspects of effective content design and aspects of dynamic search and placement. Information needs are not static - they will change with both the situation and the changing capacities of the learner. Placement depends on the precise nature of the request sent by a piece of software or tool, and the ability of a piece of content to respond to that request. Game designers understand this - the game presents different information to users at different points of the game where it will be useful - and usable - by the player.


4. Network Learning

By 'network learning' we mean the principles that inform the development of new connections in a network - in other words, how networks learn. These principles are informed partially through the study of neuroscience and partially through the development of networks in computer science, an approach called 'connectionism'.

Though there are various ways networks can form sets of connections among entities, there are three major types of network learning that are informative in this discussion:

- Simple (or 'Hebbian') associationism - this is the principle that if two nodes in a network are activated at the same time, a connection will form between those nodes. Thus, for example, we recognize similar things (like tigers) by seeing them over and over again.

- Backpropagation - this is the principle that allows the output of a network to be corrected by the sending of a signal back through the network instructing it to either strengthen or weaken the connections that produced the output. For example, a person might receive feedback - positive or negative - on their performance.

- Boltzmann - this is a principle that allows connections to strengthen or weaken by 'settling' into thermodynamically stable configurations (much the way water will settle to a level surface in a pond), and a mechanism (called 'annealing') that disrupts the network of connections, to prompt the settlement into the most stable configuration possible. (See Hinton, G. E. and Sejnowski, T. J. Learning and relearning in Boltzmann machines. In Rumelhart, D. E. and McClelland, J. L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition.)
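To make these mechanisms concrete, here is a toy sketch of each in Python. These are caricatures, not faithful implementations; the sizes, learning rates and the annealing schedule are invented for illustration.

    import math
    import random

    # 1. Hebbian: units active together become more strongly connected.
    def hebbian_update(weights, activations, rate=0.1):
        n = len(activations)
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += rate * activations[i] * activations[j]
        return weights

    # 2. Backpropagation, in its simplest (single linear unit) form:
    # the error signal adjusts the connections that produced the output.
    def backprop_step(weights, inputs, target, rate=0.1):
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output                    # the feedback signal
        return [w + rate * error * x for w, x in zip(weights, inputs)]

    # 3. Boltzmann-style settling: random perturbation ('annealing') lets
    # the configuration escape poor arrangements and, as the temperature
    # falls, settle into a stable low-energy one.
    def anneal(state, energy, neighbour, temp=1.0, cooling=0.95, steps=200):
        for _ in range(steps):
            candidate = neighbour(state)
            delta = energy(candidate) - energy(state)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                state = candidate
            temp *= cooling
        return state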

Most people don't think of themselves as associating, back-propagating, or settling. But the theory of learning described by these mechanisms is in fact relatively commonplace, and can be described (in slogan form) as follows:

To *teach* is to model and demonstrate, and to *learn* is to practice and reflect. To teach is, essentially, to provide or to make possible the having of experiences by students. These models and demonstrations, by virtue of their structural similarities with other models and demonstrations, allow students to form relevant networks of connections. Students then actively begin to learn by practicing - first by imitating, then later by creating something novel. The point of practice is to improve performance by receiving feedback. They then reflect on what they have experienced and practiced - this is (somewhat) analogous to the Boltzmann mechanism.


5. Reliability

Both personal learning and network learning are characterized by dynamic patterns of interactivity in a networked environment. The same principles are at work in each case. But can this process be trusted? Is it reliable?

Networks can be trusted, as James Surowiecki shows in The Wisdom of Crowds. "Many cognitive, coordination and cooperation problems are best solved by canvassing groups (the larger the better) of reasonably informed, unbiased, engaged people. The group's answer is almost invariably much better than any individual expert's answer, even better than the best answer of the experts in the group." It is this wisdom we see not only in the audience picking the right answer on "Who Wants to Be a Millionaire" but also in picking stocks in the stock market and picking governments in elections.

However, not just any network can be trusted. Networks can sometimes run away with themselves - for example, if one person in a community catches a fatal virus, it can spread to every other member, and kill the entire community. Such phenomena are known as cascade phenomena. In the realm of information networks (such as the brain, or a community) these are known as informational cascades. They are like 'jumping to a conclusion' or 'groupthink'.
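A standard toy herding model shows how quickly this can happen (the model and parameters are my illustration, not from the original text): each agent has a noisy private signal, but can see every earlier choice, and rationally copies a visible majority.

    import random

    def cascade(n_agents=100, signal_accuracy=0.7, truth=1):
        choices = []
        for _ in range(n_agents):
            private = truth if random.random() < signal_accuracy else 1 - truth
            lead = sum(1 if c == 1 else -1 for c in choices)
            if lead >= 2:
                choice = 1          # the majority swamps the private signal
            elif lead <= -2:
                choice = 0
            else:
                choice = private    # otherwise, trust one's own signal
            choices.append(choice)
        return choices

If the first two agents happen to draw bad signals, every later agent follows them: the group locks onto the wrong answer even though most private signals were correct, and no further information enters the system.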

Networks avoid informational cascades - and hence, are reliable - only if they satisfy the following four criteria (known collectively as 'the semantic condition'):

- Diversity - Did the process involve the widest possible spectrum of points of view? Did people who interpret the matter one way, and from one set of background assumptions, interact with people who approach the matter from a different perspective?

- Autonomy - Were the individual knowers contributing to the interaction of their own accord, according to their own knowledge, values and decisions, or were they acting at the behest of some external agency seeking to magnify a certain point of view through quantity rather than reason and reflection?

- Openness - Is there a mechanism that allows a given perspective to be entered into the system, to be heard and interacted with by others?

- Connectivity - Is the knowledge being produced the product of an interaction between the members, or is it a (mere) aggregation of the members' perspectives? A different type of knowledge is produced one way as opposed to the other. Just as the human mind does not determine what is seen in front of it by merely counting pixels, neither does a process intended to create public knowledge.


6. Examples

How does the discussion above help us understand and design learning technologies? It shows us not only what to design but also what would make a design better (or worse).

We begin with the principle, 'To *teach* is to model and demonstrate, and to *learn* is to practice and reflect.' This gives us a set of four types of things to create:

- Things that model - such as the wiki, concept maps, diagram tools such as Gliffy, video / 2D and 3D representation, and the like

- Things that demonstrate - such as code libraries, image samples, articles describing thought processes, case studies and stories

- Things that help us practice - such as games, sandboxes, job aids, simulations and environments

- Things that help us reflect - such as presentations and seminars, blogs, wikis, discussion groups, and other ways of sharing and communicating

For any given application in each of the four categories, we can apply the remaining principles to provide an assessment of its likely effectiveness.

For example, consider the wiki. Does it support network learning? Yes - it provides examples to follow, allows correction and criticism, and rethinking and rewriting. Does it support personal learning? Yes, it engages interaction. It supports a genuine voice, experiences, opinions. It has a simple and consistent interface. It is (mostly) accessible where and when I need it.

Is the wiki reliable? Do I have diversity of sources? Yes - but only if there is a threshold number of users. Are the sources autonomous? They can be. And wikis support connectedness with links, etc., and can be open to a large number of contributors. These considerations argue against closed or private wikis, but suggest that wikis can be useful for large groups.

As another example, consider image libraries. They provide examples to follow, but our study suggests that image libraries should have (like Flickr) communication channels, ratings and reviews, and ways to link images, such as tags. And an image library will be 'reliable' if it allows contributions from numerous photographers. We also see that we want people to have individual identities on Flickr, rather than just contributing to a pool, to preserve autonomy and diversity.

As a third example, consider Second Life. We can see why people are attracted to it. It allows us to create examples to follow, corrections and criticisms. It engages interaction and supports a genuine voice. But we also see weaknesses. Is Second Life a good place for reflection? There are limits on reusing what other people have created. It is also semantically weak. There is only one world, not a large number of diverse worlds. Autonomy is limited - you can't even pick your own name - and there are questions about governance. There is connectedness, through SLurls, but it is not clear that it is an open platform.


7. Concluding Remarks

The purpose of this paper was to describe how network learning works and to show how an understanding of network learning can inform the design and evaluation of online learning applications.

Admittedly, there is room for debate and discussion regarding the nature and precise statement of the principles. What remains, however, is that the model of learning as a personal and a network activity provides us with concrete insights into the sort of learning environments that are most likely to be successful online.

Sunday, October 14, 2007

Understanding Me

Responding to David Wiley, Misunderstanding Stephen (and a chance to use a McLuhan title as a post title).

1. Mixing Licenses

David writes, "When a group of learners who are in no way affiliated with a company or any other for-profit organization are prevented from remixing OERs by the copyleft provisions in the GFDL and the CC-By-SA or the CC-By-NC-SA, how is it that this is only a problem for commercial exploiters of open content?"

Leaving aside all the presumption packed implicitly into this statement with phrases like 'group of learners'... Here is my counterexample: http://halfanhour.blogspot.com/2007/10/mixing-content.html

"Are you saying it’s ok for these learners to violate the license terms, because no one will care since they’re not making any money?"

Yes. Because there is no reasonable interpretation of those terms that would see them applied against individual students creating their own learning content for their own personal (and sometimes shared) use.

Now note that this is *very* different from the corporate learning we see, e.g., on a university campus. If the 'group of students' has enrolled in a class, and paid their thousands of dollars of tuition, and is being required to perform such actions, all bets are off. One of the license holders *might* sue the university.

And let me put this point even more forcefully: has there *ever* been a lawsuit over remixing GFDL materials with any CC materials licensed with the SA clause? Has there ever been such a lawsuit that did *not* involve a commercial entity? Should such a lawsuit ever be filed, can you see it not involving a commercial entity?

One (just one) of the places we find ourselves in disagreement is with what appears to be the presumption that you have accepted, that the prohibitions that govern corporate and commercial conduct also apply, and apply with equal force and sanction, to private and noncommercial behaviour. This is a doctrine copyright holders have worked very hard to get pundits to accept, and they have done so by repeating it over and over for decades. But it remains untrue - and I am, it appears, the only voice in your world that says otherwise, which renders the point not merely controversial but quite literally incomprehensible to you.

OK, new post, because I don't want to get the separate points confused with each other...


2. The Principle of Use

Next issue: "And, while I’m asking Stephen to explain things to me, why is it that he seems to think 'open' should really mean 'closed?' In other words, why is it that “open” should mean 'open to everyone except for some people' - specifically, companies? Why should we exclude anyone from what we’re trying to do?"

First, some preliminaries, to lay out some important groundwork.

(a) Companies are not people. I know that there is a legal fiction, the 'corporate person', but it nonetheless remains the fact that, if I bar a corporation from using something, it does not follow that I have barred any person from doing something.

(b) The prohibition of a certain type of use is not the prohibition of any use. The statement being made is that the non-commercial clause *excludes* people from using the content. This is flat-out false. People - even people in corporations - can use the content. They do so all the time - they read Wikipedia, they cut and paste CC photos, the works. What is prohibited is not 'use' but rather a certain type of use.

(Now the usual dodge here is to throw out some pseudo-jurisprudence ('pseudo' because I don't think it has ever been tested in court) to the effect that 'corporate use' prohibits, not a certain type of use (specifically, 'commercial use'), but use by a certain type of entity, specifically, a corporation - but even so, this does not prohibit use by people who work for a corporation, it prohibits only use while they are performing their corporate duties - which is, again, a prohibition of a type of use.)

So I am saying that 'NC' prohibits a certain type of use. And we ask, well, what type of use is that?

And my answer is very simple and very obvious:

It is the type of use that consists completely and solely of blocking other people from using the same thing.

EVERY thing a corporation does with content is consistent with non-commercial use up to the point where they block access to it and start charging money for it. The ONLY thing that differentiates commercial and noncommercial use is that commercial use consists of blocking people from using things.

What this means is that, except with respect to commercial use, GFDL and the CC-By-SA or the CC-By-NC-SA are logically identical. They are both statements to the effect that 'you shall not block anyone from using this content'.

To take this even a step further, strictly speaking, open licenses that allow commercial use embody a contradiction. They are saying "'you shall not block anyone from using this content' and 'you may block some people from using this content'". Anything follows from a contradiction, which is why it is so easy to make it look like GFDL and CC-BY-SA are inconsistent.

The presumption behind a license like GFDL is that the blocking use (i.e., charging money for access) will not eliminate the (contradictory) open use. They can charge a fee for the content, but the people can always find the content for free and use it that way. You can charge people for a Linux distro, but people can always use it for free.

Except... except...

It never works out that way. The commercial publishers always find a way to make sure that the *only* way you can get access to the free content is by paying for it. The only original is in a museum, which charges admission, and prohibits photos. The only access is through an online repository, which carries commercial copies of things only. The material is part of a university course, for which you must pay tuition to attend. Unrelated laws exist that require payment for free content (patents on the statement saying 'this content is free', for example). The list goes on.

Dave continues, "Of course, this hasn't been the case for RedHat and Linux, and it won’t be the case for open content and commercial publishers who get involved in it."

I think this is a very interesting test case to watch. We've seen a number of companies now try to create a business model by creating exclusivity around open source content. Red Hat. SUSE. Ubuntu. My projection is that as these companies consolidate, it will become harder and harder to find free versions of the software. You will (for example) be able to download the software, but only if you agree to pay for a 'service agreement'. Or you will always have to pay for media, such as the plastic CD (or the magazine to which it has been attached). And of course, stores will offer the boxed version for $49.95 - but will never be giving out the free version.

OK, onto the next post...


3. Essentialism and Pragmaticism

Next bit...

"These statements are probably both very true. But neither of these lobbying activities will increase or decrease in correlation to commercial companies’ abilities to distribute open content, and so these are completely irrelevant."

The point, of course, is not that lobbying will increase or decrease, but rather, that even though they could use all of the BBC's content for free themselves, what they *really* want is to block access to content. Because their business model depends on blocking access.

But this points (very indirectly, so don't worry about the connection) to a fundamental distinction between David Wiley and myself.

Specifically (to use really bad labels): Wiley is an essentialist while I am a pragmaticist (NOT a pragmatist).

What does that mean? I can illustrate the distinction by talking about how we determine the meanings of words.

An essentialist will say that the meaning of the word is in the word itself; that the word is, fully and completely, its own meaning. That the truth of a sentence containing the word will be determined in one and only one way, with a fixed result. An example of essentialist thinking is Tarski's theory of truth: "The sentence 'snow is white' is true if and only if snow is white." An example of an essentialist theory of naming is Saul Kripke's "Naming and Necessity".

A pragmaticist, on the other hand, believes that meaning varies with, and is determined by, use. In other words, the meaning of the word isn't in the word at all, but is rather determined solely by how we use it. How do we know, for example, what the word 'safe' means? A person says 'the ice is safe' and then proceeds to walk on the ice across the lake. Wittgenstein's work, especially Philosophical Investigations, describes the theory of meaning as use. (And you may want to look at Kripke's "Wittgenstein on Rules and Private Language".)

Now, I am not saying that David is an essentialist with respect to the meaning of words. He is a very sophisticated thinker, and is well aware of the way meanings can vary and fluctuate over time. He will have, at a minimum, an account that explains this fluctuation, even if he does not believe that it is the basis of meaning.

But he is an essentialist in other areas. I will name two. Now of course only Wiley can explain Wiley, so I am subject to correction here. But this is what I am seeing at the moment.

1. Where is the 'learning' in learning objects? An essentialist will argue that in order for something to be a learning object, it must contain certain specific features. What these features amount to varies depending on the person talking, but they will say (for example) that the learning object must contain learning objectives, it must embody pedagogical theory, it must contain assessment... whatever.

But I respond: what makes a learning object a 'learning' object is not the nature of the object, but rather how it is used. On my view, anything can be a learning object - even (to cite a famous example) a scrap of tissue paper. But as soon as it is used in a learning context, it *acquires* the property of being a learning object.

That's a core distinction between an essentialist and a pragmaticist theory. To the essentialist, a thing is what it is. But to a pragmaticist, a thing can change what it is over time, even if it never changes internally or physically.

There is a core consequence to the two approaches as well. Essentialists often believe that categorization and segmentation are fundamental. An essentialist will believe, for example, that we should have 'learning object repositories' and not just 'object repositories'. That there should be 'learning games' or 'serious games' as opposed to 'games we play for fun'.

The creation of taxonomies is a common tool for the essentialist. Taxonomies - categorizations - reveal (it is presumed) something underlying about the nature of the things being studied - whether it is resources, processes, or people.

2. Where is the 'legal' in law? An essentialist will argue that the word of the law is what defines 'legal' and 'illegal'. He will agree that this can be clarified and contextualized by jurisprudence and common law, but not that actions can be variously legal or illegal under a single extant statement of law (i.e., under a single text of the law, plus the text of interpretations and decisions).

A pragmaticist will argue, however, that what makes something legal or not legal has much more to do with conditions and circumstances than it has to do with the text of the law. For one thing, the text underdetermines possible resolutions, which is why we need to go with intangibles like 'intent'. And for another, a law is composed of two major components: the text of the law, and the enforcement (or the 'use') of the law.

Nowhere is this more evident than in Biblical law. As has often been pointed out by pundits (including my own post, 'The 57 Commandments' http://www.downes.ca/post/113 ), the law, as stated in the Bible, contains constraints that are not enforced today, not even by the most devout. People are no longer put to death for cursing their father. We no longer have slavery and bride-prices. Law - even Biblical law - is interpreted. It varies. It depends on context, even though the words remain the same. And it is intended, not to be followed, but to be used.

Now - again - the only person who can say David Wiley is an essentialist is David Wiley. But from where I sit, this is where our divisions lie. When we talk about how learning ought to be organized, how open educational resources ought to be conceived, and developed, and delivered, how licenses ought to be understood and interpreted, Wiley will fall on the side of the nature of the thing itself, while I will fall on the side of use and consequences. Such, at least, is my take on the situation (which contains enormous room for error).


4. Trademarks

Fourth comment...

On trademarks, David (interestingly) takes the 'use' position. So it appears: "(Trademarking) only prevents you and I from using them on the cover of our own technology books, which we would likely do to try to confuse the public about the origin of the books."

Quite so. But we would then be mistaken, wouldn't we, if we depicted this as an issue about the content. Because there are two ways to look at a violation of trademark law:

1. As the use of a certain type of content in a certain situation ('on the cover of a textbook', as David represented it).

2. As an intent to fraudulently represent yourself as someone else (usually for the purpose of commerce).

We can see quite clearly that what is prohibited in the second case has nothing to do with the actual content of the name or symbol, and everything to do with the criminal use to which it is put.

But it is pretty easy, even in this light, to see the sense of the first interpretation. Because when one of the O'Reilly diagrams is placed on a technology textbook, it is not really reasonable to interpret such use as anything other than fraudulent.

Why would I make such a fine-grained distinction?

Because of the interpretation. In the first case, O'Reilly *owns* a certain right, specifically, the use of certain content in certain circumstances. It has become O'Reilly's property. But in the second, O'Reilly doesn't come to own anything. The enforcement of the law has nothing to do with O'Reilly - it has everything to do with the fraudulent intent of the other party.

All of that said, let me dispense with some of the argumentation on trademarks:

David writes, "O'Reilly puts a lot of brain power and resources into their marketing. Why should I be able to come along and both (a) confuse the consumer and (b) ruin their reputation with a shoddy book that looks like they produced it?"

I certainly agree that one person (or company, for that matter) should not be able to fraudulently misrepresent itself as another.

But - and David should know this - the amount of work and resources they put into their marketing has nothing to do with this. A person could put an equal amount of resources and brainpower into misrepresenting themselves as someone else, but this investment doesn't somehow buy them immunity from fraud laws.

I have put very little effort into creating my own 'brand' - absolutely no marketing dollars, for example. The same is true of most people. But it is still illegal to fraudulently represent yourself as me, or as anyone else.

David mentions the incentives argument in this context. I have nothing against incentives. People should be paid for work (the BIG question, in my mind, is whether people should work to be paid - is our society based on extortion, in which you MUST work, or else you die, or is it based on freedom, where each person has a right to a certain share of the wealth - but that's WAY off topic).

I also have nothing against ownership over something you create. I built a set of bookshelves in my dining room - those are mine, you can't simply walk in and take them. I wrote this paragraph. It is not only mine, in the sense that you can't claim ownership over it, but it is mine, in the sense that you cannot represent yourself as having written it.

But I make a strong distinction between claiming ownership over what you have created, and claiming ownership over what you have not created. And MOST of the paragraph above is not my own creation. None of the words were created by me. The grammar and syntax I use weren't created by me. Numerous turns of phrase are not original to me. The ideas have been expressed by others as well. According to Google, the sentence "I wrote this paragraph" has been written 11,500 times previously - what gall to claim that it is mine!

When we look at what was created, in any given creation, such a minute portion of it was actually created by the originator that the assertion of ownership over it ought to be thought of as the exception, rather than the rule. And - again - our understanding of copyright ought to be not the *ownership* by one over some content or property, but rather the intent and USE of the other, to fraudulently misrepresent themselves.

The 'ownership' of a copyright has nothing to do with the content. It has everything to do with the USE of the content by others. That's why we can avoid questions of what part of the content is actually original, what part is 'essentially' owned, and focus instead on how other people use that content. If the other person is attempting to fraudulently misrepresent themselves, then copyright has been violated.

We used to have a pretty clear understanding of this. We used to understand that, while a company couldn't copy songs and sell them in stores, it was perfectly OK for you and your friends to share songs taped from the radio among yourselves. Somewhere along the line, 'commercial use' and 'personal use' got confused, as though they were the same thing, because the focus shifted from use to 'content' and 'law'. And that's when they started suing grandmothers and infants and college students.

This statement of options simply misrepresents the situation:

"There are three possible choices when it comes to using a public domain work as a symbol for your product - whether for-profit or otherwise (let’s keep in mind that not-for-profits trademark slogans and artwork and other things as well). One - no one should ever be allowed to do so, regardless of what benefit might be realized. Two - first person to do so should receive some protection against masqueraders. Three - there should be no restrictions at all with regard to this specific use of public domain works, regardless of what harm may occur."

This statement of options treats the symbol as something that can come to be owned. But in fact, what we ought to be saying is that anyone can USE any symbol, but that no person (or corporation) can MISREPRESENT themselves as another person (or corporation).

This raises another point where we have disagreed in the past (and it's related to the essentialist/pragmaticist distinction I drew above). On Wiley's view, there is some set of rules that we can draft that governs behaviour (and what is legal) in a certain domain. But on my view, the distinction between 'allowed' and 'not allowed' cannot be expressed as a set of rules (and, indeed, drafting more rules makes it MORE likely, not less likely, that the intent of our rules will be subverted).

David concludes this bit, "Stephen will likely disagree, but I believe situation Two (which happens to be the current situation) is the most reasonable."

In fact I actually go along with situation two, but my reading of it is completely different. There is no sense in which being the 'first' to use a symbol confers some sort of right or ownership over that symbol (if this were true, many of the brand names, images and slogans we know today would be illegal - there is NO WAY these companies were the first to use them to market themselves - the world didn't begin twenty years ago).

The second option is the one that is preferred because it is the one that comes closest to making the statement that "You should not misrepresent yourself as someone else." But this statement is contrary to the intent of Wiley's three-part distinction. He is talking about how we ought to allocate property. I am talking about what constitutes illegal conduct.

Two very very different worlds. No wonder he finds me incomprehensible.


- I'm tired -
- more to come -

Mixing Content

Here is some OCW Content:

A powerful programming language is more than just a means for instructing a computer to perform tasks. The language also serves as a framework within which we organize our ideas about processes. Thus, when we describe a language, we should pay particular attention to the means that the language provides for combining simple ideas to form more complex ideas. Every powerful language has three mechanisms for accomplishing this:

  • primitive expressions, which represent the simplest entities the language is concerned with,

  • means of combination, by which compound elements are built from simpler ones, and

  • means of abstraction, by which compound elements can be named and manipulated as units.

In programming, we deal with two kinds of elements: procedures and data. (Later we will discover that they are really not so distinct.) Informally, data is ``stuff'' that we want to manipulate, and procedures are descriptions of the rules for manipulating the data. Thus, any powerful programming language should be able to describe primitive data and primitive procedures and should have methods for combining and abstracting procedures and data.

In this chapter we will deal only with simple numerical data so that we can focus on the rules for building procedures. In later chapters we will see that these same rules allow us to build procedures to manipulate compound data as well.

And here is some Wikipedia content:

Good style, being a subjective matter, is difficult to concretely categorize; however, there are several elements common to a large number of programming styles. The issues usually considered as part of programming style include the layout of the source code, including indentation; the use of white space around operators and keywords; the capitalization or otherwise of keywords and variable names; the style and spelling of user-defined identifiers, such as function, procedure and variable names; the use and style of comments; and the use or avoidance of programming constructs themselves (such as GOTO statements).

Code appearance

Programming styles commonly deal with the appearance of source code, with the goal of improving the readability of the program. However, with the advent of software that formats source code automatically, the focus on appearance will likely yield to a greater focus on naming, logic, and higher techniques. As a practical point, using a computer to format source code saves time, and it is possible to then enforce company-wide standards without debates.

Indenting

Indent styles assist in identifying control flow and blocks of code. In programming languages that use indentation to delimit logical blocks of code, indentation style directly affects the behaviour of the resulting program. In other languages, such as those that use brackets to delimit code blocks, the indentation style does not directly affect the product. Instead, using a logical and consistent indentation style makes code more readable. Compare:
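(The excerpt's "Compare:" originally led into a code comparison that has not been carried over; purely as an illustrative stand-in - mine, not Wikipedia's - here is a Python case where indentation delimits the blocks and so changes what the program does:)

    # Same statements; only the indentation of the last line differs.
    for i in range(3):
        print(i)
        print("done")    # indented: runs inside the loop, three times

    for i in range(3):
        print(i)
    print("done")        # not indented: runs once, after the loop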

They are mixed. And posted online.

Now if David Wiley is right, the heavens should open up and rain on me.

Waiting...

See, it doesn't matter if I take two bits of open content and mix them in this very obviously bloggish small-scale non-commercial way. Nobody on either license cares.

The license isn't just the text. It's in the intention, the interpretation, and the enforcement of the text.


Saturday, October 13, 2007

Staying on Message

Responding to Dave Cormier:

Good post.

“We are constantly bombarded by subtle media signs that are trying to use our desire for belonging to get us to buy things, to get us to do things… If we aren’t careful, we do things to belong.”

Absolutely right. And ’staying on message’ is a huge part of this. To get us to say the same things, to believe the same things (and hence, to buy the same things, vote for the same things). Which ultimately… hurts us.

The only way to survive is to get to the root of the rot. To define clearly and for yourself what counts as ‘getting ahead’. Are the rewards they offer you enough to convince you to mouth words you know are false? Is the threat of loss of livelihood sufficient to force you to comply with the corporate myth?

OLPC serves a noble cause, but it is not benign. It is an instantiation of a certain myth - one that might be titled “we produce, you consume” - but which is supported by media manipulation. We never read of other mini-computer initiatives. We never see an explanation of why places like MIT go their own way - on MediaMOO, on OLPC, on Sakai, on DSpace - instead of supporting the international community that *already* exists. We don’t hear why UNESCO supports (Sun’s project) Curriki, instead of Wikiversity. We don’t read about open source mobile phone hardware and peer-to-peer communications networks. We see no discussion of why ‘personal pages’ (i.e., pages that are non-commercial) are subjected to blanket filtering, as a class. Because “we produce, you consume”. And the wealth continues to flow in one direction.

When you start ’staying on message’ to appease your employers and your funders, you begin to support this message, this one-way flow of wealth, this undermining of your own independence, your own livelihood, your own freedom.

You can’t make me ’stay on message’ because it costs too much. The minute somebody realizes they can take away your freedom - they do. And nothing you believe or own is yours again.

Thursday, October 11, 2007

Homophily and Association

Responding to Artichoke:
I’ve been trying to find posts of critical analysis on the ULearn07 conference many of our teachers attended in Auckland during the school holidays. I wanted to read any critique of the new learning on offer. So it was disconcerting to read through the 427 ULearn07 Hitchhikr links and find so little analysis and so much flocking sentiment. If I was reliant upon Hitchhikr alone for feedback on the conference I’d be tempted to conclude that ULearn07 attracted educators of such similar minds that they shared the same emotional response to all the experiences on offer - or perhaps I must conclude that blogging about an educational conference induces a Josie Fraser described homophily in educators.

What we are seeing in these communities is classic 'group' behaviour. Groups are characterized by emotional attachment to an idea or cause. Hence the 'me too' posts, as posts consisting of statements of loyalty to the group will be most valued by the group.

Group behaviour commonly accompanies homophily because groups are created by - and defined by - similarity and identity. What's important in a group is that everybody be in some way relevantly the same. Thus it becomes important to obtain statements of conformity (in the case of Hitchhikr, tags) and to define boundaries.

(It is interesting to compare Hitchhikr, which, because it used Technorati, demanded explicit affiliation to a group, with the conference feeds created by Edu_RSS, which, because it harvested RSS feeds directly, required no affiliation - in Edu_RSS you tend to get more criticisms and "outsiders'" perspectives.)

What should be kept in mind is that homophily is only one of several means of creating associations between entities (and hence, clusters of those entities, aka 'communities').

Homophily is, essentially, simply Hebbian associationism. When neurons fire at the same time - that is, when they are stimulated by all and only the same sort of thing - they tend to become connected.

But there are other principles of association. I would like to list four (usually I list three, but I think that the fourth should become part of this picture). I'll give brief examples of each:

1. Hebbian associationism. People are connected by common interests. Affinity groups, religions, communities of practice - these are all examples of similarity-based association.

2. Accidental, or proximity-based, associationism. People who are proximate (have fewer hops between them) are connected. You may have nothing to do with your neighbour, but you're connected. The mind associates cause and effect because one follows the other (Hume). Retinal cells that are beside each other become associated through common connections.

3. Back-propagation. Existing structures of association are modified through feedback. Complain about the 'me too' posts, for example, and they decline in number. Adversity creates connections.

4. Boltzmann Associationism. Connections are created which reflect the most naturally stable configuration. The way ripples in a pond smooth out. This is how opposites can attract - they are most comfortable with each other. Or, people making alliances of convenience.
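To make the first two concrete, a toy sketch in Python (the names, interests and seating are invented):

    # Two ways the same four people can become associated.
    people = {"ana": {"math", "art"}, "ben": {"math"},
              "cho": {"art"}, "dev": {"golf"}}
    seats = ["ana", "ben", "cho", "dev"]   # who happens to sit beside whom

    # 1. Hebbian / homophily: connect those stimulated by the same things.
    hebbian = {(a, b) for a in people for b in people
               if a < b and people[a] & people[b]}

    # 2. Proximity: connect those who are simply adjacent.
    proximity = {tuple(sorted(p)) for p in zip(seats, seats[1:])}

    print(hebbian)     # {('ana', 'ben'), ('ana', 'cho')}
    print(proximity)   # {('ana', 'ben'), ('ben', 'cho'), ('cho', 'dev')}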

Two of these forms are qualitative. They are based on direct experience. They are not critical or evaluative. They tend to lead to groups.

The other two - Back Propagation and Boltzmann associationism - are reflective. They are created through a process of interaction, and not simply through experience. They are critical or evaluative. They tend to lead to networks.

It has been said, by way of criticism of my other work on this subject, that we need the elements of both groups and networks. That may be true. But the problem is, they cancel each other out.

Groups are based on conformity, networks are created out of diversity. Groups are based on compliance, networks are based on autonomy. Groups are closed, networks are open. Groups communicate inwardly, networks communicate outwardly.

Most social networks to date have focused on groups (indeed, they are explicitly about creating groups) and hence on Hebbian and accidental association. It's easy to find similarities. But the similarities are so broad (as Fraser says, sex springs to mind) that the groups thus defined are formless, and when you define the similarities more narrowly, the members of the group have nothing to say to each other (other than to chant the slogans back and forth at each other).

Finding reflective connections is more difficult. We do not have automated back-propagation and Boltzmann mechanisms on the internet - it's possible that we never will. Right now, the only mechanisms we have are messy things like conferences and chat rooms and discussion lists and blogs. And the connections have to be made, not by machine, but by autonomous reflective individuals.

A Truly Distributed Creative System

Posted to idc, October 11, 2007

John Hopkins wrote, on idc:

"You cannot have a truly distributed creative system without there being open channels between (all) nodes."

I don't think this is true.

Imagine an idealized communications system, where links were created directly from person to person. If all channels were open at any given time, we would be communicating simultaneously with 6 billion people. We do not have the capacity to process this communication, so it has the net effect of being nothing but noise and static. Call this the congestion problem.
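To put rough numbers on the congestion problem (back-of-envelope arithmetic only):

    # In a fully connected network of n people, each person maintains
    # n - 1 channels, and the network contains n * (n - 1) / 2 in all.
    n = 6_000_000_000
    print(f"{n - 1:,} channels per person")
    print(f"{n * (n - 1) // 2:,} channels in total")  # about 1.8e19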

This point was first made to me by Francisco Varela in a talk at the University of Alberta Hospital in 1987 or so. He was describing the connectivity between elements of the immune system, and showed that most effective communication between nodes was obtained at less than maximal connection, a mid-way point between zero connectivity and total connectivity. Similarly, in human perception, we find that neurons are connected, not to every other neuron, but to a subset of neurons.

What this tells me is that what defines a "truly distributed creative system" is not the number of open channels (with 'all' being best) but rather the structure or configuration of those channels. And in this light, I contend that there are two major models to choose from:

- egalitarian configurations - each node has the same number of connections to other nodes
- inegalitarian configurations - nodes have unequal numbers of connections to other nodes

Now the 'scale free' networks described by Clay Shirky are inegalitarian configurations. The evidence of this is the 'power law' diagram that graphs the number of connections per member against the number of members having this number of connections. Very few members have a high number of connections, while very many members have a low number of connections - this is the 'long tail' described by Anderson.
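A quick way to see the power law for yourself - a sketch only, assuming the Python networkx library is available, with sizes chosen arbitrarily - is to grow a preferential-attachment graph and count degrees:

    import collections
    import networkx as nx

    # 10,000 members, each new member linking to 2 existing members,
    # with preferential attachment toward already-popular members.
    g = nx.barabasi_albert_graph(n=10_000, m=2)
    degree_counts = collections.Counter(d for _, d in g.degree())
    for degree in sorted(degree_counts):
        print(degree, degree_counts[degree])
    # Plotted on log-log axes, count against degree gives the familiar
    # straight-line power law, with its long tail.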

The networks are scale free because, theoretically, there is no limit to the number of connections a member could have (a status Google appears to have achieved on the internet). [*] Other inegalitarian networks have practical limits imposed on them. The network of connections between airports, for example, is an inegalitarian configuration. Chicago is connected to many more places than Moncton. But the laws of physics impose a scale on this network. Chicago cannot handle a million times more connections than Moncton, because airplanes take up a certain amount of space, and no airport could handle a million aircraft. This is another example of the congestion problem.

What distinguishes the inegalitarian system from the egalitarian system is the number of 'hops' through connections required to travel from any given member to another (this can be expressed as an average over all possible hops in the network). In a fully inegalitarian system, the maximum number of hops is '2' - from one member, who has one connection, to the central node, which is connected to every other node, to the target node. In a fully egalitarian system, the maximum number of hops can be much higher (this, again, is sensitive to configuration).
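The contrast is easy to check with the same library (again, an illustrative sketch with arbitrary sizes): a star is the fully inegalitarian case, a ring a simple egalitarian one.

    import networkx as nx

    star = nx.star_graph(999)    # one hub plus 999 spokes
    ring = nx.cycle_graph(1000)  # every member has exactly two connections

    print(nx.average_shortest_path_length(star))  # just under 2 hops
    print(nx.average_shortest_path_length(ring))  # about 250 hops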

As the discussion above should have made clear, fully inegalitarian systems suffer as much from congestion as fully connected systems; however, this congestion is suffered in only one node, the central node. No human, for example, could be the central node of communication for 6 billion people. This means that, while the number of hops to get from one point to another may be low, the probability of the message actually being communicated is also low. In effect, the inegalitarian system becomes a 'broadcast' system - very few messages are actually sent, and they are received by everyone in one hop.

In other words - maximal connectivity can result in the *opposite* of a truly distributed creative system. It can result in a maximally centralized system.

I'm sure there's a reference from critical theory or media theory, but what would, to me, define a truly distributed creative system is 'voice' (sometimes called 'reach'). This could be understood in different ways: the number of people a person communicates with, the average number of people each person communicates with, the minimum number, etc. My own approach to 'voice' is to define it in terms of 'capacity'. In short, any message by any person could be received by all other people in the network. But it is also defined by control. In short, no message by any person is necessarily received by all other people in the network.

One way to talk about this is to talk about the entities in the network. When you look at Watts and Barabási, they talk about the probability that a message will be forwarded from one node to the next. This, obviously, is a property of both the message and the node. Suppose, for example, that the message is the ebola virus, and that the node is a human being. The virus is very contagious. If contracted by one person, it has a very high probability of being passed on to the next. But suppose the person is resistant. Then he or she won't contract the virus, and thus, has a very low probability of passing it on.

The other way to talk about this is to talk about the structure of the network. The probability of the virus being passed on increases with the number of connections. This means that in some circumstances - for example, a person with many friends - the probability of the virus being passed on is virtually certain. So in some network configurations, there is no way to stop a virus from sweeping through the membership. These networks are, specifically, networks that are highly inegalitarian - broadcast networks. Because the virus spreads so rapidly, there is no way to limit the spread of the message, either by quarantine (reducing the number of connections per carrier) or inoculation (increasing the resistance to the message).
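The arithmetic behind this point is simple. Suppose each contact passes the message on with probability p; then a carrier with k connections fails to spread it only if every contact fails. (The figures below are mine, chosen for illustration.)

    def chance_of_spreading(p, k):
        # Probability that a carrier with k contacts passes the message
        # to at least one of them, if each contact succeeds with chance p.
        return 1 - (1 - p) ** k

    print(chance_of_spreading(0.1, 5))    # ~0.41 for a lightly connected node
    print(chance_of_spreading(0.1, 500))  # ~1.0 for a hub: virtually certain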

In order to create the truly distributed creative system, therefore, you need to do three things (a sketch of one configuration that satisfies them follows the list):

- limit the number of connections for any given node. This limit would be based on what might be thought of as the 'receptor capacity' of any given node, that is, the maximum number of messages it can receive without congestion, which in turn is the maximum number of messages it can receive where each message has a non-zero chance of changing the state of the receptor node.

- maximize the number of connections, up to the limit, for any given node. This might be thought of as maximizing the voice of individual nodes. What this does is to give any message from any given node a good start - it has a high probability of propagating at least one step beyond its originator. It cannot progress too fast - because of the limit to the number of connections - but within that limit, it progresses as fast as it can.

- within these constraints, maximize the efficiency of the network - that is (assuming no congestion), minimize the average number of hops required for a message to propagate from any point to any other point in the network.
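One configuration that satisfies all three conditions - offered as a sketch under my own assumptions, not a prescription - is a random regular graph, in which every node has exactly the same, capped, number of connections:

    import networkx as nx

    k, n = 10, 2_000  # illustrative 'receptor capacity' and network size
    g = nx.random_regular_graph(d=k, n=n)

    # Every node sits exactly at the cap (conditions 1 and 2)...
    assert all(degree == k for _, degree in g.degree())
    # ...and the average number of hops stays low (condition 3),
    # roughly log(n) / log(k - 1) for graphs of this kind.
    print(nx.average_shortest_path_length(g))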

These conditions combine to give a message the best chance possible of permeating the entire network, and the network the best chance possible of blocking undesirable messages. For any given message, the greatest number of people possible are in a position to offer a countervailing message, and the network is permeable enough to allow the countervailing message the same chance of being propagated.

What sort of network does that look like? I have already argued that it is not a broadcast network. Let me take that one step further and argue that it is not a 'hub and spokes' network. Such networks are biased toward limiting the number of hops - at the expense of voice, and with the risk of congestion. That's why, in hub and spoke networks, the central nodes become 'supernodes', capable of handling many more connections than individual nodes. But this increase in capacity comes with a trade-off - an increase in congestion. This becomes most evident when the supernode attempts to acquire a voice. A centralized node that does nothing but reroute messages may handle many messages efficiently, but when the same node is used to read those messages and (say) filter them for content, congestion quickly occurs, with a dramatic decrease in the node's capacity.

Rather, the sort of network that results is what may be called a 'community of communities' model. Nodes are highly connected in clusters. A cluster is defined simply as a set of nodes with multiple mutual connections. Nodes also connect - on a less frequent basis - to nodes outside the cluster. Indeed (to take this a step further) nodes typically belong to multiple clusters. They may be more or less connected to some clusters. The propagation of a message is essentially the propagation of the message from one community to the next. The number of steps is low - but for a message to pass from one cluster to the next, it needs to be 'approved' by a large number of nodes.
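networkx's 'caveman' graphs give a crude approximation of this shape (again, just a sketch; the sizes are arbitrary): densely connected clusters, loosely linked to their neighbours.

    import networkx as nx

    # 20 clusters of 10 members each, with one link per cluster rewired
    # to a neighbouring cluster so the whole network is connected.
    g = nx.connected_caveman_graph(l=20, k=10)

    print(nx.average_clustering(g))            # high: dense local clusters
    print(nx.average_shortest_path_length(g))  # messages travel cluster by cluster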

When we look at things like Wenger's communities of practice, we see, in part, the description of this sort of network. Rather than the school-and-teacher model of professional development (which is a hub and spokes model) the community of practice maximizes the voice of each of its members. It can be called a cluster around a certain topic or area of interest, but the topic or area of interest does not define the community, it is rather an empirical description of the community (and thus, for example, we see people who came together as a hockey team in 1980 continue to be drinking buddies in 1990 and go on to form an investment club in 2000).

Maximally distributed creativity isn't about opening the channels of communication, at least, not directly. It is about each person having the potential to be a member of a receptive community, where there is a great deal of interactivity among the members of that community, and where the community, in turn, is a member of a wider community of communities. Each person thus is always heard by some, has the potential to be heard by all, and plays a role not only in the creation of new ideas, but also, as part of the community, in the evaluation and passing on of others' ideas.

==

[*] I just want to amend my previous post slightly.

I wrote: "The networks are scale free because, theoretically, there is no limit to the number of connections a member could have..."

This should not be confused with the definition of a 'scale free network', which is, specifically, that "a network that is scale-free will have the same properties no matter what the number of its nodes is."

But the relationship between my statement and the more formal definition should be clear. If there is a limit to the number of connections created by the physical properties of the nodes, then the mathematical formula that describes one instance of the network (a small instance) cannot be used to describe all instances of the same type of network.

Wednesday, October 10, 2007

Sleep Apnoea

Responding to Seb Schmoller.

I have sleep apnoea. The key to diagnosis isn't the tiredness - this can be caused by any number of things, including narcolepsy. It is the irregular heartbeat.

As you describe, people with sleep apnoea stop breathing hundreds of times during the night, depriving the brain of oxygen and waking them up. This causes strain on the heart and lungs as they labour to function without oxygen.

The major consequence of sleep apnoea isn't traffic accidents - though I don't downplay the seriousness of driving while sleepy. It is heart failure, caused by years of irregular heartbeats. People suffering from sleep apnoea simply drop dead in their sleep, never knowing what hit them.

The treatment for sleep apnoea can consist either of removing the obstruction through surgery (less common) or of forcing air into the lungs during sleep (more common). The latter requires the use of a Continuous Positive Airway Pressure (CPAP) machine.

Users wear a breathing mask which is attached to the CPAP. The machine monitors breathing patterns and increases air pressure if breathing slows.

The CPAP is expensive ($2000) and the masks ($200 each) are fragile. They are not covered under Canadian health care and only partially covered by group health insurance. So they are a significant expense.

That said, they are worth every penny. I have suffered from sleep apnoea since I was young, but having a CPAP over the last four years or so has made a huge difference in my quality of life. I really notice it on those rare days when I don't use it or when I am having trouble with a mask (as I am now).

It means I have to take the CPAP with me when I travel, so I get to know airport security people very well. The machine is large (1 foot long) and too fragile (and important) for baggage, so it consumes most of my carry-on space. And hotels are stingy with power plug-ins, so setting it up on the road can require acrobatics.

Sleep apnoea doesn't define my life; it's just one of those things people need treated (like glasses and dentistry) and sometimes cannot afford. I personally believe that people who need such treatments should get them as a matter of course, provided by our health care system.

Tuesday, October 02, 2007

Standards

In response to a discussion list post, in which I called IEEE's policy of charging fees for standards a 'scam'.

Dismissing my perspective on this as 'tribal' misrepresents my actual position. I am aware of the discussion on this list and elsewhere about the open publication of IEEE documents. I am also well aware of the many and nuanced versions of 'open' - including those definitions of 'open' floated by corporate interests in order to obfuscate the discussion. With all of that said, 'scam' is the result of my considered judgment on the matter, not some sort of expression of community affiliation (which, to people who know me, would be laughable).

There is no legitimate definition of 'open' that restricts distribution of a document to those who register and pay a subscription fee. Nor can a process that causes a standard to disappear from public view as soon as it is approved be called 'open'. As a participant in the creation of the document (however reluctantly) I have been afforded my own personal copy of the PDF version of the standard. But despite the fact that utterly no expense would be incurred by IEEE were I to post it on my website, I am prohibited from doing so. We are presented with a scenario in which the cost charged to purchasers putatively covers expenses, and yet few - if any - expenses actually exist.

Though it was not the target of my original posting, the process of 'standards building' is one that should certainly be subjected to some examination. No doubt most members of this committee are aware of the dubious votes cast in favour of the (ultimately unsuccessful) ISO standardization of OOXML. I'm pretty sure members of this list could attest to the influence of corporate interests in favour of certain (proprietary) solutions, to the detriment of the community as a whole. Indeed, other members of this list may be representatives of such interests themselves, and will be less concerned with the truth of my remarks than with the suppression of them.

As the members of this list are certainly aware, there are many routes to the creation of standards, including those that ensure that planes fly and food remains safe. It is arguable - and I would argue - that the influence of financially interested parties acts to the detriment of the standards process. Whether or not planes fly is a matter of physics; whether or not food is safe is a matter of biology. These scientific facts should not be amended by lobbying. The result of corporate influence is that we sometimes get standards that are questionable, based on an unreasoned and political denial of physics and biology. The process also favours corporate interests, the commitment of time and resources it requires being beyond the typical un- or under-funded scientist. We get standards (or 'Recommended Practice') that exist only because some company wanted to push a project; I contend that DREL is one such publication, the 'science' (as it were) of digital rights being far from established one way or another.

From where I sit, the inaccessibility of the published standards contributes to the gerrymandering of the process by keeping it out of public view. I've been in and around IMS and ISO and IEEE and the like long enough to have seen this play out repeatedly. When 'agreement' comes down to who has the most money to spend to get their way, then we have distorted the process. The only way to fix this starts with open process and open publication, which is why I sometimes appear 'tribal' when expressing my opposition to the status quo.

Update

Response to a comment, posted October 3. Sorry I can't put the comments here, but they're being posted on what is essentially a closed list.


People who know me know that my outrage is not sudden. My opposition to the IEEE policy has been ongoing, and members of this committee have heard its expression from time to time over a number of years.

I find it flattering that your understanding of my position is derived from Richard Stallman. However, open publishing is a bit different from open source. It is a well understood and widely adopted practice. Leading voices include Stevan Harnad and Peter Suber. There are thousands of open access journals being published, not to mention millions of other resources under various Creative Commons licenses. The IEEE may not concur - which is why I object - but it is simply and blatantly false that "everybody else" does not concur.

Nothing prohibits IEEE from open publishing. Nothing prohibits it from allowing others to post copies of IEEE standards. The purpose of IEEE is not to make money, despite this representation. It is to create and publicize standards - something it is currently failing to do effectively because of closed processes and publications.