Saturday, September 29, 2007

Notes on Audio and Videocasting

Just notes - because it's good to share

Windows

First step: recording audio. See Audacity.
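
As an aside (not part of the original notes): the basic "record audio" step can also be done in a few lines of Python, assuming the third-party sounddevice and soundfile packages are installed (pip install sounddevice soundfile). A minimal sketch, not a replacement for Audacity:

    # Minimal sketch: capture a short clip from the default input device,
    # then save it as a WAV file that can be edited in Audacity.
    import sounddevice as sd
    import soundfile as sf

    SAMPLE_RATE = 44100   # CD-quality sample rate
    DURATION = 10         # seconds to record
    CHANNELS = 2          # stereo

    clip = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=CHANNELS)
    sd.wait()             # block until the recording finishes

    sf.write("recording.wav", clip, SAMPLE_RATE)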

Second step: recording both sides of a conversation. There are a bunch of third-party applications. See http://webcastacademy.net/presentations/upei2007

  • Recording Conversations with remote participants
http://webcastacademy.net/node/1083
http://labnol.blogspot.com/2006/06/how-to-record-skype-conversations.htm

Third: Streaming

None of these programs capture both ends and push it out.

Use Virtual Audio Cables - Tools

http://webcastacademy.net/webcaster-kit

Then stream out to a Shoutcast server using SimpleCast. (Or use an open source program called Icecast, but it doesn't support mp3 well.)

or Freecast - http://freecast.co.uk

Mac

You're essentially using Nicecast - http://www.rogueamoeba.com/nicecast/


Screencasting

Jing - http://www.jingproject.com


Screen sharing - live sharing of computer screens

http://yugma.com


Video streaming

ustream.tv - free hosted service


To connect audio and video to computer - Pinnacle device - takes audio and video cables in one side, and provides a/v computer inputs out the other side.

Thursday, September 20, 2007

Weights and Measures

Responding to this post about the metric system from Doug Noon:

In Canada, the metric system was implemented gradually while I was in school - first distances and temperatures, then weights and measures. The parts that were used, I learned. The parts that were not yet in use, I didn't learn. The result is that my intuitive sense of measurement is a mixture of the two:

- temperatures - I understand Fahrenheit, but if asked, will always respond with a Celsius temperature, and it bugs me when weather maps (e.g., from Weather Underground) are all in Fahrenheit, so much so that I only use Environment Canada online weather maps.

- distances - kilometers only. I understand miles, but they're too long. Driving 100 km in an hour is something that makes sense to me. Shorter distances? Meters. There's a large grey area there - I don't have a good intuitive idea of 20 feet or 60 yards, whatever. And please don't give me a length in 'football fields'.

- height - feet and inches. I know I'm just a hair under 6 feet. No idea what that is in centimeters. When I'm measuring wood for carpentry, I will use both inches and centimeters - whichever way the ruler is facing, I don't care.

- weight - pounds and tons. I know a kilogram is 2.2 pounds. But I don't know how many kilograms I weigh (I'd have to calculate it). I know what a gram is, but I don't use grams any more. There's 28 grams to an ounce. But I don't really know how heavy an ounce is.

- liquid measures - don't ask me what a fluid ounce is, I have no idea. I understand liters. Quarts are like liters, only a bit bigger. I know gallons, but I never have enough of any fluid to actually have a gallon of anything. Yet, when describing fuel efficiency, I only understand miles per gallon - 8 is bad, 40 is pretty good. I haven't a clue what liters per 100km looks like. What is bad? I don't know. 5? (See the quick conversion sketch after this list.)

- land - acres and square miles. But, mostly, acres. Because we had a 1 acre lot when I was a kid. And the land where I lived was divided into quarters (160 acres), 4 of which made a square mile. That's what I believe, anyways. Hectares? Funny square acres.

- energy - my mother dieted so I understand calories. Only in theory, however. I actually had to look up a couple of weeks ago how many calories people should consume in a day, 2000 - 3000. Which makes 60 calories good, 400 calories bad. Unless you're starving, in which case good and bad are reversed. The equivalent in joules? Please, I don't even know where to begin. That said, I consume power in watts and kilowatts, and pay for it by the kilowatt hour. Every appliance is compared to my gold standard, the 2500 watt hair dryer I bought as a kid. Horsepower? No idea what that is. Cars have, what, 12?

- pressure - I read a lot of science fiction, so I understand 'one atmosphere'. Kilopascals? Forget it. Millibars? Forget it. For pressure, I understand 'low' and 'high'. For my bicycle tires, I use PSI - but I had to look up a few months ago how many PSI to fill my tires (65). How many PSI is the atmosphere? Not a clue.
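
A quick conversion sketch, added here as an aside (not in the original post), in Python. The factors are the standard ones; the fuel figure assumes imperial gallons, which is what 'miles per gallon' traditionally meant in Canada:

    # Quick conversion sketch: kilograms to pounds, and miles per imperial
    # gallon to litres per 100 km.
    KG_TO_LB = 2.20462              # 1 kg is about 2.2 lb
    LITRES_PER_IMP_GALLON = 4.54609
    KM_PER_MILE = 1.609344

    def kg_to_lb(kg):
        return kg * KG_TO_LB

    def mpg_to_l_per_100km(mpg):
        km_per_litre = (mpg * KM_PER_MILE) / LITRES_PER_IMP_GALLON
        return 100.0 / km_per_litre

    print(kg_to_lb(1.0))             # about 2.2
    print(mpg_to_l_per_100km(8))     # about 35 L/100km - bad
    print(mpg_to_l_per_100km(40))    # about 7 L/100km - pretty good

So, for the record, 5 L/100km is not bad at all - it is better than 40 miles per gallon.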

Tuesday, September 18, 2007

What Is Left?

I have always described myself as 'left' and I have been described as everything from a Marxist to a 'moderate socialist and radical democrat'. I have run politically from the left side of the spectrum, and when asked to describe my politics, will either choose 'very liberal' or 'socialist', depending on what the choices are. So I am in a position to offer a response.

But a response to what? "What is left?" That is, in the first instance, an empirical question. What do the people who collectively label themselves 'left' believe, in the aggregate? But the result is not likely to be anything I believe, nor is it very likely to be anything a majority of left-leaning people believe, unless it is defined at such a level of vagueness as to be uninformative. Moreover, the answers from such a survey will vary with the questions asked, as Burmeister's post shows. Is 'left' the party of change? Looked at from one perspective, perhaps. Looked at from another, perhaps not.

In any case, we are in such a state of flux that it probably makes more sense to speak, not of what the left is, but what it ought to be. Not so much from the perspective of political advocacy - as in, 'we ought to save the environment' and 'we ought to support labour' - but as in what ought to be thought of as the foundational views of the left. Because it's very easy to get this wrong, and misleadingly wrong, by glossing over what is the fundamental distinction.

To take an example, we hear that the left is the party of the poor, the 'working class', while the right is the party of the rich. Such a definition automatically disenfranchises any person of means from leaning left. But more importantly, and more significantly, it separates people from their aspirations. Mostly people do not want to be poor, do not want to be a part of the working class, and insofar as the left represents such people, it is because being poor or being a worker is associated with bad things, like poverty and death. The left needs to be able to speak to a person's aspirations as well and as clearly as it does to their status or to their class, and identifying supporters of the left as 'the working class' fails to do this.

Or another example, a commonly made distinction between right and left, specifically, that the right is about 'individuals' while the left is about the 'collective'. It is very tempting to take this definition, and the term 'socialism' suggests that the left concerns itself mostly with society, mostly with the affairs of groups, or more, in advocating the rights of groups over those of the individual. But to simply state the distinction thus is to miss some important differences between the right and the left. Because the right will advocate for the rights of the group when it suits them - when promoting nationalism and patriotism, for example, or when appealing to people's religious convictions. And it leaves the left open to a straw man attack, that it is not concerned with the rights of the individual, which is simply false.

Another common way of distinguishing between left and right is to characterize the left as the party of 'equality' while thinking of the right as the party of 'privilege'. This certainly has historical origins. But it again allows for misinterpretation. We are familiar with the misuse of 'equality' as a means of preserving privilege, for example. Opponents of affirmative action argue against the practice on the grounds that it results in 'unequal' treatment. The simple doctrine of 'equality' needs to be refined, to become something like 'equality of opportunity' or 'proportionality'. With each refinement the basic justice of the position is obscured, as the practice seems to allow more and more unequal treatment. And then there is the clincher, that to any simple observation, people in fact are unequal, and that no amount of 'social engineering' is going to change that.

And yet, if we reject all three principles, we get a rich individualist who believes his position is the result of innate ability rather than social injustice. Exactly, it seems, the opposite of what we would call a representative of the left. And therefore, while these principles cannot stand as definitive of the left, it certainly seems that there is something captured by these principles that we would call characteristically left.

If I had to characterize what it is about right-wing political philosophy that bothers people from the left, I would characterize this aspect as the inherent 'atomism' of the right. This is the atomism that informs classic liberalism, the idea of each individual agent striving for his or her own personal advantage. This atomism is what informs social darwinism, the 'survival of the fittest' mantra we so frequently hear from the right (which is why they also declare that collective action is 'unfair', since it distorts this mano-a-mano competition). This is the individualism of Ayn Rand and the libertarians, the elevation of 'genius' as a natural category, the exaltation of CEOs and political leaders, the suggestion that people - even the children - are in their station in life because of choices they have made. "Take responsibility" and "don't be a victim."

Atomism is an important doctrine and is at the heart of most of our contemporary structures and institutions, which is why parties of the right are today characterized as 'conservatives'. Atomism is, at heart, the doctrine that the qualities of the whole are not the consequence of properties of the whole, but rather, are a consequence of the properties of the individuals that make up the whole. Just as, say, the nature of the bar of lead is not the 'leadness' that the bar as a whole somehow possesses, but rather, is the result of the nature of the individual lead atoms that make up the bar. In the same way, if a society is, say, 'just', it is not because of some obscure property of being a 'just society', but rather, is made up of the 'justice' in each individual member of society.

This way of viewing the right wing allows us to understand some of its most perplexing attributes. We hear, repeated over and over, for example, the doctrine of individuality. The right criticizes the left because of the left's advocacy of 'central government' - and then turns out to be the party encouraging nationalism and patriotism and conformance to a central doctrine, whether it be religious truth or hatred of a common enemy or instilling the values of good citizenship. The right also criticizes the left for favoring rules and regulations - for 'fettering the marketplace' - for example, and yet at the same time, characterizes itself as the 'law and order' party, insisting that criminals do maximum time. How do we make sense of this?

As follows: the right, on the one hand, celebrates the individualism of each atom, but at the same time, depends on the purity of each atom. If the 'leadness' of a lead bar is made up of the 'leadness' of each of its atoms, then atoms that are not lead make the bar less lead-like. The nature of a lead bar comes from its individual atoms, but the value of a lead bar is derived from each of its atoms being the same. This is why it is essential, from the perspective of the right, to preach the apparently contradictory doctrines of individualism and conformity.

Again, I want to stress that this theory is what informs our social and political structures. Essentially, our institutions are made up of 'atoms' of similar types. Hence, in a democracy, each person receives a vote, but the government is formed by people who all vote the same way. Similarly, religious faith is a personal decision - this is enshrined in various constitutions - but religious faiths are characterized as people who have all reached the same religious point of view. And nations, being composed of people who have the same race, the same language, the same culture, are typically depicted as people who have the same religion as well. Which is where we get not only the idea that the United States (say) is a 'Christian country', but also, the idea that it would be stronger if it were more Christian.

We can see here how the depiction of the left, or of left-leaning causes, as a 'mass movement' simply plays right into this picture. By accepting the idea of 'mass movement', we are either accepting that political movements are made up of masses of same-thinking individuals, or we are presenting some sort of (fictional) 'general will' that will be ascribed to, or imposed in some way, on the individuals comprising the mass. And there then comes to be little to choose between a fiction imposed by the left and some set of principles, whatever they happen to be, articulated by the self-designated representatives from the right. Moreover, since the representatives from the right have generally some advantage of wealth or position, it appears that there must be something to what they are saying, because they have achieved this position and wealth.

Historically, the left has achieved success by emulating the strategies and tactics of the right. The difference has been in the determination of the beneficiaries of those strategies and tactics. Thus, when the industrialists made themselves wealthy by owning the means of production, members of the left sought to seize the means of production. When the right wing resorted to military means to assert its dominance, the left resorted to military means in kind. When the right adopted the mechanism of the democratic vote (supplemented by influence-generating systems to sway voters) the left also adopted the mechanisms of political parties and propaganda. But what is important is that none of these strategies - not unionism, not communism, not social democracy - defines the left.

So what does? My own view is that, philosophically, we could see leftism as a blend between the ideas of Immanuel Kant and John Stuart Mill. And, specifically, the following: from Mill, the idea that the greatest social good is achieved when each person is able to pursue his or her own good in his or her own way; and from Kant, the idea that each person is, and ought to be treated as, an end, and not a means. These are, I think, principles with which proponents of the left would agree - and more, principles with which, when pressed, proponents of the right would disagree. Because, while at first glance these principles appear to be atomist principles, they are not, and they are not in the way that signifies what (to people on the left) constitutes the fundamental flaw of conservative philosophy.

We need to look at Rousseau to understand this. "Man is born free, and yet everywhere he is in chains." Why is this? Where does our imprisonment, our enslavement, come from? From the requirement to be the same. From the requirement that we constitute, in our atomism, one of the whole. When proponents of the right argue for, say, the freedom to pursue the good, they do not mean a person's own good, pursued in his or her own way, but rather, some sort of absolutist declaration of what constitutes 'the good', whether it be derived from religion or from a misguided sense of national purity. And when proponents of the right consider the 'value' of a person, they see this value as conditional, based on a person's natural abilities, based on whether they 'contribute to society', based on whether they are of the right faith or the right nationality. People are means to make a company great, to make a nation great.

The fundamental principle of the left is that each individual person is of value in and of him or her self, that this value is unconditional, and that derived from this value ought to be certain (socially constructed) rights and privileges. This is the origin of the doctrine of equality, this idea that each person ought to have an equitable share of the pie, a fair shot at the brass ring, or a right to a say at the meeting. This is the origin of the idea that social systems that leave people in poverty, in starvation, dispossessed and enslaved, are fundamentally wrong - not because of some higher principle, about the nature of the world or of society, but because of the simple truth, that each person, including the indigent, has a basic and fundamental standing in society.

But why? You may ask. The answer is two-fold. First, in atomism, the whole may never be more than the sum of the parts. Atomism is a quantitative philosophy. We don't ask what, where or why, we ask, "how much?" And second, because atomism is a philosophy based on sameness, the whole may never be better than the best of its parts. The best a bar of lead can get is to be as good (as pure) as the best lead atom in the bar. The point of atomism is not merely individualism, but that some individuals in society set a standard, to which the rest ought to aspire. But also, it is the idea that the decisions made by such a society, will be the decisions made by individuals, with which the rest will concur. The best decision, the best ideas, a society can have, are the ideas articulated by an individual, to which the rest will adhere.

But we know the weaknesses of each of those two parts. First, we know that (to use the popular slogan) the whole is greater than the sum of its parts. That a population - or even a mass of metal - has properties that are over and above the properties of its individual atoms. And second (and crucially), we know that these properties are better than the properties of any individual in society. 'Better' not so much in the sense of 'ethically good' (though a strong case could be made for this) but, more concretely, 'better' in the sense of 'more accurate'. If the ship of state is governed by one person, the captain, then it has a greater probability of hitting an iceberg than if it is governed by the whole.
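
The 'more accurate' claim can be illustrated with a toy simulation (my own sketch, added as an aside; simple averaging is of course only the crudest possible form of aggregation). A thousand people each make a noisy, independent estimate of some quantity, and the average of their estimates lands far closer to the truth than the typical individual estimate:

    # Toy illustration: the average of many independent, noisy estimates is
    # usually much closer to the truth than a typical individual estimate.
    import random

    TRUTH = 100.0      # the quantity being estimated (distance to the iceberg, say)
    PEOPLE = 1000      # number of independent estimators
    NOISE = 20.0       # standard deviation of each individual's error

    estimates = [random.gauss(TRUTH, NOISE) for _ in range(PEOPLE)]

    typical_individual_error = sum(abs(e - TRUTH) for e in estimates) / PEOPLE
    crowd_error = abs(sum(estimates) / PEOPLE - TRUTH)

    print("typical individual error:", round(typical_individual_error, 2))  # roughly 16
    print("error of the crowd's average:", round(crowd_error, 2))           # usually well under 2

The paragraphs that follow argue that a network does much more than average; but even this crude aggregation shows how a whole can be more accurate than its typical member.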

What the left has lacked for many years (what it has always lacked, in my view) is an articulation of why the whole is greater than the sum of the parts, of how that whole comes to be expressed, and in a context where the fundamental principles, just articulated, are essential elements of that articulation, and not accidental correlates that could, in a contingency, be ignored. In other words, it needs an articulation of why the whole is greater than the sum of the parts in such a way that the essential liberty and standing of the individual is never threatened.

And that is accomplished by defining the whole as the result of autonomous, thinking, communicating, rational agents, rather than merely passive elemental atoms. Where the whole is created, not by what we are, but by what we do, such that the actions of each person are seen always as contributing to the whole, not merely 'adding to' it or 'subtracting from' it. By viewing, in other words, society, not as a mass, not as a machine, but as an ecology, a network.

In an ecology and in a network, the properties of the whole are not created from the sameness of the components, but rather, as a result of the interactions of the components. Consequently, the properties of the network are not contained in any one individual in the network. The network is not some big copy of an individual of the network. Nor does it operate under the guidance and direction of one entity in the network. Think of a forest, for example. It is made up of a mixture of trees and shrubs and birds and bears. There is no one part of the forest that the rest copies. There is no 'sameness' in a forest, except at very superficial levels. And the forest isn't governed in any sense by any of its members, or by anything. The forest becomes what it is as a result of the interactions of its members, such that every entity in the forest contributes to what the forest becomes.

From this perspective, we can now begin to articulate a political position, based on the premises describing what makes for an effective network - or, say, what makes for a healthy forest. These are principles that govern the effectiveness of networks in general, of which a society is only one example, and hence can be described, and studied, empirically. Hence, what I offer here is only my first estimation, based on an understanding of mathematical, computational and physical structures of networks. These are properties of the individuals in a network - and I think we can see that the combination of these four properties adds up not only to a formula for successful networks, but also to a formula describing the basic dignity of each member in society.

First, diversity. A successful network fosters difference, not sameness. There is no presumption of a 'pure' prototype, a creed or a faith, a doctrine or fundamental set of principles to which all members of a society must adhere. One of the fundamental principles of Marxism is indeed a principle of diversity, not equality: "from each according to his ability, to each according to his needs." Intuitively, we understand this. We know that a forest needs to be composed of a variety of trees and animals; when it is composed of a single type of tree, and few animals, it cannot survive, and must be tended, and even then is more likely to be wiped out by a virus or disease. Diversity is what Richard Florida writes about when he talks about the 'Creative Class', the most productive element of society.

Diversity is what propels some of the major planks of leftist thought: the idea that we live in a multicultural society, the idea that we ought to encourage and endorse people of minority faiths, values and statuses. The encouragement of diversity is part of what propels leftists' celebration of gay-lesbian causes, aboriginal rights, minority rights, and more, while at the same time encouraging people in the expression of their religious beliefs, not to mention expressions of culture and identity in art, music and drama.

Second, and related, autonomy. Here the individual knowers contribute of their own accord, according to their own knowledge, values and decisions, rather than at the behest of some external agency seeking to magnify a certain point of view through quantity rather than reason and reflection. Without autonomy, diversity is impossible and sameness becomes the predominant value of society. Autonomy is fundamental to human dignity, for without it, a person is unable to contribute in any meaningful way to the social fabric.

Autonomy underlies the left's interest in social justice and equality. People who live in conditions of poverty and dependence cannot express their will. The right wing often depicts the free market merely as the (best possible) means to distribute resources; however, the market, as it now exists, has become the means through which we employ scarcity in order to create relationships of power, where one person, the one with the resources, is able to deprive the second person of his or her autonomy. Wage-labour isn't simply about the inequality of resources, it is about the capacity of one party to impose its will on the other. Leftists believe that market exchanges are and ought to be exchanges of mutual value, not conditions of servitude imposed by one against the other, and hence seek the redistribution of resources in order to maximize autonomy.

Third, interactivity. Knowledge is the product of an interaction between the members, not a mere aggregation of the members' perspectives. A different type of knowledge is produced one way as opposed to the other. Just as the human mind does not determine what is seen in front of it by merely counting pixels, neither does a process intended to create public knowledge. Without getting too far from the topic of this discussion, knowledge is not merely the accumulation of facts and data, nor even the derivation of laws and principles, but rather, is the recognition of states of affairs. Recognition is not possible without interactivity, because recognition entails an understanding of the relations between points, which requires several perspectives on those points.

Interactivity lies behind the leftist insistence on matters of process. It is not simply the case that 'the results matter', because without process, getting the right results is a matter of luck, not policy. An interactive process values and respects the rights of each of its members to speak and be heard. It is therefore a statement of the fundamental freedoms of society - of expression, of the press, of assembly. It is also the value that fosters respect for the principles and structures of society, the laws and institutions. It's why we have trials - where the matter can be discussed and brought out into the open - rather than mere rulings, and why things like arbitrary detentions and sentencing are contrary to the principles of a just society.

Fourth, and again related, openness. This is, in effect, the statement that all members of society constitute the governance of society. From a network perspective, the principle of openness entails a mechanism that allows a given perspective to be entered into the system, to be heard and interacted with by others. It is not simply a principle of connectivity between the members - though it is in part that - but also the principle that there is no single channel or proprietary mechanism through which that connection is established. It is, at its base level, at once the principle that there ought to be a language in which to communicate, but also, that no person should own that language, and that there ought not be any particular language.

In computer science openness means open standards and open source software; in political discourse it means open processes and accessible rule of law. It means that the mechanisms of governance ought to be accessible to each person in society, which results in policy running the gamut from electoral spending limits to voting reform to citizen consultancy and open government, and ultimately, direct governance by the people of their own affairs, self-governance, in the truest sense.

These may not be the only principles, and they may not be the most fundamental, but I offer them as a statement of what it means, at the most elemental level, to be left. These principles offer us some sort of hope in society, a hope that we as a whole can be better than the best of us, but also with the understanding that this is made possible, not through repression and control, but only through raising each and every one of us to the highest level possible, to participate most fully and most wholeheartedly, in society.

Friday, September 14, 2007

The Napoleon Complex

I just want to capture my part of a discussion that took place on Matthew Tabor's website.

To introduce this, I'll quote part of the post I was responding to (I don't want to cite everything in the post because his site is not licensed under Creative Commons):

The Journal of Common Sense in Education has a new submission ready for peer-review: rightly or wrongly, appearance is a factor. I can’t be too bothered when I read about wasted time and resources dedicated to questions that could be answered properly by anyone who has a) left their home in the last 20 years and/or b) has firing neurons. Why?

Because I’m pleased that third-rate researchers are putting time into trivial subjects instead of mucking up research that actually matters. It’s the lesser of two evils.

EdWeek hops on the bandwagon:

Researchers Julia Smith of Oakland University in Rochester, Mich., and Nancy Niehmi of Nazareth College of Rochester, N.Y., analyzed test results and other data for nearly 9,000 boys across the country who started kindergarten in 1998. They found that kindergarten teachers systematically perceived boys who were shorter than average—or even just shorter than the other boys in their class—to be less skilled in reading, mathematics, and general knowledge than their test results indicated.

Well done.


Here is my response:

I don’t see why you consider this to be useless research.

First of all, teachers shouldn’t discriminate on the basis of height, don’t you think? Because height has nothing to do with academic ability.

You may call such discrimination a “fact of life” but it remains true that this is a learned behavior and one that perpetuates inequities in the school system.

It’s also the sort of behaviour that may change over time. So while we knew that teachers discriminated on the basis of height 20 years ago, it may be that they no longer do so. You can’t tell unless you check to find out.

Lots of things like this change. A study in 1950 might have found that most teachers are racists. You wouldn’t call a similar study conducted 20 years later useless because “it’s something we learned 20 years ago”.

Indeed, the idea that we can say of any demographic that “we learned it 20 years ago” betrays an almost abnormally underdeveloped understanding of humans and society. Do you think that if we’ve studied attitudes once that we’ve established it for all time? Weird! Odd!

We need to survey for attitudes and behaviours on an ongoing basis, because these change, and because inappropriate attitudes and behaviours, especially on the part of teachers, can cause great and unnecessary harm.

Tabor replied, in part:

The study wasn’t about whether we should discriminate on the basis of height. It was about if we do, which is a very different thing and is not to be confused with a value judgment....

Your point about re-checking the status of discrimination is valid; even so, I’d put “height discrimination” about 384th on the list of important issues in education.

There is not a compelling case that the status of height discrimination or the factors contributing to it have changed dramatically between 1987 and now. A compelling case may warrant a re-examination of the situation; that’s the basis for most race-based research, the situational foundation of which is constantly in flux. This is not the case with height discrimination with 5 year olds.

My response:

> I’d put “height discrimination” about 384th on the list of important issues in education.

Fine.

We’ve got plenty of researchers, enough that we can get down to things that are 384th on your list.

Which is a good thing, because for other people, such things rank a lot higher.

> There is not a compelling case that the status of height discrimination or the factors contributing to it have changed dramatically between 1987 and now

Well, interestingly, the only way to create such a case would be to research it.

That’s the thing about research - you do it to determine *whether* there is a compelling case. You don’t do it only *if* there is a compelling case.

Finally…

Both you and a commenter spend time talking about how you hate to ‘fritter and waste’ time on such ‘nonsense’. Then why are you writing about it?

Why not focus on the things that are important to you? If we have researchers working on *those*, then what do you care if other people are working on other things?

Nobody expects to have the entire force and weight of the government or the national research infrastructure lined up behind *your* priorities. At the very best, the most you can claim is some part of that.

It’s like police work. If your house is burgled, they’ll send some officers to investigate. But they won’t send the entire force - and it would be pretty trivial to complain that some police out there somewhere are investigating crimes that you consider to be unimportant.

So unless you can show that the height-researchers are directly diverting resources away from much more important things, then you don’t have a case.

And if you don’t have a case, then you’re just being nasty for no good reason, to make political points or something, I don’t know.

Another response from Matthew Tabor, greatly clipped:

I don’t adhere to the relativism that you seem to - that is not an insult, just an honest way of comparing our views. Though some victim of height discrimination might rank the issue above 384th, that doesn’t mean that the rest of us should bow to their interests...

I judge research by its value...

My response:

> I judge research by its value.

No, you judge research by what *you* value. Big difference.

Discrimination doesn’t bother you. You think it’s something short people just have to live with. So research into it is trivial.

You represent your values as absolutes:

> in no way will I suggest that the average women’s studies dissertation carries more value than the investigation of a cheaper cure for malaria.

Leaving aside the basic dishonesty of comparing “the average” something with the “cure” of something…

It may be that understanding the systematic discrimination against half the population will produce many more returns for society than the savings obtained by treating malaria more cheaply.

It may even be that the cheaper cure to malaria is made possible by a woman who is earning a PhD as a result of a program that was created as a consequence of the dissertation author’s work.

We don’t live in a world of simple causes and effects where everything is predictable. We live in a complex world where a wide range of factors go into producing results.

The idea that you have some sort of privileged position from which to judge which studies will produce good results (even if ‘good results’ = ‘things you want’, which is itself questionable) is unsustainable.

Another response from Matthew Tabor, again clipped heavily:
You’ve misrepresented me quite badly. At no point did I say that discrimination didn’t bother me or that I “think it’s something short people just have to live with.” There are many real problems in education and elsewhere - I don’t question that. I do question the priorities we sometimes have in addressing those problems.

Comparing an average soft-discipline social science dissertation to one that delivers a solution to a specific, pressing need was done purposely. It was not dishonest; the point is that those two aren’t an apples-to-apples comparison.

I hope that you don’t sincerely believe that understanding systematic discrimination based on height is likely to produce more returns - or be a more valuable moral goal - than curing a disease that kills over 1 million per year, most of whom are infants, children and pregnant women....

And finally, my last response:

The reason I took the tone I did is because of what I felt was unwarranted nastiness in the original post. Like this, for example:

“I’m pleased that third-rate researchers are putting time into trivial subjects instead of mucking up research that actually matters. It’s the lesser of two evils.”

Given that you have actually named the researchers in question, your attack on them as “third rate” and on their work as “trivial” suggested to me an intent not, as you say in a comment, “to stick to the merit [or lack of merit] of the arguments.”

If your intent was a serious evaluation of their arguments, then I apologize for assuming that it was motivated by other concerns. It felt to me that your intent was to discredit the source and the topic, and if you return to your original post, you may see why I may be forgiven for leaping to this conclusion. Again, if that was not your intent, I apologize.

Treating the issue substantively, I am still not convinced by your evaluation of the usefulness of the research in question. It appears to me that the evaluation criteria are not well formed.

Let me be specific. You write that a study of height discrimination is obviously less important than “curing a disease that kills over 1 million per year.” By this, I take it that your criterion for evaluating research work is the number of lives saved by that work.

Now maybe you see what you said differently. You took me to task, after all, for asserting that discrimination doesn’t bother you. Yet you also said that research into it is “trivial”. The conclusion I draw is pretty natural, and when taken in conjunction with your other comments, deductive. I do not misrepresent you if I assign to you the consequences of your statement. And in a similar manner, your statement directly implies that you consider ‘the number of lives saved’ to be the criterion by which we judge research.

If so - and I’m sure you will agree with me that there is significant room for questioning that criterion - then it seems to me that your support for the one sort of research over the other simply doesn’t follow.

The example you chose is ‘finding a cheaper cure for malaria’. Now it may be that lives are saved by lowering the cost of treatment, but unless the savings are dramatic, then that number will be nowhere near the million lives being lost to the disease.

But more to the point, when we ask, “what is killing those people,” a more complex answer emerges. Because it is not *simply* that they are dying from malaria. Malaria can be cured with prescription drugs. Therefore, if the people dying had access to the drugs, they would be cured. What is killing them, arguably, is not malaria. It is their lack of access to prescription drugs.

Indeed, if we look at the top causes of death worldwide, it becomes clear what a role income plays, so much so that the list needs to be divided between high, middle and low income countries. http://www.who.int/mediacentre/factsheets/fs310/en/index.html
That is why we see malaria at 4.4 percent of deaths in low income countries, and not a factor at all in middle or higher income countries.

Without even looking at the reasons why people die of malaria specifically, it becomes evident that most of the causes of death are what might be called ‘lifestyle’ causes, not diseases. At 4.4 percent for only one group of nations, malaria is in fact a pretty minor cause of death. It is greatly overshadowed by things like heart failure, lung infection and obstruction, cancer and stroke.

These are medical conditions but they have to a large degree social causes - things like diet and nutrition, exercise, smoking and pollution, stress and deprivation. These things in turn are almost totally related to issues of poverty and injustice.

From where I sit, the single-minded focus on ‘disease’ is nothing more than a distraction from the real problems facing people in society. That is not to say that I think research into the cause, treatment and cure of disease is trivial. It is not; it is important, and we should do it.

But it is not so important that it dwarfs all else, and it is certainly nowhere nearly as important as research into justice, equity, and poverty. One major area of this research is research into discrimination. And one well-known type of discrimination is that based on height.

Now I am not saying that some paper on a height-discrimination study will cure poverty on earth. But neither will malaria be cured by the typical paper on that disease. Each type of research project plays the same role: it contributes to our understanding of a wider field. Sometimes there are major breakthroughs that save a lot of lives - but these breakthroughs are rare - and they don’t exist only in medicine.

And that returns me to the reason why I took the tone I did. Because none of what I have just outlined is surprising or even controversial. Most people know this, and when they look on it and reflect, they see that it is true. That’s why society as a whole supports research into many disciplines, most of which have nothing to do with medicine.

There’s a whole class of research, that might be called ‘poverty studies’ or ‘equity studies’ or ‘social justice research’ or the like, that examines this sort of question. It is work that is important. And yet it is work that is often dismissed as “trivial” not because it is genuinely trivial, but because it often has political implications that would impact the power and privilege of the wealthy.

Thus, trivializing research that isn’t part of the ‘hard sciences’ becomes part and parcel of a wider political strategy, one that is intended to perpetuate wealth and privilege, even if this does result in continued poverty, suffering and death.

Now if this wasn’t your intent when you labelled one such study “trivial”, I apologize. And I will say only that, even if this wasn’t your intent, this is the effect.

I think I’ll stop addressing this here and let us both get on to other issues. I will be happy to read your response and let it be. You should, after all, have the last word on your own blog.

And the final reply:

Stephen,

I’d like to be very clear - I discussed only the researcher’s work. That is sticking to the merits of the argument.

Though I appreciate your willingness to let me have the last word, I expressed what I wanted to in the articles and comments. There’s just no need for more, it’s all there.

Tuesday, September 11, 2007

Figuring It Out

Responding to Vicki A. Davis, who writes:
Self education doesn't work. We would never leave kids on their own to "figure out" math or literature but we know that in order to speed their learning, we should educate them on the principals that work. Likewise, leaving kids to "figure out" effective digital citizenship is equally preposterous.
This post views 'self education' through a polarizing lens, one that depicts the choices as being something like 'being taught' and 'leaving kids to figure out things for themselves'. The reality is nothing like that.

I don't think that anyone, anywhere, is writing about casting kids - or adults, for that matter - adrift in a sea of information with no anchor or support. A kid can be 'not taught' and yet still not be left to 'figure out' things on their own.

This becomes evident when we look at specific examples. Take 9-11 for instance. Yes, we could teach every kid that 9-11 was caused by terrorists in airplanes, not bombs. But there are very many similar conspiracy theories. Should we also teach kids that there are no aliens in Area 51?

At some point, they have to make the call for themselves. At some point - very quickly - they have to, as you say, 'figure it out' - because there will be no way we can teach them all the facts about what they may or may not encounter online.

What we want them to do, of course, is to be able to make those calls. The problem with the person who comes away thinking the WTC was bombed isn't that he wasn't taught the right things about 9-11, but that he doesn't have the tools to 'figure it out'.

Indeed, a person who reads a website and concludes that it's true, no matter what it says, is dangerously illiterate. He has been raised by people who believe that he should be 'taught' - and should not 'figure it out' for himself.

The fact is, when a student encounters a website like that, we want him or her to *get* support - we want him or her to verify the facts on other websites, or to consult with peers or elders for verification.

We want the student to know that, even when he is not being 'taught', he has not been cast adrift - he is not alone, he is not without support. Indeed, the very essence of media literacy is understanding that there is a supportive net of information surrounding you, even when you're not in the classroom, and that (therefore) you should never rely solely on those who purport to teach you.

This is not merely a matter of semantics.

The alternative to 'being taught' that I am sketching here is misrepresented pretty much every day by people, usually teachers, who assume that students are simply incapable of learning on their own.

It is misrepresented in exactly this way, by suggesting that the only support and guidance a student can get is from a teacher.

This is not merely false, it is also dangerous, because it leads to a sense of dependency - it leads exactly to the sort of behaviour depicted in the original post.

The best thing a teacher can ever do for his or her student is to achieve that day when the student can say to the teacher, "I don't need you."

That's why, for better or worse, we release them after 12 years or so.

Monday, September 10, 2007

Setting Up Sunbird

Note: This looks like a lot, but it really isn't. Take a few minutes to skim the document, and if it looks like it's something you want to do, follow the instructions step-by-step. I've tried to keep them precise and detailed.

Sunbird is the Mozilla calendar application. It is intended to run alongside the Firefox web browser and the Thunderbird email client as your open source alternative to commercial software (and in particular, Outlook Exchange).

Or, if you prefer, you can install Lightning. This is exactly the same as Sunbird, except that it runs as an extension inside the Thunderbird email client. Sunbird and Lightning both do the same thing. You do not need to install them both. You can install one, the other, or both. It's up to you (thanks to Ryan Nicolson for suggesting this clarification).

As a stand-alone calendar Sunbird works quite well and is very intuitive. Find a date, click on the date, add an event. That's about all there is to it.

But users expect more of online calendars. In particular, they expect to be able to share their calendars with other users. They expect to be able to merge calendars, viewing several calendars in a single window. And they expect to be able to add and update events in their calendar using email notifications.

All of this works seamlessly, if annoyingly, in Outlook. But for the open source user, it has all been a confusing mess.

The reason for this is that there's no easy way to publish your calendar so that others can use it. Just as Outlook has the Exchange server, an open source calendar needs an online server of some sort. The designers of Sunbird chose Webdav. Webdav, which stands for "Web-based Distributed Authoring and Versioning", is an IETF standard protocol. But most people don't have access to a Webdav server, which makes it a really bad choice.
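
(As an aside, and not a defence of the choice: publishing a calendar to a Webdav server is, underneath, just an authenticated HTTP PUT of the .ics file. A rough Python sketch, using the requests package, with a purely hypothetical server address and credentials:)

    # Rough sketch of what 'publish to Webdav' amounts to: an authenticated
    # HTTP PUT of the iCalendar file. The URL and credentials are placeholders.
    import requests

    with open("mycalendar.ics", "rb") as f:
        response = requests.put(
            "https://webdav.example.com/calendars/mycalendar.ics",  # hypothetical
            data=f.read(),
            auth=("username", "password"),
            headers={"Content-Type": "text/calendar"},
        )

    # 201 (Created) or 204 (No Content) means the upload succeeded.
    print(response.status_code)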

So here's what I did: I have set up Sunbird to take care of my calendaring needs and am using my Google account instead of a Webdav server. It was actually a very simple process, in the end, but the online documentation is awful. So I created my own. Here's how to set up Sunbird using Google Calendar.

  1. Create a Google Calendar

    Go to Google Calendar and create an account or login with your existing Google account. To create a Google Calendar, click on 'Add' in the left-hand margin. Then fill out the form.


  2. Get the Google Calendar Address

    In Google Calendar, click on the 'manage calendars' link at the bottom of the left-hand column. Click on the name of the calendar. This will take you to an editing screen. At the bottom of the editing screen you will find buttons labelled 'HTML', 'XML' and 'iCal'. Pick either of the 'XML' addresses (it doesn't matter which one) and save the URL that pops up. This is your 'calendar address'.

  3. Install Sunbird

    Download the appropriate install file from the download page.

    Mac OS X
    Download the Sunbird .dmg file. Double click the Sunbird Disk Image to open it in Finder. Then drag the Sunbird icon into the 'Applications' directory.

    Linux
    Download the Sunbird .tar.gz file. Move it to the directory where you want Sunbird installed. Then extract the tarball. For example:
    tar -xzvf sunbird-0.5.en-US.linux-i686.tar.gz
    This will create a subdirectory called 'Sunbird'. To run Sunbird, run the executable file called 'Sunbird' in the Sunbird directory.

    Windows
    Download the self-extracting .exe file. Double-click on the file to install.


  4. Install the Google Calendar Sunbird Extension

    This extension is one of the many add-ons you can install in your calendar to extend its functionality. You can find the Google Calendar extension in the Sunbird Add-On area.

    Click on the Install Now button to get to the license and download page. Do not click on the 'Accept and Install' button. This will try to install it in Firefox (what a horrible usability error, eh?)

    Instead, right-click on the button and save the .xpi file to a directory you'll remember and be able to find later.

    Once the file has downloaded, open up Sunbird. Click 'Tools' and then 'Add-ons'. This opens a dialog box. Click 'Install' in the lower left-hand corner, and then find the .xpi file in the directory where you saved it. Let it load, then click 'Install Now' in the installation dialog. Finally, restart Sunbird.

  5. Create the Google Calendar in Sunbird

    To associate Sunbird with your Google calendar, you will create a new calendar in Sunbird and then associate it with the calendar you created in Google.


    1. In Sunbird, click on 'File' and then 'New Calendar...'. Select 'On the Network' and click 'Next'.

    2. In the dialog that appears, select 'Google calendar'. In the 'Location' box, enter the calendar address you saved from the XML button when you created your Google calendar. Click 'Next'.

    3. Give your calendar a name and select any colour. Click 'Next'.

    4. It will then prompt you for a login. Enter your Google userid and password. You can also check the 'remember these values' box. Click 'OK', and then click 'Finish'.


    5. Your Sunbird calendar is now associated with your Google calendar. That means that when you update your calendar in Sunbird, it will automatically update in Google. And when you update it in Google, it will automatically update in Sunbird.

      You can associate your Sunbird calendar with any number of Google calendars in this way.

  6. Install the Lightning Thunderbird Extension

      Thunderbird, as mentioned above, is the Mozilla email client. What Lightning does is synch Thunderbird and Sunbird. It's a lot like embedding your Sunbird calendar right inside Thunderbird, the way Outlook does it.

      These instructions assume that you have Thunderbird installed and are using it for your email already. Note that Lightning requires Thunderbird version 1.5 or 2.0. You may need to get Thunderbird and install it.


    Once Thunderbird is installed you will want to install an Extension called Lightning. You can't actually find it from the Thunderbird page on the Mozilla website; you have to go directly to it.

    When you go to the Lightning download page, don't get drawn into all the jargon-filled geeky instructions. Simply go directly to the appropriate download page (the links at the upper right):

    As before, do not click on the 'Install Now' button. Instead, right-click on the button and save the .xpi file into a directory you'll remember.

    In Thunderbird, click on 'Tools' and then 'Add-ons'. Then click on the 'Install' button in the lower left-hand corner and select the Lightning .xpi file you just saved. Allow Thunderbird to install the file and restart.

  7. Install the Lightning Google Calendar Extension

    When Thunderbird restarts you'll notice a calendar and some tabs occupying the bottom of the left-hand column. There will also be a new 'calendar' option in the toolbar. But it's not set yet; you must still load the Google extension for Lightning.

    In Thunderbird, just as before, click on 'Tools' and then 'Add-ons'. Then click on the 'Install' button in the lower left-hand corner. This time, select the Google Calendar .xpi file - yes, the very same one you downloaded for Sunbird. The same .xpi works for both Sunbird and Lightning. Allow Thunderbird to install the file and restart.

  8. Create the Google Calendar in Thunderbird and Lightning

    To associate Thunderbird and Lightning with your Google calendar, you will create a new calendar in Thunderbird and then associate it with the calendar you created in Google.


    1. In Thunderbird, click on 'Calendars' in the lower left-hand box, and then 'New...'. Select 'On the Network' and click 'Next'.

    2. In the dialog that appears, select 'Google calendar'. In the 'Location' box, enter the calendar address you saved from the XML button when you created your Google calendar. Click 'Next'.

    3. Give your calendar a name and select any colour. Click 'Next'.

    4. It will then prompt you for a login. Enter your Google userid and password. You can also check the 'remember these values' box. Click 'OK', and then click 'Finish'.


    Your Thunderbird and Lightning calendar is now associated with your Google calendar. You can see it in the box to the lower left. You can uncheck (and delete, if you want) the default 'Home' calendar, and check the newly created Google calendar to view it.


  9. Embed Google Calendar on Web Pages

    One thing I wanted to do with my calendar system was to publish it on a web page. That's not possible directly with Sunbird (nor can I upload the data to my website) so I am relying on Google for this.

    There's no easy way to find it from Google Calendar itself, but you can access information on how to embed Google calendar on web pages from within the Google Calendar help system. (If you must find it within the Calendar, click 'Manage Calendars', click on your calendar from the list, then click on the 'HTML' box, right beside the 'XML' box you have been using. Under the URL in the box that pops up there is a link to the configuration tool.)

    I created two HTML embeds, one a small one for my home page (see the lower right), and another dedicated calendar page for the full version (I had to create a link to a full version because while there is a way to subscribe to my calendar, there's no link to simply view the calendar).


All of this, um, mostly works. Some things are still buggy (let's remember that this is all still beta software). On the Mac, it seems to have blanked out the little download screens, so you can't see the progress of software downloads. The notifications worked fine on the Mac, but didn't seem to add to the calendar from Windows. I would really have liked to be able to drag emails into the calendar, but that's not available. Also, Lightning changed the spam icon in Windows, which is really pointless and annoying.

But what's really, really annoying is having to go to all of this trouble in the first place. Why can't Sunbird simply upload a file to an FTP site? Yes, there is a site out there that says you can. But it simply doesn't work.

Similarly, why can't there be at least one Webdav site out there for people to use (assuming we must use Webdav, which was, IMHO, a really misguided decision)? Well, there are commercial services that might work. There is a free service, iCal Exchange. I spent a lot of time on this, but in the end, it simply didn't work.

Right now, if you want to incorporate your calendar data with external applications, your best (and almost only) bet is to do it through Google Calendar. So long as Sunbird's publishing features remain so impaired, this won't change.

So you might be wondering - why go to all this trouble?

Quite simply - I can now access and update my calendar on Mac, Linux and Windows machines, either using the Sunbird application, through Thunderbird, or online through the browser. I could also do it using my mobile phone, if I had one (NRC won't pay for a mobile phone to test stuff like this with). You simply can't do that with any other calendar program (and especially not with Outlook Exchange, which locks you into the Outlook client and which doesn't work on Linux at all without CrossOver).

And what this does is set me up for the possibility of better calendar syndication and integration in the future. The Google calendar site enables exports in XML (actually a version of RSS) and iCal, which means I can syndicate. Sunbird can export files in iCal (it can't send them anywhere useful yet, but I figure that's a matter of time).
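
(For the curious, here is a sketch - mine, not part of the setup above - of what that iCal data looks like when you build it programmatically, using the third-party Python icalendar package:)

    # Sketch: build a minimal iCalendar (.ics) file, the format Sunbird exports
    # and Google Calendar imports. Requires 'pip install icalendar'.
    from datetime import datetime
    from icalendar import Calendar, Event

    cal = Calendar()
    cal.add("prodid", "-//Example//Sunbird notes//EN")
    cal.add("version", "2.0")

    event = Event()
    event.add("summary", "Staff meeting")
    event.add("dtstart", datetime(2007, 9, 17, 14, 0, 0))
    event.add("dtend", datetime(2007, 9, 17, 15, 0, 0))
    cal.add_component(event)

    with open("mycalendar.ics", "wb") as f:
        f.write(cal.to_ical())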

And it lets me syndicate calendars into my online calendar, either directly (via Google calendar) or indirectly (via Sunbird). This sort of functionality will be needed to create any sort of calendaring function for a future Personal Learning Environment, so becoming familiar with the concept and the potential is a good idea.

Friday, September 07, 2007

The Open Journal Format

For some time I have been thinking of launching a journal. Because I think that when people talk about 'peer reviewed publications' they have a point, and that point is, that a piece of writing is not merely popular, but also, respected and recognized by a particular academic community.

We need such mechanisms because there is too much to read, too much even in narrowly defined disciplines. And there is no particular mechanism for identifying that which is important within a particular discipline. The popularity-based systems, like Slashdot and Digg, cater to certain communities, sure, but tend, eventually, to what we might call a scholarship of the middle - no particular discipline, no particular level of quality, no particular virtue.

That is not to discount the systems whereby content is selected and reified by the masses. I am a regular reader of such lists and they are a constant source of amazement and amusement. High quality content does get selected by the crowd, but not all of it, and not reliably within a certain discipline.

Historically, as I mentioned, content selection for academic materials has been by means of 'peer review'. The process varies across journals, but in its most typical instantiation, proceeds as follows: a writer submits a manuscript to an editor, who reads it. The editor, at his or her discretion, sends the manuscript to a small committee of reviewers. The reviewers rate the submission for appropriateness for publication. They will often recommend changes and improvements. A final version is drafted, and it is typeset and published.

There are, in my view, two major weaknesses of this approach:

First, it proceeds in secret. The manuscript is not viewed by the reading public until after it has been selected. That it is being considered for publication is not known. Thus, if a manuscript is submitted and rejected, it may never see the light of day. This is wasteful. And it results in the very real possibility that high-quality works might never be seen because they did not pass the scrutiny of a few people.

Second, the decision is made by only a small number of people. Very few people actually review manuscripts; typically three or four. If these people are not attentive to the material they are reviewing, they may accept substandard material, or reject high-quality material, simply because they were not paying attention. Having material reviewed by a larger number of people reduces this likelihood. It creates the possibility for buzz around a selection, a conversation that will result in its being not only improved but also brought to the attention of reviewers.

So how do we fix these things? The approach is to decouple access from review. Specifically, authors' manuscripts ought to be easily and widely accessible prior to publication. In this way they can be read by a large number of people. This does not mean that all people read all articles; there is no need for that. But it does mean that the typical article would be read by well more than three people.

Once access has been enabled, then we need to develop a review process. The problem (in my view) with sites like Slashdot and Digg is that a resource rockets from access to acceptability with virtually no restraints. Insofar as there is a community, it is like they are a pack, jumping from one popular thing to the next, with no sense of direction or consistency.

Also, there is no sense of 'peers' in this process. There's no sense that the submissions have been evaluated by people who have demonstrated their commitment to a certain subject area or background or expertise in the field. It's one thing to say "The Golden Age of Paleontology" was popular; it is quite another to say that it was popular among paleontologists.

But what constitutes 'being a paleontologist'? Traditionally, we have required some sort of certification. A person needs to earn a PhD in paleontology. Then they need to be selected by an editor of a journal to sit on a review board. This qualifies them to review publications in paleontology.

There is merit to this approach. We have it on good grounds that the person is very likely an expert in the field. They have passed a rigorous and formal course of study. They have been subjected to examination, and have the academic credentials to prove it. Very often, they will also hold a position at a university or a research institute. By the time they are selected to become a member of a review committee (and eventually, an editorial board) they will have successfully published a number of articles, establishing their importance in the field.

This approach has served us well historically; however, there are signs of strain. It takes a long time to earn a PhD, under fairly restrictive circumstances. New disciplines and technologies are being developed so rapidly that by the time a person becomes an expert in one thing, it has been replaced by another, which didn't even exist at the time she began her studies. The membership of peer review committees adds to this ossification; their expertise may be in disciplines that have long since come and gone.

Moreover, even if the academic route is a reliable means of establishing expertise, it is no longer the only one. We are seeing with increasing frequency people establish expertise outside their domains or disciplines. We are seeing people, through a process of self-education and public practice, become well established and well respected even in academic fields.

In some cases this is by necessity; it costs a great deal to earn a PhD, much more than the cost of a computer and internet access, and so the informal route is the only means available. This is the case for the majority of people in the world.

And in other cases it is by choice, as no PhD programs exist in a new area of study or invention. This was the case, for example, in internet technology. It had to be built, first, before people could become experts in it, while the people who built it became experts by building it.

So we need to allow for the likelihood that there is a great deal of expertise in the world that exists outside the domains of the traditional academic community. That the path of obtaining a PhD is one way to establish one's expertise, but not the only way. And that there will exist people who can quite genuinely be called experts in practitioner communities, self-selected or intentional communities, communities of practice, and elsewhere.

It is with these thoughts in mind that I have, over time, been thinking about the appropriate sorts of mechanisms for the management of academic journals. And so it seems a good time now to suggest how I think what I'll call 'the open journal format' should proceed.

I call the system 'open journals' not to confuse them with 'open access journals' but to stress that much the same principles are being applied. An open journal will be at heart an open access journal, but in addition, the process of selecting and reviewing articles for submission will also be open.

Here, then, is the process:

  1. People write articles and post them online. They may be blog posts. They may be contributions to discussion lists. They may be comments or web pages. It doesn't matter how the content has been published online, simply that it be published online, be licensed in such a way that would allow publication in the journal, and be accessible to whoever wants to read it.

    Typically a journal would have a 'subscription list' consisting of a set of RSS feeds recommended by its members. The list, available as an OPML file, would allow readers to subscribe to RSS feeds from the larger writing community that produces content relevant to journal readers. The list could be subscribed to as a single feed. An example of this is the Edu_RSS feed. Readers can consult a single source, selecting from the hundreds of posts created every month, to find the few that ought to be included in the journal.


  2. A journal's 'readers' nominate an article for submission. To 'nominate' an article is to bring it to the attention of the editor and other readers. People may become journal 'readers' by registering at the journal (some journals may, at their discretion, allow anonymous 'readers').

    There are various ways to manage nomination. It is important to keep it fairly simple and, at the same time, to allow for buzz and community to develop around an article and around a readership in general. A typical process would resemble a 'del.icio.us' or 'Digg' style mechanism, where readers could '+1' an article and discuss it. Another mechanism might be to scan readers' blogs and count the number of readers who link to the article.


  3. The journal's 'members' select from the nominated articles those articles that they think should be published.

    What is a 'member' of the journal? Simply - the founder of the journal plus any person who has had an article previously published in the journal.

    To be a member of a journal is not only to have had something published in the journal, but also to have been recognized as a 'peer' by the other authors who have had something published.

    The selection process is therefore two-fold: members are selecting not only a submission, but also the person. This means that, to a degree, the candidate's previous body of work will be assessed as well as the actual submission. The role is not one of 'gatekeeping' but of recognition.

    Members' votes are public, and they would typically comment on the accepted submissions, perhaps suggesting improvements. The number of articles published each month would vary, depending on the members' selections, but would typically be small. The top five vote-getters, say. (A rough sketch of how the nominations and votes might be tallied follows this list.)


  4. The submission is prepared for publication. It is submitted into the journal editing system, spelling and grammar are checked, links to references confirmed, and the like. Galleys are created for the print edition (which will be published at a print-on-demand service such as Lulu). The author, in consultation with the editor and members, may make changes to the original submission at this point. The issue appears as an open access publication on the web.
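
To fix ideas, here is a back-of-the-envelope sketch in Python of the machinery the process above implies: read the journal's OPML subscription list to find the reader feeds, tally reader nominations ('+1's), then count the members' public votes and take the top five. The file name, nomination threshold, and data structures are all hypothetical - this shows the shape of the process, not a working journal system.

```python
# Hypothetical sketch of the open journal selection machinery.
# File names, thresholds, and data are made up for illustration.

import xml.etree.ElementTree as ET
from collections import Counter

def reader_feeds(opml_path):
    """Extract the feed URLs from the journal's OPML subscription list."""
    tree = ET.parse(opml_path)
    return [node.get("xmlUrl")
            for node in tree.iter("outline")
            if node.get("xmlUrl")]

def nominated_articles(nominations, threshold=3):
    """nominations: list of (reader, article_url) pairs.
    An article counts as 'nominated' once enough distinct readers flag it."""
    counts = Counter(url for _, url in nominations)
    return [url for url, n in counts.items() if n >= threshold]

def select_issue(member_votes, nominated, issue_size=5):
    """member_votes: list of (member, article_url) pairs, public and attributable.
    Returns the top vote-getters among the nominated articles."""
    counts = Counter(url for _, url in member_votes if url in nominated)
    return [url for url, _ in counts.most_common(issue_size)]

# Example with made-up data:
# feeds = reader_feeds("journal-subscriptions.opml")
# shortlist = nominated_articles([("alice", "http://example.com/post-1"),
#                                 ("bob", "http://example.com/post-1"),
#                                 ("carol", "http://example.com/post-1")])
# issue = select_issue([("member-1", "http://example.com/post-1")], shortlist)
```

The point of the sketch is only that everything here is counting and publishing, not gatekeeping - there is no step where a manuscript sits hidden on an editor's desk.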

The idea of such a system is that it contains checks that balance the journal between popularity and rigor.

It is possible for a journal to become too much of a clique, for the members to select only each other's papers. If so, then the people who are being left out can found their own journal. Because nominations are public, it will be readily evident which journals are difficult to get into because of quality, and which are difficult to get into because of exclusivity.

Will this work? I think it will. It might not work for any particular journal - some journals may simply not attract readership because the writers admitted were not of a high quality, or because the members make poor choices, or because the subject area is simply not useful or appropriate. It will take a certain amount of momentum to launch a journal, a momentum that can be gained only by having qualified people and quality ideas to begin with.

I realize that similar projects have been tried by others. I am especially aware of David Wiley's attempt - and this is very similar to that. With one exception - that it draws from content that people have already posted. There is no particular danger of 'rejection', no chance that your submission won't see the light of day, because you don't even submit it. The process of recognition and nomination is undertaken entirely by your readers.

I am seriously considering this. I invite your comments.

Thursday, September 06, 2007

Ten Futures

Drawing on Richard MacManus's 10 Future Web Trends, this is a bit linear, but has the virtue of identifying future trends, not things that are around today.

1. The Pragmatic Web

Forget about the Semantic Web. Whether or not it ever gets built, you can be sure that we will be complaining about it. Because while the Semantic Web gives us meaning, it doesn't give us context. It will give us what we can get from an encyclopedia, but not what we can get from phoning up our best buddy.

The pragmatic web, by contrast, is all about context. Your tools know who you are, what you're doing, who you've been talking to, what you know, where you want to go, where you are now, and what the weather is like outside. You don't query them; they carry on an ongoing conversation with you. The pragmatic web is chock-full of information, but none of it is off-topic and none of it is beyond your understanding (and if you need to know more, it will teach you). The pragmatic web isn't just a web you access, read from and write to; it's a web that you use every day.

2. Global Intelligence

While from time to time our computers are going to appear pretty smart, some of them even smarter than we are, they will be dwarfed by the emerging global intelligence or world mind. This won't merely be the 'invisible hand' of the marketplace; this will be the whole body. And it won't be based on the mere one-dimensional system of valuing things in terms of capital; it will be composed of multi-dimensional interactions among a wide variety of media, including all of what we call 'media' along with money, votes, population movements (aka traffic), utilities (power, water, gas, oil), resources (minerals, food) and more.

The global mind will to a large degree be inscrutable. We won't know what it is trying to do, what it wants, what it thinks are 'good' and 'bad', or whether it is even sane and balanced. That won't stop a slew of populists from claiming to 'know' where the global mind is headed (a la evangelists or Marxists) - though of course, except at a very macro level, the destiny of an individual is independent of the destiny of the global mind. The global mind is the sort of thing that raises questions about the meaning of life, the value of ethics, and the nature of knowledge. Our answers to these questions over the next few decades - even as global climate change and wars and natural disasters ravage our populations - will shape the course of society through the next centuries.

3. Extended Reality

We think of 'reality' as being constituted of the physical world and then of 'virtual reality' as being the digital world, or as we sometimes say, 'virtual worlds'. The two worlds are very different in that, well, one world is real and the other is not.

'Extended reality' is a digital version of the real world such that the digital version is as real as the real version. What that means, pragmatically speaking, is that if it hurts in the extended world, it hurts. We will have full sensory coupling with the virtual world, making the virtual world every bit as 'real' to us as the real world.

This reality will not just be a simulation of 'reality'. Rather, what will emerge as the combination of the two is a kind of 'hyper-reality', where objects exist both in the physical world and the digital world (think 'Spinoza' rather than 'Descartes'). The physical world and the virtual world will act as one; eat in the 'virtual' world and your body (such as it is) in the 'real' world will be nourished.

How could this ever happen? Well, take something like, say, 'money'. Is it real, or is it virtual? If you spend money, do you give the other person something real or something virtual? 'Money' is a perfect example of something that can exist in both realms. That's what makes it such a powerful force in today's society! But if money - which, when you think about it, was once tangible, solid gold, and therefore the last thing you would think could become virtual - can become virtual, then what else? Food, say?

We will live in an age of biochemical manipulation. Yesterday, we could create synthetic virtual worlds biochemically with drugs - 'take a trip and never leave the farm'. With sufficient computational power, we can create these worlds directly through interaction with computer systems. But we can also - by manipulation of matter electronically - create the 'fuel' that makes continued presence of the body possible. Doing something in the 'virtual world' has real-time, direct biochemical consequences, some of which constitute energy inputs, which are converted to 'food' - or at least, the biochemical consequences of food.

4. Mobility

We will again in the future become a species of nomads, moving in tribes and herds through society, grazing on energy and information inputs as they become available.

This will happen as a result of a convergence of two factors. First, we will no longer be in want. At a certain point in time, sooner than we think, the technologies we have put in place to ensure the continued uneven distribution of resources (which we then use to extort labour out of deprived populations) will become moot. It will not be possible to maintain wealth technologically; there will be no 'means of production' unique to a certain privileged class of people. Hence, we will not need to hoard food and other possessions; we can simply take what we need from the ambient environment.

Secondly, we will by then be in the habit of needing much less. Consumer goods - ubiquitous today - will become expensive and impractical in the future. Owning a library of books, for example, will be a "wealthy man's folly" - a lot like keeping a Spanish galleon in the back yard to support your own personal trade link to China. We will have few possessions, and those mostly as keepsakes or mementos. 'Rooted' people will be thought of in the future the way we think of 'nomadic' people today - unable, for some socio-cultural reason, to mesh with the rest of society.

5. The Human Grid

Human minds will continue to be efficient and effective processing systems, able to assimilate megabytes of information in seconds, intuitively recognize patterns, make decisions, and communicate ideas. Consequently, human contributions to the 'economy' (i.e., the system of production of material goods for the sustainment of life) will consist primarily of providing mental 'inputs' to the production engines that actually do the work (much the way we 'drive' tractors today, but at a much more complex level).

Consequently, organizations will be able to derive value by enabling human minds to cooperate in the coordination or operation of elements of production. From our thoughts and opinions on everything from celebrities to the weather to tomorrow's sports scores, computational systems will be able to derive the algorithms that will process iron ore, grow grain crops, and harvest energy from the wind and the Sun. It will be understood by these programmers that pop culture is a metaphor for the instruments of production, and that therefore human cognitive capacity can be mined directly by tracking thoughts and opinions about popular phenomena.

The collection of these thoughts and opinions from a network of people, all interacting with each other in an environment that includes entertainment, sports and other pastimes that engage the mind, will be called the 'human grid'.

6. Smart Objects

This is discussed a bit in Bruce Sterling's Distraction, where he describes a hotel that instructs its owners on how it should be built. Objects - even everyday objects - will have a built-in capacity for at least a primitive level of intelligence.

More importantly, these objects will be connected with other objects. We don't expect a lot of intelligence from strawberry jam, for example, but we expect it to at least know about what types of bread and peanut butter there are in the house (i.e., your current mobile dwelling), to be able to monitor its compliance with your physical systems, to be able to suggest itself as a solution to current needs, to be able to offer relevant instruction, or to at least provide some input to the ambient room's overall conversation with you.

Your use of a product - whether it be strawberry jam, a fishing rod, or an auto-gyro - will have an impact on a whole network of other human and non-human systems. Taking the vehicle out for a spin, for example, will prompt a host of services to prepare themselves for your eventual arrival (and, indeed, you might not be going back). When you land - wherever that happens to be - your personal needs will already be in place (including any artifacts that you may have left behind). Consume a bit of strawberry jam and the global production system will conspire to manufacture that much more (assuming, of course, that it believes you will live to consume it and will have the inclination to do so).

7. Holoselves

No person can be in two places at once, of course, but one's avatar can travel to one place while you travel to another, so when it comes time for that meeting in Colorado, you just shift your sensory input matrix to the holoself sitting down at the desk in Denver. Time for a lunch-time walk, so you transfer to the next holoself, which has been waiting patiently (like a book on the shelf) for you to pick it up in the Amazon eco-reserve. In the evening (after a holoself meeting in Zurich) you settle in with your 'real' self in Cairo for a nice evening meal and a show at the Pyramids.

Holoselves are, for all intents and purposes, artificial humans - you'd be hard pressed to tell the difference, and when they're legally occupied by a human, they have all the rights and responsibilities of a human. People will naturally prefer to own their own dedicated holoselves, but it will be possible to share holoselves (the physical structure adapts to suit the host intelligence). Actual cognition (sensation, reflection, and the like) takes place partially in the 'real' brain and partially in the 'holo' brain (after a certain point the distinction between 'real' and 'holo' brains becomes more philosophical than practical - asking "Am I the same person in Cairo as I am in Denver?" is pretty much the same as asking "Am I the same person tomorrow as I am today?").

The neat thing about holoselves is that they need not be human; they need just enough resident intelligence to input and process (coherently) perceptions and to communicate with other (holo and non-holo) instances of the controlling intelligences. This will lead to numerous holo-fads, like holo-birds, holo-fish, and more.

8. Living Art

When sentential utterances (words and sentences) are abandoned as a means of communication, it will become more natural to convey thoughts and information in multi-modal, multi-sensory artifacts. We are beginning to see these even today with things like lolcats and YouTube videos. As our powers of expression (and the tools that help us) become more sophisticated, we will create complex, multi-faceted forms of expression, the most advanced of which will (almost?) qualify as 'life' and will most certainly qualify as 'art'.

Consider, just to gain an idea of this, how one wizard might express a thought to another in Harry Potter. Certainly the wizard would not write a note. Rather, the wizard would conjure an object of some sort - like a message owl, say. But the artifact will not 'carry' the message; the artifact will embody the message. On receipt of the 'message owl' the person would not merely read or be told, but rather would interact with the owl - have a conversation with it - such that the subtleties and nuances of the message are expressed in a way that the recipient can understand them.

We think of communications today as a means of carrying 'information'. This function will not cease in the future - we'll still need to say "My name is Johnny" or "I have an apple" to people in the future. But we'll say it in such a way that everything the recipient could want to know - the type of apple, the genetic history of the apple, the provenance of this particular apple, my preferences and opinions, stated and implied, about apples, the current market value of apples - will also be contained in the message, not necessarily (and not typically) in sentences, but through a range of conventional multimedia iconology (kind of like giving somebody a white rose to say "let's be friends" and a red rose to say "let's be more").

We will, of course, also have 'living graffiti' - bits of badly created living art that clutter city streets and cling to walls - they'll have to be flushed with high-powered steam hoses into the organic recycling facility. And we'll still have spam - but at least when the message is delivered, you'll be able to eat it.

9. Global (Non-)Government

This is kind of an obvious one, but it should be clear that we will not have 'nations' in any geography-based sense of the term in the future.

This will become necessary due to the clamour of refugees trying to get to the highly developed regions of central Asia and Africa from their economically backward homes in North America and Europe. Many of these will be brought over by formerly American and European corporations, which will relocate to the centre of their major markets in India, the Congo and China.

In any case, the concept of government will have been radically redefined by then. Government will no longer be of geographical regions but rather of sectors. We see a good example of this in its infancy in standards development. Standards are not managed by national governments; they are managed by councils with (interested) representatives from around the world.

More and more, sector councils will govern affairs. Fisheries, for example, having recovered from the panic of the early 2000s, will have been removed forever from national control. Energy production on the global grid will have followed. Many other industries - aviation, telecommunication, food production, finance - are already being governed in this way.

The big change will happen during the mass-democratization events that (I expect) will take place in the middle of the 21st century. The sector councils will be badly managed by the corporate oligarchy that created them - they will act against the best interests of people (though it will take a disaster greater than Bhopal to demonstrate that to people) and will serve to preserve the privilege and wealth of a few. This, combined with the world wide 'free movement' movement - arguing that people, as well as capital and trade goods, should be able to move freely - will cause a crisis and an economic collapse. Governments will move militarily against corporations, which will agree to a power-sharing structure.

For the most part, after that, government will disappear from the lives of people. There won't be elections or anything like that; rather, people will participate directly in the management of sectors in which they are involved. Because people will have (what we today call) guaranteed incomes (but which amount to free necessities of life), it will not be possible to coerce people into managerial hierarchies, and so corporate governance will be by networked decisions - each person will contribute creatively, and 'pseudo-entities' composed of temporary collections of simultaneous inputs will achieve corporate outputs. That's how the first mission to Mars will be managed.

10. Cyborgs

This is a pretty easy one. The only thing preventing us from merging humans and machines today is that we cannot yet build machines at the scale and complexity required for human-machine interaction. Human inputs operate at the microscopic level, and require complex interactions. Even something so clumsy as replacing an organ requires that we grow - rather than make (though there are some few exceptions, like the artificial heart) - the organ, and then deal with interactions we couldn't design for with anti-rejection drugs.

But it should be evident that with biocomputing and nanotechnology we will be able to build, say, neural nets that can be installed alongside our existing cerebellum and can take over functionality as the original equipment wears out.

Most likely, the initial successes of cyborg technology will be in artificial perception. Replacing eyes, ears and other sense organs will succeed because base mechanical devices will be able to interface (much like computer peripherals) with sensory input layers. Parts of the brain will also be created; we already have an artificial hippocampus.

There will, of course, be a large-scale industry in the psychology of cyborgs. Can a person be a ship and not become insane? How do we keep such a person occupied? Several of the technologies outlined above - like holoselves, for example - will be crucial. Metaphor will become reality - and it will become a major ethical issue - and a human right - to know one's actual situation.

Tuesday, September 04, 2007

Stager, Logo and Web 2.0

Gary Stager offers an impressive assessment of the use of Web 2.0 tools in learning by virtue of an extended comparison between those tools and Logo, the revolutionary educational programming environment developed by Seymour Papert and his colleagues in the 1960s.

His criticisms are directed mostly toward David Warlick and others who are advocating the 'revolutionary' use of Web 2.0 tools in schools. I think his criticisms are effective against the School 2.0 movement. And I think that it is because the School 2.0 movement has not embraced the lessons that should have been learned from decades of school reform.

And the main lesson is, I would say, school reform won't work. Schools were designed for a particular purpose, one that is almost diametrically at odds with what ought to be the practices and objectives of a contemporary education, an education suited not only to the information age but also to the objectives of personal freedom and empowerment.

Let me look at Stager's list, point by point, and outline how the discussion has shifted.

However, there are some primary differences between Logo (and its variants) and the panoply of Web 2.0 tools, including:
  • The Web 2.0 tools promoted by Warlick and Utecht were not created by educators or for children. Educators hope to find educational applications despite having almost no input into the development of future tools.
Logo and the rest were designed specifically for an educational environment. But it's not clear that this wasn't a mistake. While some continue to argue that tools and resources should be specifically 'educational', I would suggest that the newer approach (which we are calling 'learning networks' or 'connectivism') stresses continuity between the educational environment and the wider social environment.

This has two aspects. First, it means that the tools used by students are essentially the same as those used by practitioners, and that students can see and interact with practitioners. What is learned needs to be learned in a context, and not through mere instruction but through concrete modeling and demonstration. And students' practice and experimentation needs to be relevant. This does not totally preclude any sort of scaffolding or safety nets, but it does argue for something more open and more widely used.

Second, it means that learning needs to be better incorporated into the tools and environments employed by practitioners. The best example of this can be seen in game design, where gameplay and instruction are seamlessly intertwined. Again, this argues against technologies that can be employed only in learning, and for technologies that integrate well with each other.

  • The Web 2.0 tools come out of corporate, not academic, cultures with very different motives.
This is to some degree true but to a large degree false.

It is true, of course, to the extent that many Web 2.0 initiatives are corporate initiatives, coming from companies like Skype, Google, and others. But a lot of it is coming from academia as well.

But more to the point, the division between the 'academic culture' and the 'corporate culture' misses the point. In fact, traditional academia and business share a great deal in common - structures, authorities, leaders, standards, scale, mass production, uniformity, and more. The 'school' is the perfect blend of academic and corporate culture, and as such, is everything you would expect; compartmentalized, rigid, authoritarian.

What Web 2.0 represents - or, more accurately, what the larger movement of which Web 2.0 is a part represents - is the rejection of that, on both the corporate and the academic levels. 'Decentralizing decision-making' has the same essential logical structure as 'personalizing learning'. New types of collaboration (not 'teams') in the corporate world resemble new types of collaboration (not 'classes') in academia.

Yes, the edges are blurred. Yes, traditional corporations with vested hierarchies and old-school models of economics try to play in the Web 2.0 world. One thinks of NBC in iTunes. And just so, some people in education who are still invested in the teacher-and-school model of learning try to present themselves as Web 2.0. Things aren't as neat as we would like. But proponents of traditionalism - cast in a guise like 'School 2.0' - should not be mistaken for what they are not.
  • There is no educational philosophy inspiring the development of the Web 2.0 tools or their use.
Yes there is. Moodle and Elgg, for example, adopt explicitly Constructivist theories to inform their design and development. Others in development (such as the various personal learning environment prototypes and applications) follow the Connectivist approach, as outlined by George Siemens. I have tried very hard over the years to outline my own theories of knowledge, learning and community, which I believe have influenced the development of tools and practices.

But, of course, in the very question we see the assumption of compartmentalism that characterizes old-school thought. Why would we need a specifically educational theory? As though learning is some practice or discipline totally separate from, and totally unrelated to, the rest of our lives? We've left people like Dewey and Moore behind; we've left things like transactional distance and other cognitivist information-theoretic approaches behind.

People working in Web 2.0 in learning have drawn from a variety of sources - social network theory, social media theory and criticism, connectionism and other approaches to AI - as well as specific works, such as the Cluetrain Manifesto and the Hacker Ethic (to name a couple). And more than a few people working on Web 2.0 in education have referred to people like Illich and Freire.

True, you won't necessarily find these theories described in Warlick and Utecht. But you are looking in the wrong place, if that's where you're looking. Not everybody is a philosopher; not everybody is a theorist.
  • Although a principle of the Web is the democratization of knowledge, this is an abstract concept to educators raised on textbooks and being commanded to recite from scripted lesson plans.
Right. One of the debates I've had recently centres around the textbook. And it's not just that the textbook is an inefficient paper-and-ink publication. It's the whole idea of standardization and lesson plans and curriculum that the textbook brings with it. We shouldn't stay off textbooks merely because they cost too much. We should stay off textbooks because we get a better education as a result.

People often ask me how long I think it will take before we see the changes I describe - changes brought about by the democratizing powers of the web - to be realized in schools. My response is usually, "we won't." The changes we see in learning won't happen in schools. They'll happen outside of them. And they are very likely to make schools irrelevant.

And it's not just the sort of changes we already see in Knowsley - though it is that. It is also informal learning and workplace learning, it is also online learning communities, and it is learner-directed learning.

Logo probably never had a chance. As Stager says, quite accurately, "As more computers were delivered to schools and the enthusiasm of the early adopters were drowned out by teachers with other priorities, Logo became harder to sustain in schools. Add commercial pressures that devalued children making their own software (for obvious reasons) and the rest is history."

My thinking is that Web 2.0 will be more successful doing outside schools what Logo attempted to do inside them.
  • The greater Web 2.0 community has little interest in reforming education.
Yeah, we've pretty much given up.

As Dave Pollard says, "Bucky was right: 'You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.' We won't win zoning battles or economic control battles or electoral system battles or proportionate representation battles in the courts or the election campaigns or the markets that are controlled by the elite. We must instead walk away from these corrupt and dysfunctional systems and build new ones, responsive and responsible and sustainable alternatives that others can look at and say 'yes, that works much better'."

When I speak to teachers these days, I don't tell them how to improve the way they teach their students. I talk to them about how they can improve the way they teach themselves.
  • Web 2.0 attracts very little interest in the educational psychology or even teacher education communities.
Maybe. But that's kind of like saying that communes attract little interest in the editorial offices of the Wall Street Journal.

I would not expect educational psychologists, or even teacher education specialists, to have a particular interest in Web 2.0. Not simply because they are pretty much immersed in the old-school tradition, but because they don't really work in the field of educational technology (and compartmentalism Rules All).

But I would expect them to be impacted - eventually, because the speed of academic journal enquiry is glacial - by many of the themes and ideas behind Web 2.0. I find, for example, a lot of discussion in educational psychology about learning networks. Why would that be? Because many of the principles behind Web 2.0 are the same as those principles behind the new theories about neural networks.

The things we as a community talk about can be found throughout the literature. Access to online learning resources. Learning networks. Bringing learners together for knowledge sharing. Peer tutoring in ad hoc transient communities. And incidental learning through games and recreation. I could go on and on and on.
  • There exists very little peer-reviewed scholarship regarding Web 2.0. In fact, many people in the blogosphere are openly contemptuous of theory and scholarship in favor of "the wisdom of crowds," a new and popular, albeit inherently anti-intellectual world-view.
My estimation would be that the scarcity of peer-reviewed scholarship on Web 2.0 specifically is more reflective of the relative newness of the term than of a dearth of academic interest in the subject. I suspect as well that there is some trepidation on the part of academic writers about employing what appears to be nothing more than the most recent catchphrase. Justifiably.

There is no dearth of research, peer reviewed or otherwise, in the topics surrounding Web 2.0. A search in Google Scholar, for example, returns hundreds of results for 'folksonomy'. Thousands of results for 'recommender system'. More than a hundred thousand results for 'social network'. Six hundred thousand results for 'neural network'.

This quantity of results is important, because this research (and much more besides) forms the basis for what Stager is calling an "inherently anti-intellectual world-view".

Stager's criticism is like saying that the brain is 'inherently anti-intellectual' because there are no 'super-neurons' to which all other neurons must defer. Like saying that markets are 'inherently anti-intellectual' because there is no arbiter of supply and demand. Like saying a forest is 'anti-intellectual' because nobody organizes the trees and the shrubs.

It's a criticism that makes sense - barely - against a particular application of the theory, one which favours connectionist decision-making mechanisms over authority-driven mechanisms in certain intellectual enterprises, such as the evaluation of research and project funding.
  • By definition, the Web 2.0 community is leaderless. Too often, non-equivalent opinions are given equal weight without a demand for evidence or supporting arguments.
This is a misleading argument clouded in vagueness. Who are these people who are giving 'equal weight' to non-equivalent opinions? What does that mean, even, in this context?

The suggestion, of course, is that you need a 'leader' in order to assign these weights (or, perhaps, to be assigned these weights - like I said, it's vague).

Of course, nothing is actually given equal weight to anything else. When we read a paper - or a post - we assign a subjective weight to it. Each of us assesses the paper, and decides for ourselves whether the paper has any value. Some people may emerge as having produced works of greater subjective value. But those 'leaders' insist that they are not leaders. This is a good thing. But let me explain what it means.

When Stager is talking about 'equal weight', what he is talking about is a priori weight. That is, the value that would be assigned to paper A or paper B before it is even read. The 'leaders' of the movement are (presumably) those people whose works have the greatest a priori weight.

All other things being equal, the criticism is apt - no paper is presumed to have any inherent value simply because it was written by some expert or authority. But this does not mean that readers must wander aimlessly through the interweb, seeking and never finding the most useful material. There are many ways to find good content, usually through some process of recommendation. Reputation does play a part in such a system, but not the central and defining role it plays in more traditional systems. Nor should it.

The fact is, even if I had the best ratings in the entire educational web 2.0 community, it could still be the case that my next post will be a dud (some would say it's likely!), while the contribution by some unknown might be sparkling and brilliant. What we want is a system that demotes the dud and promotes the new work. This is exactly what an authority-driven system prevents.
  • There is very little material written for educators on using Web 2.0 tools in a creative fashion. Will Richardson's book is a fabulous resource for understanding the read/write web, but hardly offers provocative project ideas.
Unless educators require their own special books, maybe written in big print or something, this statement is empirically false. There have been reams of materials written on how to use Web 2.0 applications.

Not so much, though, on how to use Web 2.0 to enforce order, force students (and teachers) to follow the curriculum, increase standardized test scores, or any of the education-as-industry sort of activity. Insofar as Richardson's work fails to offer provocative project ideas, it is because it is working within the constraints of 'school'.

This past weekend I taught myself how to make my own personalized Google Maps, how to share my lecture notes with the world, how to find MP3 files almost instantly, and more. It doesn't require someone to come along afterward and write something especially for teachers, telling them to have students create their own custom Google map for, say, a biology project, does it? Or, for that matter, for students to come up with these ideas on their own?

We expect people to find their own way, not to be told what to do. In old-school thinking, teachers and students follow a manual or guidebook. In new-school thinking, they write it.
  • No matter how cool, powerful or revolutionary Web 2.0 tools happen to be, there are few if any mature objects-to-think-with embedded in them and certainly no explicit statement that their use is designed to transform the learning environment.
I find my RSS reader and blogging engine to be great objects to think with. That this is not their sole function is a strength, not a weakness.

As for the 'explicit statement', Stager is welcome to read Educational Blogging and get back to me.

More seriously, this point again hits on some of the points made above. The idea that there should be exclusively educational applications. The idea that everything should all be spelled out in a guide. The idea that the impact should be inside the school environment. None of these is the case. In fact, just the opposite.
  • The emphasis on information reinforces passive pedagogical practices, whether intentional or not.
This is just a misstatement of Web 2.0.

Web 2.0 is about doing, not consuming. You can see this expressed any number of ways by any number of people.

The most charitable interpretation I can give to Stager's point is that Web 2.0, because it is a computer-based medium, necessarily involves nothing more than information processing.

But in fact, if you are producing, rather than merely sharing or consuming, information, then you are necessarily getting up and away from the computer screen.

This is one of the key differences between Web 2.0 and many of the approaches to learning that came before.

Think about what's involved, say, in creating a photo album or making a video. You have to go out into the community, talk to people, place yourself in a position to observe, interact with the environment, and more. Then you have to process that experience, to reflect on it through the act of creating the blog post, video or whatever.

It is not possible to learn passively in a Web 2.0 environment. Page-turners, just like classrooms, are inherently 1.0.
  • While they may be really powerful or innovative software applications, a teacher simply does not need Skype, Google Earth or Second Life. Using them will do little to challenge conventional classroom practice. Some of the richest examples merely enhance the existing curriculum.
In related news, it was discovered that adding wings to trains did little to get them to the station on time.

While some of the reschoolers focus on the changes that Web 2.0 could make to the classroom, let me stress again that the people who work in the field mostly think that the greatest benefits of Web 2.0 are felt outside the classroom.

(I should also point out that none of Skype, Google Earth, or Second Life are, strictly speaking, Web 2.0 - but I can leave such trivialities to the side. I suppose.)

Web 2.0 applications may not be the greatest teaching applications in the world. But they revolutionize learning.
  • Web 2.0 requires robust ubiquitous access to the Internet. Most schools have demonstrated an inability to trust teachers and kids online and as a result create insane barriers to teachers using the Web in an educational fashion.
Quite so. More of the same.

Though I would point out that many Web 2.0 applications require lower bandwidth than their inefficient 1.0 predecessors. YouTube video, for example, works well on iBurst technologies in Africa (I tested it) while a 20 megabyte .mov is a non-starter. AJAX applications require only small status updates rather than complete page loads. Many applications, such as Zoho and Google Reader, are designed for use offline as well as online. Things like SMS and instant messaging are optimized for efficiency in small, burst transmissions.

That's one of the great things about Web 2.0. It's a great leveller. Tools like Slideshare and Blogger and YouTube and the rest mean that everybody can produce and consume multimedia, not just a select few. It's internet access the way it was meant to be.

Schools may be blocking access to Skype, weblogs, Facebook, and the rest - but in so doing are only pushing themselves closer to irrelevance. They are certainly - despite their best efforts - not blocking students.
  • By definition, Web 2.0 is temporal (just wait for 3.0) and new tools emerge every hour. As a result, teachers don't see a reason to invest much time in mastering technologies that will be obsolete or leapfrogged tomorrow. For many enthusiasts, collecting the tools is as important as using them.
Fortunately, one of the principles of Web 2.0 is that the tools shouldn't require a lot of mastering.

Thick software manuals and how-to guides are the legacies of a 1.0 world. People expect to learn a tool as they're using it.

They expect this with other learning too. The whole idea of stopping everything you're doing to 'learn something' is old-school thinking.

As for 'collecting the tools' - it's more like a conveyor belt. As new tools are collected at the front, old tools are dropped off the back. Which explains why, the last time I opened Microsoft Word, I got the 'software personalization' dialogue.
  • Times have changed. Few Americans protest anything, not the war in Iraq, not the erosion of civil liberties. Educators don't even fight overly restrictive and counter-productive network policies that castrate the Internet. Has ISTE raised the issue before Congress? Has the NEA made this an issue of working conditions? No, there is little appetite for rocking the boat. We have become passive and compliant just like our schools wish for our students.
This comment goes well beyond the domain of Web 2.0 (not to mention being incredibly culturally-centric) but here goes...

As I commented above, people have pretty much given up on trying to reform the existing institutions. We've seen a lot of people try. Meet the new boss... same as the old boss. Why bother to fight the restrictions? School web is blocked? Just use your iPhone. Policies are overly restrictive? Just ignore them. I mean - what are they going to do, fire you from your $25K job? Why rock the boat when it's going over the waterfall?

People are not just opting out of traditional education. They are also opting out of traditional business and traditional government. Making their own decisions instead of trying to sway bodies that purport to make decisions for them.
  • I know I'll get flamed for this, but the educational Web 2.0 community has little first-hand experience in social activism and scant knowledge of existing school reform literature. Like the discovery of new tools, one gets the sense that proponents of Web 2.0 in education are discovering educational theories here and there and then applying these ideas to the new tools.
Different explanations for different people. If Stager wants to focus on Warlick, Utecht and Richardson, that's his prerogative.

Speaking for myself, I think my activist credentials are pretty solid.

Not that it's relevant. If I'm right, I'm right no matter what my credentials are. If I'm right, I'm right whether or not I know about school reform literature.

Back when I went to school, we called this kind of criticism an ad hominem. And it wasn't worth the paper it was written on. But I will say (if I can say it without being too testy) the ad hominem is a staple of old-school teaching and reasoning, and I don't know where it would be without it.
  • What is the unifying educational theory behind using Skype, Second Life, Scratch and Google Earth?
Here: e-learning 2.0, Connectivism, Connective Knowledge, Learning Networks, and elsewhere.