What I'm Working On

In response to an in-house request to describe, in accessible language, what it is that I'm working on.

I don't mind explaining, though I will confess it's difficult: the work combines a number of quite distinct ideas in ways that aren't always obvious.

The idea is based in e-learning but isn't limited to that. The challenge of e-learning has always been to locate and deliver the right resources to the right person. A *lot* of digital ink has been spilled on this. Mostly, the way people approach it is to treat online resources as analogous to library resources, and hence to depict the problem of locating resources as a search and retrieval problem. Which in a certain sense makes sense - how else are you going to find that one resource out of a billion but by searching for it?

And some good work has been done here. The major insight, prompted by the Semantic Web, was that resources could be given standardized descriptions. In e-learning we got Learning Object Metadata (LOM), a set of 87 or so data fields that e-learning designers are supposed to provide, in XML format, to describe their learning resources. This would allow for searches - not just keyword or phrase searches, which Google already does, but structured searches. For example, Google has no way to find the resource that is best for Grade 10 students, but if somebody fills out the TypicalAgeRange tag, the resource becomes discoverable.
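
As a concrete illustration, here is a minimal sketch in Python of the kind of structured search this enables. The field names are simplified stand-ins for the actual IEEE LOM elements, and the sample records are invented:

    # A toy repository of learning resources, each described by a few
    # LOM-style fields (real IEEE LOM defines dozens of such elements).
    resources = [
        {"title": "Intro to Photosynthesis", "typicalAgeRange": (14, 16),
         "format": "text/html", "language": "en"},
        {"title": "Cell Biology Lab", "typicalAgeRange": (18, 22),
         "format": "application/pdf", "language": "en"},
    ]

    def structured_search(resources, age):
        """Return resources whose declared age range covers the given age.
        A keyword engine can't do this: 'Grade 10' rarely appears in the text."""
        return [r for r in resources
                if r["typicalAgeRange"][0] <= age <= r["typicalAgeRange"][1]]

    # Find resources suitable for a 15-year-old (roughly Grade 10).
    for r in structured_search(resources, 15):
        print(r["title"])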

That, indeed, has always been the limit of data mining technologies. No matter how good your analysis, you have only the resource itself to look at. And sometimes these resources are pretty opaque - a photo, for example - and while we can (and do) locate resources on the basis of their similarity to each other, we cannot differentiate between physically similar, but otherwise very different, resources. Consider, for example, the problem of detecting pornography in image libraries (from either the standpoint of retrieval or filtering - it's the same either way). It's not just a question of being able to distinguish between porn and sports coverage of a swimming meet, but also distinguishing between porn and medical journals, anthropology and art. Naked bodies always look pretty similar; whether one is scientific or pornographic is a matter of interpretation, not substance.

On the internet, what some people have realized is that this sort of problem is not so much a problem of description as a problem of relation (a good thing, too, because studies showed that nobody was going to fill out 87 metadata fields). A type of technology called 'recommender systems' was employed to do everything from picking music to matching you with your perfect date. A recommender system links three different types of data: a description of a resource, a description of a person, and an evaluation or ranking. In summary, we were looking for statements of the type, "people like P thought that resources like R were rated Q". This formed the basis of the sifter-filter project, which was adopted by some people in Fredericton and became RACOFI. Here's one presentation of the idea, which predates RACOFI; here's another.
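
A minimal sketch of the underlying idea, with an invented ratings table (real recommenders use far more sophisticated similarity measures):

    # "People like P thought that resources like R were rated Q":
    # a toy user-based collaborative filter over invented 1-5 ratings.
    ratings = {
        "alice": {"r1": 5, "r2": 4, "r3": 1},
        "bob":   {"r1": 5, "r2": 5, "r4": 4},
        "carol": {"r3": 5, "r4": 2},
    }

    def similarity(a, b):
        """Crude similarity: agreement on commonly rated resources."""
        common = set(a) & set(b)
        if not common:
            return 0.0
        return sum(1.0 - abs(a[r] - b[r]) / 4.0 for r in common) / len(common)

    def recommend(user):
        """Suggest unseen resources rated highly by the most similar user."""
        others = [(similarity(ratings[user], ratings[o]), o)
                  for o in ratings if o != user]
        _, nearest = max(others)
        return [r for r, q in ratings[nearest].items()
                if r not in ratings[user] and q >= 4]

    print(recommend("alice"))  # -> ['r4']: bob is most like alice, and bob liked r4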

Part of this work involves the idea of the resource profile. This is a concept that is unique to our project. The main point here is that, for any resource, there are multiple points of view. The very same book may be described as heavy or light, as good or bad, as appropriate or inappropriate, depending on who is doing the describing. Crucially, the people producing the book must not be the only ones describing the book (otherwise every book would be 'excellent!!'). That's why we have reviewers. Looking at this more closely, we determined that there are different types of metadata: that created by the resource author, that created by the user of the resource, and that created by disinterested third parties (such as reviewers and classifiers). Once we see these different types of resource and metadata, it becomes clear that thinking of metadata as anything like a document is misguided. Metadata is the knowledge we have of an object - specifically, the profile - and this varies from moment to moment, from perspective to perspective. My paper, Resource Profiles, describes this in detail.
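
One way to picture a resource profile, sketched here in Python: metadata is an open-ended pool of assertions from many sources, and a 'description' is just a view assembled from one perspective. The field names are illustrative, not the project's actual schema:

    # A resource profile as an open pool of assertions, not a single document.
    profile = []

    def assert_about(resource, field, value, source, source_type):
        """Anyone - author, user, or third party - can add metadata."""
        profile.append({"resource": resource, "field": field, "value": value,
                        "source": source, "source_type": source_type})

    assert_about("book-42", "quality", "excellent", "the publisher", "author")
    assert_about("book-42", "quality", "turgid",    "a reviewer",    "third-party")
    assert_about("book-42", "difficulty", "heavy",  "a student",     "user")

    def view(resource, trusted_types):
        """A 'description' is a view of the pool from one perspective."""
        return [a for a in profile
                if a["resource"] == resource and a["source_type"] in trusted_types]

    # Two readers, two different 'documents' for the very same book:
    print(view("book-42", {"author"}))               # the publisher's story
    print(view("book-42", {"user", "third-party"}))  # everyone else's story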

The key here is this: knowledge has many authors, knowledge has many facets, it looks different to each person, and it changes moment to moment. A piece of knowledge isn't a description of something, it is a way of relating to something. My 'knowing that x is P' is not a description of 'x', it is a description of 'the relation between me and x'. When I say 'x is P' and you say 'x is P' we are actually making two different statements (this is why the semantic web is on the verge of becoming a very expensive failure - it is based on a descriptive, rather than a relational, theory of knowledge). One way of stating this is that my 'knowing that x is P' is a way of describing how I use x. If I think 'x is a horse', I use it one way. If I think 'x is a tree', I use it differently. This is especially evident when we look at the meanings of words (and especially the words that describe resources). If I think "'x' means P" then I will use the word 'x' one way. If I think "'x' means Q", I will use it a different way. Hence - as Wittgenstein said - "meaning is use".

The upshot of all of this is that no descriptive approach to resource discovery could ever work, because the words used to describe things mean different things to different people. You don't notice this so much in smallish repositories of only tens of thousands of items. But when you get into the millions and billions of items, it becomes a huge problem (and a bigger one still when you add in the fact that people deliberately misuse words in order to fool other people).

OK. Let's put that aside for the moment. As metadata was being developed, on the one hand (by the semantic web people), as a description format, it was also being developed (by the blog people) as a syndication format. That is to say, the point of the metadata wasn't so much to describe a resource as it was to put the resource into a very portable, machine-readable format. The first, and most important, of these formats was RSS. I have been involved in RSS for a very long time, since the beginning (my feed was Netscape Netcenter feed number 31). It was evident very early to me that syndication would be the best way to address the problem of how to deliver selected learning resources to people. Here's where I first proposed it.

As we looked at the use of RSS to syndicate resources, and the use of metadata to describe resources, it became clear that content syndication would best be supported by what might be called 'distributed metadata'. The idea here is that the metadata distributed via an RSS feed links to other metadata that may be located elsewhere on the internet.

We used this to develop and propose what we now call 'distributed digital rights management'. The idea is that, in the resource metadata retrieved by a user or a 'harvester', there is a link to 'rights metadata', in our case expressed in the Open Digital Rights Language (ODRL). This way, the RSS metadata could be sent out into the world, distributed to any number of people, and stored who knows where, while the rights metadata sat right on our own server, where we could change it whenever we needed to. Since the rights metadata in the RSS file was only a pointer, the rights information would always be up to date. Here are several presentations related to the concept.
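
Here is the mechanism in miniature, sketched in Python. The URL and the rights table are invented; the point is that the syndicated item carries only a pointer, while the rights themselves live on our server:

    # Distributed DRM in miniature: the syndicated item carries only a
    # *pointer* to rights metadata; the rights themselves stay on our server.
    rss_item = {
        "title": "Introduction to Learning Networks",
        "link":  "http://example.org/resources/intro.html",
        "rights_url": "http://example.org/rights/intro-odrl.xml",
    }

    # Stand-in for the ODRL document on our own server. Because harvesters
    # dereference the pointer at use time, we can change this at any moment
    # and every copy of the RSS item, wherever it is stored, stays current.
    rights_server = {
        "http://example.org/rights/intro-odrl.xml":
            {"license": "non-commercial", "fee": 0},
    }

    def current_rights(item):
        """What a harvester does: follow the pointer, read the live rights."""
        return rights_server[item["rights_url"]]

    print(current_rights(rss_item))  # {'license': 'non-commercial', 'fee': 0}
    rights_server[rss_item["rights_url"]]["fee"] = 10   # the terms change...
    print(current_rights(rss_item))  # ...and every copy sees the new terms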

This is the mechanism ultimately employed by Creative Commons to allow authors to attach licenses to their work (and there is a CC declaration in RSS). It is also, belatedly, how other standards bodies, such as Dublin Core, have been approaching rights declarations. To be sure, there is still a large contingent out there that thinks rights information ought always to accompany the object (to make the object 'portable' and 'reusable'). It is, again, the old idea that everything there is to know about an object ought to be in the object. But 'rights', like 'knowledge', are volatile. A resource (such as an Elvis recording) might be worth so much one day (Elvis is alive) and twice as much the next day (Elvis is dead). The owner of a Beatles recording might be Paul McCartney one day and Michael Jackson the next.

The combination of resource profiles, syndication, and distributed metadata gives us the model for a learning resource syndication network. Here are the slides describing the network and the paper. This is what we had intended eduSource to become (unfortunately, people with different interests determined that it would go in a different direction, leaving our DRM system a bit of an orphan - and eduSource, ultimately, a failure). But if we look at the RSS network (which now comprises millions of feeds) and the OAI/DSpace network (which comprises millions of resources) we can see that something like this approach is successful.

That's where we were at the end of the eduSource project. But the million dollar question is still this: how does your content network ensure that the right resource ends up in the right hands?

And the answer is: by the way the network is organized. That - the way the network is organized - is the core of the theory of learning networks.

But what does that mean?

Back in the pre-history of artificial intelligence, there were two major approaches. One approach - called 'expert systems' - consisted essentially of the attempt to codify knowledge as a series of statements, together with rules for the recovery of those statements. Hence the symbolic AI languages, like LISP, and the rule-based systems built with them. The paradigm was probably the General Problem Solver of Newell and Simon, but efforts abounded. The expert system approach brought with it (in my view) a lot of baggage: that knowledge could be codified in sentences, that thought and reasoning were like following rules, that human minds were physical symbol systems, that sort of thing. (This approach - not coincidentally - is what the Semantic Web is built upon.)

The other approach was called 'connectionism' - pioneered by researchers like Rosenblatt, later revived by Rumelhart and McClelland, and famously criticized by Minsky and Papert. It was based on the idea that the computer system should resemble the mind - that is to say, that it should be composed of layers of connected units or 'neurons'. Such a computer would not be 'programmed' with a set of instructions; it would be 'trained' by presenting it with input. Different ways of training neural nets (as they came to be called) were proposed - simple (Hebbian) associationism, back-propagation, or (Boltzmann) 'settling'. The connectionist systems proved to be really good at some things - like, say, pattern recognition - but much less good at other things - like, say, generating rules.
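
To make the contrast with programming concrete, here is a minimal sketch of a single trainable unit - a perceptron learning the AND function from examples. It is the simplest possible illustration, not a model of any real system:

    import random

    # A single 'neuron' trained - not programmed - to recognize a pattern.
    # Here it learns the AND function from examples.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = random.uniform(-1, 1)

    def fire(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

    for epoch in range(50):                 # training: just show it the data
        for x, target in examples:
            error = target - fire(x)        # the classic perceptron rule
            w[0] += 0.1 * error * x[0]
            w[1] += 0.1 * error * x[1]
            bias += 0.1 * error

    print([fire(x) for x, _ in examples])   # -> [0, 0, 0, 1] once trained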

If we look at things this way, then it becomes clear that two very distinct problems are in fact instances of the same problem. The problem of locating the right resource on the internet is basically the same problem as the problem of getting the question right on the test. So if we can understand how the human mind learns, we can understand how to manage our learning resource network.

Connectionism says that "to learn that 'x is P' is to be organized in a certain way", to have the right set of connections. And if we recall that "A piece of knowledge isn't a description of something, it is a way of relating to something. My 'knowing that x is P' is not a description of 'x', it is a description of 'the relation between me and x'" it becomes evident that we're working on the same theory here. The problem of content organization on the internet is the same as the problem of content organization in the brain. And even better: since we know that 'being organized in a certain way' can constitute knowledge in the brain, then 'being organized in a certain way' can constitute knowledge in the network.

Connectionism gives us our mechanics. It tells us how to put the network together, how to arrange units in layers, and suggests mechanisms of interaction and training. But it doesn't give us our semantics. It doesn't tell us which kinds of organization will successfully produce knowledge.

Enter the theory of social networks, pioneered by people like Duncan J. Watts. In the first instance, this theory is an explanation of how a network of independent entities can become coordinated with no external intervention. This is very important - the knowledge must be produced by the network itself, for otherwise we have to find the knowledge in some person, which simply pushes the problem back a step. Networks organize themselves, Watts (and others) found, based on the mathematical properties of the connections between the members of the network. For example: a cricket will chirp every second, but will chirp at an interval of as short as 3/4 of a second if prompted by some other cricket's chirp. Provided every cricket can hear at least one other cricket, this simple system will result in crickets chirping in unison, like a choir, all without any SuperCricket guiding the rest.
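
The cricket story can be simulated directly. Here is a toy pulse-coupled oscillator model in Python - the parameters are invented and the 'nudge' rule only loosely follows the description above, but the effect is the same: the chorus locks together with no conductor:

    import random

    # Each cricket's 'phase' climbs steadily; at 1.0 it chirps and resets.
    # Hearing a chirp pulls every other cricket closer to chirping itself.
    N, DT = 10, 0.002
    phase = [random.random() for _ in range(N)]

    for step in range(100_000):             # simulate about 200 chirp cycles
        chirped = []
        for i in range(N):
            phase[i] += DT
            if phase[i] >= 1.0:
                phase[i] = 0.0
                chirped.append(i)
        if chirped:
            for i in range(N):
                if i not in chirped:        # prompted to chirp a bit sooner
                    phase[i] = min(1.0, phase[i] * 1.05)

    spread = max(phase) - min(phase)
    print("phase spread:", round(spread, 3))   # near 0 = chirping in unison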

Similar sorts of phenomena were popularized in James Surowiecki's The Wisdom of Crowds. The idea here is that a crowd can determine the right answer to a question better than an expert can. I personally saw a graphic example of this at Idea City in 2003 (they don't let me go to Idea City any more - too bad). The singer Neko Case asked the crowd to be her chorus. "Don't be afraid that you're out of tune," she said. "One voice is out of tune - but when 300 voices sing together, it's always perfectly in tune." And it was. The errors cancel out, and we each have our own way of getting at least close to the right note, with the result that all of us, singing together, hit it perfectly.
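
The error-cancelling effect is easy to demonstrate. A toy simulation, with invented numbers: each of 300 singers is off the true pitch by some individual error, yet the average lands very close to the note:

    import random

    # Neko Case's chorus, in miniature: individual errors average out.
    TRUE_PITCH = 440.0                      # concert A, in Hz
    singers = [TRUE_PITCH + random.gauss(0, 15) for _ in range(300)]

    print("one singer :", round(singers[0], 1))                   # may be way off
    print("the chorus :", round(sum(singers) / len(singers), 1))  # close to 440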

So knowledge can be produced by networks. But what kind of networks? Because everybody knows about lemmings and mob behaviour and all sorts of similar problems - 'cascade phenomena', they are called in the literature. They are like the spread of a disease through a population - or the spread of harmful ideas in the brain. This is where we begin with the science of learning networks.

The first part was to combine the science of social networks with the idea of the internet and metadata, which was done in papers like The Semantic Social Network. Thus we have a picture of a network that looks like the social networks being described by Watts and Surowiecki. These have been (badly) implemented in social network services such as Friendster and Orkut. To make this work, a distributed identity network is required. This was developed as mIDm (here and here); today a similar concept, called OpenID, is in the process of being implemented across the internet.

Another part was to provide a set of design principles for the creation of networks that will effectively avoid cascade phenomena. Drawing on the earlier parts of our work, including ideas such as distributed metadata, a theory of effective networks was drafted. Slides, and Robin Good's nicely illustrated version of my paper. The proposal is that networks exhibiting the eight principles will effectively self-organize. This is a very rough rule of thumb, standing in for mathematics that may never be solved, because the phenomena being described are complex phenomena (like weather systems or ecologies) with multiple mutually dependent variables; very simple examples of these sorts of organizing principles can be seen in things like the Game of Life.
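
Since the Game of Life is mentioned above, here is a minimal sketch of it: a few local rules, no global controller, and yet stable organized patterns emerge:

    from collections import Counter

    # Conway's Game of Life. The 'blinker' below oscillates forever.
    live = {(1, 0), (1, 1), (1, 2)}   # three cells in a vertical line

    def step(live):
        # Count, for every cell, how many live neighbours it has.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Alive next tick: exactly 3 neighbours, or 2 if already alive.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    for _ in range(4):
        print(sorted(live))
        live = step(live)   # vertical, horizontal, vertical, ...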

Adding to this was what I called the 'semantic principle', which is our assurance that the forms of organization our networks take will be reliable or dependable forms of organization. The epistemology of network knowledge is described in detail in my papers An Introduction to Connective Knowledge and Learning Networks and Connective Knowledge.

On the technical side, my main attempt to instantiate these principles is embodied in my development of Edu_RSS. I am currently migrating Edu_RSS from the NRC server to my own server, as directed. The idea behind Edu_RSS is that it harvests the RSS feeds of roughly 500 writers in the field of online learning, combines these feeds in different ways, and outputs them as a set of topical feeds. The system also merges with my own website and commentary. The idea is that a system like Edu_RSS is one node in the network - ultimately, layers of the network will be created by other services doing much the same sort of thing. For a description of Edu_RSS see here.
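
A toy version of what such an aggregator does, sketched in Python using the feedparser library (the feed URLs and topic keywords are invented for illustration):

    import feedparser   # widely used Python library for parsing RSS/Atom

    # Harvest many feeds, pool the items, re-publish as topical streams.
    FEEDS = ["http://example.org/alice/rss.xml",
             "http://example.org/bob/rss.xml"]
    TOPICS = {"rss": ["rss", "syndication", "feed"],
              "metadata": ["metadata", "lom", "dublin core"]}

    def harvest(feeds):
        """Pull every entry from every subscribed feed."""
        return [e for url in feeds for e in feedparser.parse(url).entries]

    def topical_streams(entries):
        """Route each entry into every topic whose keywords it mentions."""
        streams = {topic: [] for topic in TOPICS}
        for e in entries:
            text = (e.get("title", "") + " " + e.get("summary", "")).lower()
            for topic, words in TOPICS.items():
                if any(w in text for w in words):
                    streams[topic].append(e.get("link"))
        return streams

    print(topical_streams(harvest(FEEDS)))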

Very similar to Edu_RSS in concept and design is the student version of the same idea, generally known as the Personal Learning Environment. The PLE differs from Edu_RSS in that it depends explicitly on external services (such as Flickr, del.icio.us, Blogger and the like) for data retrieval and storage. The 'node in the network', with the PLE, is actually virtual - distributed over a number of websites - and also very portable (ideally, it could be implemented on a memory stick). I am working on the concept of the PLE both by myself and with external organizations.

Again, the idea behind these applications is to bring some of the threads of this whole discussion into convergence - distributed metadata, content syndication, distributed rights, identity, data, autonomy and diversity of perspective and view, multiple simultaneous connections creating 'layers' of interconnected individuals, and the rest.

The purpose of the Learning Networks project, over and above the theorizing, is to build (or help build) the sorts of tools that, when used by largish numbers of people, result in a self-organizing network.

The idea is that, when a person needs to retrieve a certain resource (which he or she may or may not know exists), the network will reorganize itself so that this resource becomes the most prominent resource. Such a network would never need to be searched - it would flex and bend and reshape itself minute by minute according to where you are, who you're with, and what you're doing, and would always have certain resources 'top of mind' which could be displayed in any environment or work area. Imagine, for example, a word processor that, as you type your paper, suggests the references you might want to read and use at that point - and does it well, without prejudice (or commercial motivation). Imagine a network that, as you create your resource, can tell you exactly what that resource is worth, right now, if you were to offer it for sale on the open market.

That's what I'm working on. In a nutshell.

Comments

  1. There are a bunch of things I want to say in response to this post, but let me start with the obvious: Wow!

    I really like the way you use Wittgenstein to frame some of the linguistic elements at the heart of the problems of the semantic web. If I recall correctly, Wittgenstein's own struggle after the Tractatus was over whether a formal system through which to conceptualize the logical relationship between language and meaning was a misguided path of inquiry. Ultimately, his later work seems to move away from this pursuit into a theory of 'Language-games' that conceptualizes language, as you point out beautifully in this post, as a series of dynamic relations between referents within specific cultural and historical contexts. I have yet to read a more lucid examination of the deeply linguistic theories at the heart of the differing philosophies of computer network-based ontology.

    Your focus on a series of relationships between objects and people, rather than a series of inter-connected descriptions, marks the key difference in understanding how meaning is created within particular contexts in relationship to a particular moment. These two facts -specific contexts and change over time- mark that dynamic complexity that descriptions cannot account for. "A piece of knowledge isn't a description of something, it is a way of relating to something."

    A fascinating question, for me, as a follow-up to the above quote is the following: how is a relation something other than a description? In other words, a description is almost always framed as a way to use language to define the attributes of an object. Yet, a relation may offer something more than this linguistic framework -it suggests a subjective, proximal, imaginative, visual, and tactile relationship to an object. Yet, are we only limited to the linguistic frames of relationships for our networks -can we relate through visual patterns? -our position in a network? -a series of moving images? -a thought that is somehow extra-textual?

    "The upshot of all of this is, no descriptive approach to resource discovery could ever work, because the words used to describe things mean different things to different people." This may be the point of Wittgenstein’s move away from a logical grid for formal language structures into a more relational model of Language-games: the room for slippage and the possibility for generative misreadings, misinterpretations, and the relinquishing of intentionality -and to some degree by extension- meaning. How might social networks premised on relationships be able to incorporate, adjust to, and internalize competing meanings that a semantic description could never disambiguate?

    Such a focused examination of the underlying framework of the semantic web in relation to connectionism, social networks, and the Personal Learning Environment was like a course I've never taken or a book I've yet to read. But when you frame, even if subtly as you do here, these ideas through the developments of a 20th century philosopher's own struggles with the formal structures of language, ideas, and meaning -it takes it all to another level. Thanks for that.

  2. stephen, this is truly breathtaking!

    a curious question about your last paragraph: in what respects would that differ from the semantic wiki-engine personal-webtop interface of www.systemone.at?

    greetings from innsbruck

    martin
