The Open Journal Format
For some time I have been thinking of launching a journal, because I think that when people talk about 'peer-reviewed publications' they have a point, and that point is that a piece of writing is not merely popular, but also respected and recognized by a particular academic community.
We need such mechanisms because there is too much to read, too much even in narrowly defined disciplines. And there is no particular mechanism for identifying that which is important within a particular discipline. The popularity-based systems, like Slashdot and Digg, cater to certain communities, sure, but tend, eventually, to what we might call a scholarship of the middle - no particular discipline, no particular level of quality, no particular virtue.
That is not to discount the systems whereby content is selected and reified by the masses. I am a regular reader of such lists and they are a constant source of amazement and amusement. High quality content does get selected by the crowd, but not all of it, and not reliably within a certain discipline.
Historically, as I mentioned, content selection for academic materials has been by means of 'peer review'. The process varies across journals, but in its most typical instantiation, proceeds as follows: a writer submits a manuscript to an editor, who reads it. The editor, at his or her discretion, sends the manuscript to a small committee of reviewers. The reviewers rate the submission for appropriateness for publication. They will often recommend changes and improvements. A final version is drafted, and it is typeset and published.
There are, in my view, two major weaknesses of this approach:
First, it proceeds in secret. The manuscript is not viewed by the reading public until after it has been selected. That it is being considered for publication is not known. Thus, if a manuscript is submitted and rejected, it may never see the light of day. This is wasteful. And it results in the very real possibility that high quality works might never be seen because they did not pass the scrutiny of a few people.
Second, the decision is made by only a small number of people. Very few people actually review manuscripts; typically three or four. If these people are not attentive to the material they are reviewing, they may accept substandard material, or reject high quality material, simply because they were not paying attention. Having material reviewed by a wider number of people reduces this likelihood. It creates the possibility for buzz around a selection, a conversation that will result in its being not only improved but also brought to the attention of reviewers.
So how do we fix these things? The approach is to decouple access from review. Specifically, authors' manuscripts ought to be easily and widely accessible prior to publication. In this way they can be read by a large number of people. This does not mean that all people read all articles; there is no need for that. But it does mean that the typical article would be read by well more than three people.
Once access has been enabled, we then need to develop a review process. The problem (in my view) with sites like Slashdot and Digg is that a resource rockets from access to acceptability with virtually no restraints. Insofar as there is a community, it behaves like a pack, jumping from one popular thing to the next, with no sense of direction or consistency.
Also, there is no sense of 'peers' in this process. There's no sense that the submissions have been evaluated by people who have demonstrated their commitment to a certain subject area or background or expertise in the field. It's one thing to say "The Golden Age of Paleontology" was popular; it is quite another to say that it was popular among paleontologists.
But what constitutes 'being a paleontologist'? Traditionally, we have required some sort of certification. A person needs to earn a PhD in paleontology. Then they need to be selected by an editor of a journal to sit on a review board. This qualifies them to review publications in paleontology.
There is merit to this approach. We have it on good grounds that the person is very likely an expert in the field. They have passed a rigorous and formal course of study. They have been subjected to examination, and have the academic credentials to prove it. Very often, they will also hold a position at a university or a research institute. By the time they are selected to become a member of a review committee (and eventually, an editorial board) they will have successfully published a number of articles, establishing their importance in the field.
This approach has served us well historically; however, there are signs of strain. It takes a long time to earn a PhD, under fairly restrictive circumstances. New disciplines and technologies are being developed so rapidly that by the time a person becomes an expert in one thing, it has been replaced by another, which didn't even exist at the time she began her studies. The membership of peer review committees adds to this ossification; their expertise may be in disciplines that have long since come and gone.
Moreover, even if the academic route is a reliable means of establishing expertise, it is no longer the only one. We are seeing with increasing frequency people establish expertise outside their domains or disciplines. We are seeing people, through a process of self-education and public practice, become well established and well respected even in academic fields.
In some cases this is by necessity; it costs a great deal to earn a PhD, much more than the cost of a computer and internet access, and so the informal route is the only means available. This is the case for the majority of people in the world.
And in other cases it is by choice, as no PhD programs exist in a new area of study or invention. This was the case, for example, in internet technology. It had to be built, first, before people could become experts in it, while the people who built it became experts by building it.
So we need to allow for the likelihood that there is a great deal of expertise in the world that exists outside the domains of the traditional academic community. That the path of obtaining a PhD is one way to establish one's expertise, but not the only way. And that there will exist people who can quite genuinely be called experts in practitioner communities, self-selected or intentional communities, communities of practice, and elsewhere.
It is with these thoughts in mind that I have, over time, been thinking of the appropriate sorts of mechanisms for the management of academic journals. And so it seems a good time now to suggest how I think what I'll call 'the open journal format' should proceed.
I call the system 'open journals' not to confuse them with 'open access journals' but to stress that much the same principles are being applied. An open journal will be at heart an open access journal, but in addition, the process of selecting and reviewing articles for submission will also be open.
Here, then, is the process:
- People write articles and post them online. They may be blog posts. They may be contributions to discussion lists. They may be comments or web pages. It doesn't matter how the content has been published online, simply that it be published online, be licensed in such a way that would allow publication in the journal, and be accessible to whomever wants to read it.

Typically a journal would have a 'subscription list' consisting of a set of RSS feeds recommended by its members. The list, available as an OPML file, would allow readers to subscribe to RSS feeds from the larger writing community that produces content relevant to journal readers. The list could be subscribed to as a single feed. An example of this is the Edu_RSS feed. Readers can consult a single source, selecting from the hundreds of posts created every month, to find the few that ought to be included in the journal.

- A journal's 'readers' nominate an article for submission. To 'nominate' an article is to bring it to the attention of the editor and other readers. People may become journal 'readers' by registering at the journal (a journal may, at its discretion, allow anonymous 'readers').
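The 'subscription list' described above is, concretely, an OPML file listing feed URLs. As a sketch only, here is one way such a file could be generated; the function name, feed titles, and URLs are all invented for illustration:

```python
import xml.etree.ElementTree as ET

def build_subscription_list(feeds, title="Journal subscription list"):
    """Build an OPML 2.0 document from (feed_title, feed_url) pairs."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for feed_title, url in feeds:
        # Each member-recommended feed becomes one outline element.
        ET.SubElement(body, "outline", type="rss", text=feed_title, xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

# Example with invented feeds:
print(build_subscription_list([
    ("A member's blog", "http://example.org/rss"),
    ("A discussion list", "http://example.net/feed"),
]))
```

An aggregator that understands OPML could then present all of these feeds as the single combined source the essay describes.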
There are various ways to manage nomination. It is important to keep it fairly simple, and at the same time, to allow buzz and community to develop around an article and around a readership in general. A typical process would resemble a del.icio.us- or Digg-style mechanism, where readers could '+1' an article and participate in discussion of it. Another mechanism would be to read readers' blogs and count the number of members that link to the article.

- The journal's 'members' select from the nominated articles those that they think should be published.

What is a 'member' of the journal? Simply: the founder of the journal plus any person who has had an article previously published in the journal.

To be a member of a journal (I will capitalize this from now on) is not only to have had something published in the journal, but also to have been recognized as a 'peer' by the other authors who have had something published.

The selection process is therefore two-fold: members are selecting not only a submission, but also the person. This means that, to a degree, the candidate's previous body of work will be assessed as well as the actual submission. The role is not one of 'gatekeeping' but of recognition.

Members' votes are public, and they would typically comment on the accepted submissions, perhaps suggesting improvements. The number of articles published each month would vary, depending on the members' selections, but would typically be small - the top five vote-getters, say.

- The submission is prepared for publication. It is submitted into the journal editing system; spelling and grammar are checked, links to references confirmed, and the like. Galleys are created for the print edition (which would be published at a print-on-demand service such as Lulu). The author, in consultation with the editor and members, may make changes to the original submission at this point. The issue appears as an open access publication on the web.
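The nomination-and-selection steps above can be sketched in a few lines of code. This is only an illustration, under assumptions not stated in the essay (in-memory data; an issue size of five; ties broken by vote order):

```python
from collections import Counter

def select_issue(nominated, member_votes, issue_size=5):
    """Choose an issue's articles from the nominated pool.

    nominated: set of article URLs brought forward by readers
    member_votes: dict mapping member name -> list of URLs voted for
    issue_size: articles per issue (the essay suggests around five)
    """
    # Members' votes are public; only votes for nominated articles count.
    tally = Counter(
        url
        for votes in member_votes.values()
        for url in votes
        if url in nominated
    )
    # The top vote-getters make up the issue.
    return [url for url, _ in tally.most_common(issue_size)]

# Example with invented articles and members:
issue = select_issue(
    {"a", "b", "c"},
    {"founder": ["a", "b"], "member1": ["a"], "member2": ["c", "a"]},
    issue_size=2,
)
```

Anything more elaborate, such as weighting votes by member seniority, would be a design choice the essay leaves open.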
The idea of such a system is that there are mechanisms that balance the journal between popularity and rigor.
It is possible for a journal to become too much of a clique, for the members to select only each other's papers. If so, then the people who are being left out can found their own journal. Because nominations are public, it will be easily evident which journals are the most difficult to get into because of quality, and which are the most difficult to get into because of exclusivity.
Will this work? I think it will. It might not work for any particular journal - some journals may simply not attract readership because the writers admitted were not of high quality, or because the members make poor choices, or because the subject area is simply not useful or appropriate. It will take a certain amount of momentum to launch a journal, a momentum that can be gained only by having qualified people and quality ideas to begin with.
I realize that similar projects have been tried by others. I am especially aware of David Wiley's attempt - and this is very similar to that. With one exception - that it draws from content that people have already posted. There is no particular danger of 'rejection', no chance that your submission won't see the light of day, because you don't even submit it. The process of recognition and nomination is undertaken entirely by your readers.
I am seriously considering this. I invite your comments.
Stephen, I like the idea. I was going to mention Pitch, but of course you already had it in mind.
What you're proposing sounds a bit like an academic-oriented Digg website. Submit a link, have it reviewed/rated, and then the highest-rated submissions for a period "make the cut" for inclusion in the journal.
Is that what you've got in mind? Sounds like it might be able to straddle both traditional and unconventional "expert" definitions - could have a review board for topics, as well as encouraging comments and submissions from members/public at large.
> The number of articles published each month would vary, depending on the members' selections, but would typically be small. The top five vote-getters, say.
Why do this? Is electronic ink too expensive? If what you want is to help the readers pick a small set of papers, there is another way: readers could, for example, select only papers that were appreciated by one of their peers. A journal could accept 100 papers, but I might want to read only those that were reviewed by Stephen Downes.
It may not be easy to draft a top 5 list. Say you have one article that got 3 reviews, including a poor one and 2 good ones, and another article that got 20 reviews, all of them good except for two.
What do you do then?
I may write a paper on "the current state of e-Learning" and it may attract many, many readers. Then I will write a paper on "Online learning and graduate programs in Information Technology" and maybe five people will ever read it. These two papers will get a very different readership and are not comparable in any way. Yet, both should get picked somehow.
I think that acceptance has to be a pass/no-pass thing, not a popularity contest.
> spelling and grammar are checked, links to references confirmed, and the like.
I assume that the author pays for this?
I do not mind if that is the case, but please do not tell me "someone will do it for free".
> it draws from content that people have already posted.
Well. This is probably a good idea since it discourages people from submitting crap on the odd chance that it may get positively reviewed, and with the intent of fixing it later.
However, it is not novel. I already post online content that I submit to journals. It is very common in Physics and other fields to post a "preprint" online during the review process.
The main difference here is that you require public posting of the content, not that you allow it... because it has always been allowed (somehow).
To D'Arcy: yes, that is basically what I have in mind.
To Daniel: the purpose of the exercise is to create a nice package that collects the very best of a certain community. The benefit is to the readers, who are reading this package. There is no reason to collect it in one place, but (by the same argument!) there is no reason not to. And since a print publication would accompany the final version - as a non-digital archive - it makes sense to have a digital version.
I do not anticipate anything like author fees, nor do I anticipate creating a bunch of unpaid labour for the author. I have my thoughts on the 'commercial' model, but for now I want to focus on content management and selection.
Finally: I am not making any claim that what I propose is 'novel'. What I am saying is that this is how I think it should be done, this is how I am thinking of doing it. Has somebody tried this model and failed? That would be useful information.
There is also the issue that certain members of the community will vote in a "populist" manner even though the article will not hold credibility. This is very much the case in Australia, where several excellent projects are not written up and remain buried because the individual is not in the "clique". It is amazing how many carve out and support each other in securing lucrative contracts, yet their "content" is marginal - and many of these people are key players within the conference scene.
Whilst the process has great merit, it also has its downsides in validating marginal works.
Here at Utah State we've been working on a very similar idea. The big difference is we wanted to separate the review board from the selection of the articles. In other words, the crowd picks the articles, the board polishes them up.
There was some disagreement (and there still is) about how we wanted to handle the voting system. For a while, we were thinking:
+2 Accept with minor revision
+1 Accept with major revision
0 Neutral (ah, there's a controversy rating)
-1 Decline
Each registered user gets one vote per article. Our feeling is that most won't bother spamming something as oblique as an instructional technology journal.
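The scale described in this comment could be tallied as follows. This is only a sketch; the names and the 'controversy' measure are mine, not taken from the rjournal code:

```python
# Vote weights from the scale described above.
WEIGHTS = {
    "accept_minor": 2,   # Accept with minor revision
    "accept_major": 1,   # Accept with major revision
    "neutral": 0,        # Neutral
    "decline": -1,       # Decline
}

def article_score(votes):
    """Sum the weighted votes; one vote per registered user."""
    return sum(WEIGHTS[v] for v in votes)

def controversy(votes):
    """One possible 'controversy rating': the share of neutral votes."""
    return votes.count("neutral") / len(votes) if votes else 0.0

score = article_score(["accept_minor", "accept_major", "neutral", "decline"])
```

A cutoff (say, publish anything scoring above zero) would turn this into the pass/no-pass decision Daniel asks for above.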
To Daniel: Although an electronic journal can accept a potentially unlimited number of articles, the challenge of editing and vetting them in a reasonable time frame proves to be onerous.
In any event, the software is open source, so anyone can download it (it's not yet functional, but those with coding experience might be interested). Just search for rjournal at rubyforge.org.
Publications going online is certainly the direction of the future. This is what I believe. I agree with what you said, Downes.
Regarding Daniel Lemire's concern, I always believe that the general interest of the public will decide the value of any literature. In fact, the value of a paper should not be decided by the authors. Certainly authors themselves always think that their work is very valuable. Only the general public has the final judgment of the real value of a paper.
In most cases, a less-viewed article is certainly less valuable than a highly viewed article. This is statistically true. Though there must be exceptions, no system could be ultimately fair for everything.
-- Yihong
Stephen:
Interesting idea! I wonder if there is an issue with incentive: if someone has eschewed the peer-review journal scene and has become a recognised 'expert' through their own self-publishing efforts (blogging for instance), then do they have much incentive to go through a selection & editorial process? After all, they are enjoying the benefit of self selection and self-editing!
I appreciate this might be an edge case - there aren't, after all, so many people in this position, but I wonder if, occasionally, the very 'best' content might be excluded from such a journal?
I am intrigued by this nonetheless and look forward to reading more comments!
Paul
> Interesting idea! I wonder if there is an issue with incentive: if someone has eschewed the peer-review journal scene
The advantage of this proposal is that it caters to such people. They post their writing wherever they want. They don't need to submit or anything. Someone else nominates the article.
Hi Stephen,
Sounds good. I'd love to see a journal of this type initiated. In particular, your emphasis on "voting articles in" instead of submission is innovative. It takes advantage of the network to initiate filtering quality ideas...and then subjects it to peer review (with peer defined less on merit and more on contribution).
As you may recall, I posted an article on "scholarship in an age of participation": http://www.elearnspace.org/Articles/journal.htm. We have a slightly different approach than what you advocate (and I expect, we should be issuing our first call for papers over the next week or so). We used OJS as the core for formal review...and wrote a community journal section (ruby) where we can "foster scholarship" and exchange ideas. This section of the journal has some of the components you mention (annotating, rating, posting short ideas/blog posts, etc.). What you suggest is more distributed - i.e. posts are nominated for inclusion. What we've done is create a community space where individuals can post their thoughts/articles for dialogue. You add the component of network-based filtering (which is a great idea, especially for bloggers and others who are used to dialoguing in open environments with distributed conversations).
All of that to say, good ideas, Stephen. I'm not sure if there is opportunity to overlap concepts, but I'd be willing to assist you with your endeavor in any way I can. By the same token, if you have interest in being involved in the journal we're working on, you'd be most welcomed. I think we could both benefit from awareness of what the other is doing.
The opportunity for increased scholarship focus is important...and hopefully we'll see numerous journals and approaches in line with the principles of openness we both advocate.
George
Hi folks
I have had a go at explaining some thoughts here
http://eduspaces.net/janeth/weblog/
that *must* have taken you more than 30 minutes to hash out :)
i think it is a brilliant idea, and others would too.
to compare 2 social bookmarking sites: furl and delicious... delicious used to have a niche user group compared to furl. maybe it was more programmers, IT ppl, a younger generation- i can't put my finger on it. but then it became more diverse, as it got bigger. multiple languages supported... and then, it became a lot of white noise.
also like tribe.net- tribe seemed quite niche to start, with more californians or burning-man goers. it offered an alternative conversation to places like friendster or myspace. then, it became bigger, and again more diverse- and more, em... muddy.
i think it's great to have one-size-fits-all tools like delicious and tribe.net, but to remain attractive and useful, niche spaces would be better.
i like the idea of a topical and focused peer-reviewed space- to keep submissions on-topic. or the 'academic oriented digg site'. great metaphor. maybe you're better off starting esoteric and small, with a limited topic or focus.
just some thoughts. 5 or so mins on my part. :)
Everyone can create their own professional portfolio to publish their own ideas. I have done so at http://digitlearn.blogspot.com as part of the requirements of my Master's in Distance Education. Any time there is selection it makes me fearful of what I call "paradigm stratification".
Hi Stephen. Reckon the concept is sound and, as Anonymous has suggested, in Australia the weight of things being sent out seems to favour either the small doyens who self-reference a lot in order to help improve funding opportunities, or those that have Govt funded positions in certain educational areas.
Neither good nor bad, just limited in scope. I've read a number of very good pieces that I know will never see the light of day due to the content being somewhat critical or an alternative to the 'chosen voices' and 'status quo' mantra. This picks up on your thinking about the selection criteria having limitations that might both promote less quality work and deny quality work.
For what it is worth, I have both authored and coauthored pieces ready to go.
Marcus Barber
ojs seems to be quite robust
ReplyDeletewhen we tried to prototype something like this, namely a loosely edited metazine, http://newfocus.hu/, we've used scuttle, http://sourceforge.net/projects/scuttle/, what del.irio.us is based on.
all participants who are registered can edit each other's stuff, and adding a 'bestof' tag means that an item has been approved by more than one editor.
Change This (nowadays defunct, taken over by http://800ceoread.com/) had a nice system where you could post a proposition on what you'd like to write, to see if there's enough interest. That is how the Personal MBA was born. http://www.changethis.com/17.PersonalMBA
I wonder what's the easiest way to implement 'trust based on action/contribution'.
Surprisingly, the day you posted this article, I was releasing the first pre-alpha version of a piece of software which tries to implement just the idea you present here.
I think it will work too, which is why I'm trying to make it. Any comment or help is welcome.
The project is called Sci-Wi, and is free software, available under the GPL on SourceForge:
Sci-Wi web site