What Is the Soul of Open Educational Resources?


I have already written a short post in OLDaily responding to Heather Ross's post but I would like to expand a bit on it with a short post here.

The gist of her post is that the values of open education run directly contrary to the use of artificial intelligence in education.  "GenAI may be fun to play with and make some tasks easier," she writes, "but the cost to the values of open, the planet, marginalized groups, and humanity as a whole are far too great." I don't agree.

Ross begins by making several points about the use and value of open educational resources at her own institution (quoted):

  • students at USask will save about $1.95 million
  • valuable nutrition information for those who may not have access
  • members of the 2SLGBTQ+ community can find information about various supports
  • faculty members... use them to integrate the SDGs into their courses 
  • increases their engagement through experiential learning and by making strides to improve students’ sense of belonging.

All these are great benefits. To me they are a vindication for the work that I and other members of the OER community have undertaken over the last few decades.

Ross also makes the point that under-represented students in particular can benefit from OER (again, quoted): 

  • When OER is used to increase the representation of learners from traditionally marginalized groups, students are more likely to feel like they belong.
  • When OER or open pedagogy are used to help learners feel like they can address some of the challenges the world is facing, particularly climate change.

I have no disagreement with any of this. As a longtime supporter of diversity, equity and inclusion for all people on campus and off, I am particularly enthused to see OER helping these communities in their learning, social and cultural goals.

Where we begin to part ways is with the view expressed by David Wiley in some promotional text: "using generative AI is a demonstrably more powerful and effective way to increase access to educational opportunity."

I've had my disagreements with Wiley over the years but we are in agreement on this point. Now what it means to say "increase access to educational opportunity" may be another point of contention; creating startups and making money isn't my idea of progress. But we agree on the potential of AI. 

Here Heather Ross disagrees. "Everything I’ve learned about open, everything I’ve ever believed about what the OER movement stands for is the antithesis of what GenAI is and does."

Let's consider the argument.

First, she writes that "Open is about improving access to education and the lives of learners worldwide, not just for those in privileged countries or communities. GenAI is used to create papers and images for the privileged, harming many of the very people we’ve said open is trying to benefit."

In response, I would argue that, all else being equal, anything that makes the creation of learning resources faster, cheaper and more effective is going to help people access education worldwide. I am very cognizant of the fact that billions of people have no internet access, let alone AI. But AI virtually eliminates a key bottleneck: the cost of content production. A reasonably educated person can create a logic text in a day, for example. How can that not improve access?

Yes, AI can be used by the privileged to create papers and images. So what? Rich people can also use roads, but that's no reason not to build roads. It makes no sense to me to say that the only use of a technology must be to provide greater access to less privileged people worldwide. 

Of course, I did say 'all else being equal', and if AI actually harms the people OER is trying to serve, then there is a good argument here. But this harm, to me, would need to be demonstrated. We can't just say 'AI is harmful' and leave it at that. So we'll put a marker here and come back to the question of harm as it arises.

Second, echoing no small number of people, she writes that "While open aligns with several of the SDGs, GenAI is an environmental nightmare, from the energy needed to run the growing number of servers to the immense amount of freshwater needed to cool them (by the way, the same is true for cryptocurrency)."

Nobody doubts that creating AI models takes a lot of power, though if I were more cynical I would argue that it is not alone in this regard. We need to keep AI in perspective. Global electricity consumption in 2022 was 24,398 terawatt-hours and rising. AI, meanwhile, is projected to consume 80-100 terawatt-hours by 2027, well under half a percent of that total. That's more of a rounding error than anything else. No, the real problem is that we're still using hydrocarbons to generate this power.
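To put those numbers side by side, here's a quick back-of-envelope calculation. This is a minimal sketch using only the figures cited above; actual projections vary considerably by source:

```python
# Back-of-envelope check using only the figures cited in this post.
global_twh_2022 = 24_398          # global electricity consumption, 2022 (TWh)
ai_projection_twh = (80, 100)     # projected AI consumption by 2027 (TWh), as cited

for twh in ai_projection_twh:
    share = twh / global_twh_2022 * 100
    print(f"{twh} TWh is {share:.2f}% of 2022 global consumption")

# Prints:
# 80 TWh is 0.33% of 2022 global consumption
# 100 TWh is 0.41% of 2022 global consumption
```

Even at the high end of the projection, AI's share works out to roughly 0.4 percent of what the world was already consuming in 2022.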

If AI helps us move to wind, solar and other renewable energy sources, that alone will have been worth the investment. But AI also helps achieve other efficiencies. I ran an item in OLDaily the other day citing a report that "The average time to upgrade an application to Java 17 plummeted from what's typically 50 developer-days to just a few hours. We estimate this has saved us the equivalent of 4,500 developer-years of work (yes, that number is crazy but, real)."

The environmental argument is a straw man. Nobody seriously denies the reality of climate change or the need to stop dumping pollutants into the air. But we do more damage to the environment with our morning cups of coffee.

Third, Ross argues that "While open is being used to integrate EDI and Indigenization into curriculum, GenAI, programmed by those of dominant groups, often fails to represent or misrepresents members of marginalized communities."

Well, this is true. And it's true because the source data used to generate AI, consisting of books and text and images and other content found on the internet and elsewhere, also fails to represent or misrepresents members of marginalized communities. What I would say about the people who are creating AI is that at least they're trying, which is far more than can be said of most content industries around the world.

As Ross knows well, the misrepresentations described by Maha Bali are perpetuated in global commercial media. They are also perpetuated by governments, social media, advertisers, and pretty much any other form of human communication.

What would improve the representation of marginalized communities in AI is an increase in the contributions of these communities in the data being used to train AI. This is true of representation in general: if we want to hear from marginalized groups, they have to speak. Yes, we need to be listening for them, and yes, we need to amplify these voices. 

AI can help them with this. If it takes a fraction of the resources it used to take to create a useful and usable OER, even if the output has to be corrected for misrepresentation, then there is far more opportunity for people in under-represented groups to create resources where they see themselves reflected in the materials being used in learning. AI-assisted transcription and translation, resource recommendation, community formation and more can also help members of marginalized groups.

It just boggles the mind to think that all the positive impacts of AI would be tossed aside because large language models trained on Twitter produce sub-optimal text.

Fourth, Ross also raised the copyright argument, writing "While open has always called for recognition of the work’s creators and contributors and gratitude for their willingness to share it openly, any such gratitude toward GenAI-created work that was taught on copyrighted works against the copyright holder’s permission will ring hollow."

This is factually untrue. While it is true that many people have called for the use of the attribution clause in CC licenses, the fact remains that it is not required, and many have argued in favour of the CC0 license instead. And while some people have felt that gratitude for open content ought to be expressed, many people (including myself) have argued against the idea that open content is a 'gift' that requires gratitude.

But it doesn't matter anyway. Although the cases are still before the courts, many AI proponents (including myself) argue that learning from content is not the same as copying content. For the most part (and there are exceptions) AI does not copy and reproduce content, it extracts statistically relevant regularities or patterns in the content, such as word order.

Moreover, for any given piece of content produced by AI, there may be hundreds or even tens of thousands of pieces of content implicated in its production. When an AI is trained to use word ordering like 'should be' and not 'be should', for example, it is learning from content. But it wouldn't make sense to 'credit' the authors of the content for this 'discovery'.
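To make that concrete, here is a toy sketch: a minimal bigram counter, nothing like a production language model, with a corpus invented purely for illustration. It shows how word-order statistics can be extracted from texts without storing or reproducing any of them:

```python
from collections import Counter

# Toy illustration: record word-order statistics from a corpus
# rather than storing the texts themselves.
corpus = [
    "access to education should be open",
    "learning resources should be free",
    "content should be shared",
]

bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))  # count adjacent word pairs

print(bigrams[("should", "be")])   # 3 -- 'should be' is a common pattern
print(bigrams[("be", "should")])   # 0 -- 'be should' never occurs
```

What survives this kind of training is a table of pattern frequencies aggregated across sources, not the sentences themselves, which is why crediting any individual author for 'should be' makes no sense.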

Fifth, Ross argues that AI is a form of colonization. "Taking what isn’t yours to create something new without giving credit, having permission, or considering the impact on others isn’t innovation or acting in the spirit of open. At the least it’s theft, at the worst, it’s colonization."

There are different arguments here, but I would argue that all content - not just AI-generated content - takes from someone or something else to produce something that's new. Indeed, I have argued in the past that this very fact is good reason to oppose strong copyright:  

in virtually every article, every post, there is more than a little reuse even of the expressions of ideas, much less the ideas themselves. It's not that I am saying that there is nothing original under the sun. But what I am saying is that there is far less that's original than the supposed originators would like to claim. It is in my view blatantly dishonest to slap a copyright label onto anything you have written unless you are quite sure you have checked and verified the original statement of every idea in your work. For otherwise, your claim to copyright is nothing less than theft.

Now I know that not everyone agrees with me on this. But when copyright holders lay claim to things like the language itself, then there is a great deal of overreach taking place.

But how is any of this colonialism? Ross explains: "Most OER was created by authors who willingly released their work with an open license. Napster was the sharing of music without the artist’s permission."

Now there is a good argument to be made here. Colonialism is (at least in part) the appropriation by one society of some other society or culture's productivity or wealth for its own gain. And it is (at least in part) the imposing of laws, values and cultural elements by one society onto another. And to the extent that artificial intelligence does this, there is a reason to argue against it. 

I think that's a pretty hard argument to make. Certainly, we could argue that AI is no more or less colonial than any other industry created and promulgated by western capitalist industry. But that just makes AI a product of a system of global capitalism.

That's not something that's going to be changed by opposing AI. That requires global political and economic reform. And, while I am in fact in favour of such reform, I would not start by attacking AI. In fact, I would be looking at how to use AI to change it. Just as I sought, since the 1990s, to use digital technology and open educational resources to change it.

There are a few people who have created a cottage industry for themselves by opposing every aspect of artificial intelligence. I think they're wrong, and I am concerned that they are misleading educators about AI. But Heather Ross's article takes it a step further.

This is colonialism:

"No, you don’t get to wash over or destroy the work we’ve done and the great work still to come within the open movement.” If those encouraging the use of GenAI for open or for GenAI to replace open want to play a new game, that’s fine. We can’t stop you, but get off our field."

It's not your field. 

To be clear: I have always welcomed people who promote diversity, equity and inclusion. I think things like accessibility are important, and I think representation is important - that was one of the reasons I thought MOOCs could be so powerful. And yes, when MOOCs were taken over by capitalists and run into the ground, I didn't stop creating MOOCs. I also believe that personal empowerment and democracy are important, and I want (in true Canadian fashion) people to be as free as they can be. All this is important to me. I've been very clear about my objectives for decades now. They align with, but are not the same as, the social justice movement. But I have never said, "You must agree with me on why we're creating OERs in order to be a part of the OER movement." Nor will I.
