AI and Environmental Justice
Responding to: Harnessing AI for Environmental Justice, by request. This originated as a Mastodon thread; I'll preserve the use of the informal 'you' throughout.
--
So, the document is addressed to activists and takes a "yes and" approach to their objections to AI, including in many cases where 'yes' isn't really the response I'd have, but I recognize the need to write to the audience.
I'm also personally less likely to use 'stories' to describe narrative framing, and more likely to use the language of 'frames' and 'setting context'. Again, though, I put this down to writing to your audience.
But I would save my main comment for the term that appears in the title, 'environmental justice'. The paper doesn't really grapple with the term until page 19, and then only in a well-placed call-out quote, specifically: "how might communities most adversely affected by climate impacts contribute to and shape conversations about the development of AI?"
And this leads to my major thought about the document as a whole: how do we ensure people aren't left out of AI?
I mean, when you get right down to it, AI is mathematics. And, it's more efficient to use a computer to do mathematics than to have a person do it. That's why NASA switched from human 'calculators' to computers.
This is the same for any other framing of AI. If you think of it as 'creativity', or 'content generation', or 'pattern recognition', it's going to be more efficient to use a computer than to employ a human.
That's why AI is a social justice issue.
So what do we mean, in this context, by environmental justice (which I take as closely related to social justice)? The paper posits: curiosity, transparency, accountability, diverse voices, sustainability, community, and intersectionality.
This is very much speaking to your audience, but I'm not seeing any theme or idea here. To put it in your terms: what is the story here? These are words they like to hear, but why are they here?
You could adopt a frame of 'justice as fairness' which asserts a narrative of non-harm and inclusion. But this classic of liberal ideology doesn't play well in this community.
Similarly, the utilitarian ethic, which underpins the twin ideals of beneficence and non-maleficence, doesn't play well with this audience.
Unfortunately, these are what generally tend to underlie the 'consensus' on AI ethics (a false consensus, IMO, but still).
What's left? Some kind of Kantian-Marxist critical theoretical approach, or some kind of communitarian ethics-of-care approach.
Your paper takes a little from column A and a little from column B, setting up a class conflict between marginalized people and big tech, setting up a narrative of resistance, and at the same time drawing on ecotopian tropes of 'just enough', diversity, collaboration and intersectionality. Plus, from some third place, sustainability.
So, we come back to: AI is mathematics. And it might even (as I think) capture the mathematics behind cognition, sentience and consciousness (but of course you don't need to believe all that, AGI notwithstanding).
This to me leads to two threads of discussion:
1. Mathematics is undeniably good, but how much mathematics is good? What kind of mathematics is good?
2. The contrast between the ethics of mathematics, captured in (1) above, and the ethics of other things.
I'll do both.
On the first: we can save a ton of time and resources through the use of mathematics.
But math isn't inherently good. Blockchain involves the wasteful use of mathematics just to make something deliberately difficult. Doing statistics just to support gambling preys on the vulnerable.
And there is an inherent uncertainty involved in what we count, how we count, and who does the counting. I personally resist reframing everything in terms of 'value'. Justice does not equate to wealth.
And that leads to the second part: the contrast between a world defined by money, and everything else.
For example, why do we continue to use fossil fuels? Because it's cheaper, and industry doesn't care about anything else.
This debate has almost nothing to do with AI. I mean, maybe AI can make electrical grids more efficient, or nuclear plants safer, but it isn't at the core of the debate.
Big tech and AI are not synonymous.
Imagine someone said, when discussing math, that "resistance and refusal are important pillars." It makes no real sense.
I can see the sense of resisting oligarchy, authoritarianism, and inequality. But it doesn't follow that we should resist AI just because oligarchs and authoritarians use AI. We should resist *their ownership* of AI.
If you take math, and apply it to everything we do, that's AI. It's all of ours to use toward, not against, a world that works for all of us.