My Ethical AI Principles

Sign posted in the common kitchen at the campground in Selfoss, Iceland, where this article was written

I understand. Artificial Intelligence (AI) is a technology unlike any we've seen before. It looks like it can do amazing things, and because of that, it also looks like it could get people into a lot of trouble. And we want to do the right thing. So yes, I understand, we want some ethical guidelines for using AI.

Right away, this discussion can go in one of two directions. On the one hand, we might be thinking of the legal risks involved in the use of AI, and how to avoid them. In this case, AI ethics is a bit like a code of practice: by following it, we can show that we took every reasonable precaution to prevent harm.

The other direction we might go is to approach ethics in the sense of what's right. We're not worried about the legality of what we're doing, we're worried about whether we're doing a good thing or a bad thing. It's so hard to tell, sometimes. Feeding a baby would seem to be a universally good thing, but feed a stranger's baby at the mall and you'll find out it's very much not OK.

I think the legal case is the easiest, though lawyers will inevitably make it a lot more complicated. Here is the universal legal principle for AI ethics:

Don't use AI to do things that are illegal.

Another way of saying the same thing is to say: things that are illegal do not become legal just because they were done with AI.

There are a lot of laws, and I can't cover them all, and they vary from jurisdiction to jurisdiction, but I can give you a sense.

- 'spying on your neighbour' - this is called 'stalking' in many locations. People used to be arrested for being a 'peeping tom' and looking through windows. It's still illegal if you somehow accomplish this using AI.

- 'defrauding people out of their life savings' - this is covered under theft or fraud legislation in most jurisdictions, and even if you use AI, it's still illegal.

- 'telling everybody someone is a cheat' - if you said it, and it wasn't true, it was slander, and if you printed it, it was libel; even if you use AI to do it in a very convincing fashion, it's still defamation, and you'll still get sued.

This article could go on at length about this. It is what most of the so-called 'codes of ethics' are trying to define. The codes that have been developed for the various professions still apply to those professions, and define what is legally required of practitioners. Nothing changes here.

Yet another way of saying the same thing is to say that things that are legal don't become illegal just because they were done with AI (now remember, we're not talking about what's right here, we're talking about what's legal).

This is a hard principle for a lot of people to accept because, while they were willing to tolerate a behaviour if only a few people did it, or if they did it slowly, or if it cost a lot, they're much less willing to tolerate the same thing if many people can do it quickly for low cost.

Here are a few examples:

- 'conducting a credit check on a person' - this is done completely legally by companies like Equifax, who charge money for the service, but is deemed morally dubious when someone sets up a free AI-powered credit check web page

- 'figuring out who knows who' - if you read someone's papers and keep track of who they reference, you can build a network of connections, and that's perfectly legal; but it's questioned if you use an AI that does it for everybody using everything they've ever posted online

- 'learning a language from examples' - any human will learn a language by copying the word use and syntax they see and hear and read, but if a computer does this, people suggest that it is somehow stealing the ideas expressed in those communications

These examples may not convince everyone, and as a result they will be litigated, and there may even be some cases where the law is changed to reflect the fact that we don't want everyone to do some of these things with AI, but until the law is explicitly changed, there's no reason to believe legal actions suddenly become illegal.

I offer a second legal principle for AI ethics:

Don't do stupid things with AI.

It's true, 'stupid' is difficult to define, but by the same token, most of us recognize it when we see it. Trying to fly with your car (or anything else that is not an airplane), trying to defy the principle of inertia, signalling left then turning right, tossing water onto an oil fire - these are all examples of 'stupid'. 

I could say it like this: would a reasonable person try to do this with AI, and expect to be successful?

If your use of AI defies common sense, then there is a very good chance that you will get into legal trouble for using it that way. I can't define common sense for you (no set of principles could define common sense) but you'll recognize it when you see it.

In a sense, AI has its own unique brand of 'stupid', as does any technology. Remember when people would follow GPS instructions straight into the river? I once plugged my 2500 watt hair dryer into the back of my 100 watt stereo (in a plug meant for the turntable). Stupid.

How do you learn what counts as 'stupid' in AI? Just as you would learn it for anything else - listen to the lore. There are already hundreds, if not thousands, of stories about stupid uses of AI (you can insert your own list here). Don't do these things.

That's all I have to say about ethics in AI in the legal sense. I don't honestly think it's possible or reasonable to define it more precisely at this point.

What about the morality of using AI? What counts as doing the right thing?

There's a lot that's wrong with our world. A lot of things that are perfectly legal are still arguably morally wrong. Some examples:

- buying a company simply to run it into debt, sell all its assets, collapse the pension fund, and fire all the employees

- selling a car that requires a proprietary internet subscription in order to function, and then raising the subscription price after the sale

- launching a lawsuit that a person can't afford to defend in order to force them to take down their web site

- forcing a woman who was raped to carry a baby to term

You get the idea. Now there are some problems with this list (and indeed, every such list we could come up with).

First, no two people would ever agree on every item on the list. Sure, they may agree on a lot, but eventually, they will disagree. A lot of people won't get past the first few items on the list.

Second, any such list is too long to deal with, even for people who are largely in agreement, and yet there is no single way to shorten the list into basic principles (at least, none has been discovered after 2500 years of trying).

And third, people have tried to come up with principles, and people still don't agree with each other on what principles should count and what shouldn't count, and what can be allowed as an exception, and what can't be allowed as an exception.

The first principle of AI ethics, in the non-legal sense, is this:

Whether you think something is ethical or unethical, it will always be the case that not everyone agrees with you on this.

Subclause: and there are no universally acceptable grounds for determining who is right.

This is a tough pill for most people to swallow. For many people, some actions are not only wrong, they are *obviously* wrong. And yet - not everyone agrees with you on this.

Pick an action (or belief, or attitude - anything, really) that you think might be a suitable candidate. Cannibalism, say. Surely that's morally wrong.

If everybody believes that cannibalism is morally wrong, I may ask, then why do we need a law against it? The very fact that there is a law tells us that some people needed to be prevented by force from practicing cannibalism.

And there are cases that fall right into the margins. The rugby team stranded high in the Andes by an airplane crash, their only possible food the already dead and frozen bodies of their fellow passengers. Wrong? Easy for us to say. But at the time they looked at their circumstances and decided: not wrong.

So here we are. Cannibalism. Not everybody agrees with you on that.

If there is an act - or a type of act (the vagueness of which produces wealth for lawyers) - that most of us believe is immoral, we will create a law, thus shifting our question of ethics into the first sort of discussion: is the action legal or not? And the question of how best to make laws in a society I'll consider out of scope for this discussion.

Otherwise, there really isn't a consensus that applies. 

So here's the next principle of AI ethics, in the non-legal sense:

We cannot define AI ethics by listing the behaviours, or types of behaviours (or beliefs, acts, etc.) that are right or wrong.

I am not the first to say such a thing, and yet it is surprising to me how little-known it seems to be, at least, to judge from the efforts of people trying to define precisely that sort of list.

And it's a really important point, because it extends so far beyond the ethics of AI. How, for example, do we define AI literacy, if we can't define what sort of application of AI is appropriate or not?

Indeed, what we ought to be doing as educators is to help students understand what is the right or appropriate way to use AI, isn't it? How can we do this without knowing which behaviours are ethical, and which are not?

Well - if it were up to me, a course in AI literacy would be focused on *how AI works*, and by implication, would give us some insight into how we, as humans, work (not a 100 percent parallel, of course - just insights).

Now I'm going to make a very long story very short here in the next few paragraphs, so please forgive me, and bear with me.

AI - at least the type that uses neural networks, which is most of it these days - works by pattern matching. These don't have to be exact matches - they can be similar - and if something new is similar enough, in a relevant sort of way, it counts as an instance of a pattern.
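To make that concrete, here is a minimal sketch in Python of what 'similar enough, in a relevant sort of way' might look like. The 'patterns', feature values, and threshold are all invented for illustration; no real system is anywhere near this simple.

```python
import numpy as np

# Hypothetical stored "patterns": feature vectors learned from experience.
# The features and numbers are invented purely for illustration.
patterns = {
    "tiger": np.array([0.9, 0.8, 0.1]),     # e.g. striped, large, domestic
    "housecat": np.array([0.3, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """Similarity between two feature vectors, ranging from -1 to 1."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def recognize(new_thing, threshold=0.95):
    """Match a new observation against stored patterns.
    It doesn't have to be an exact match; 'similar enough' counts."""
    best_name, best_score = None, -1.0
    for name, pattern in patterns.items():
        score = cosine_similarity(new_thing, pattern)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else "unrecognized"

# A new observation that isn't identical to any stored pattern,
# but is similar enough to count as an instance of one.
print(recognize(np.array([0.85, 0.75, 0.2])))  # -> tiger
```

The point is just that recognition here is a matter of degree measured against learned patterns, not an exact lookup.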

There are lots of ways things can be similar. Today's large language models (LLMs) have been trained on words, so the patterns they match are patterns of words, and these are complex enough to give them an ability to emulate language quite well.
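By way of illustration only, here is a toy word-pattern matcher - a bigram model, nowhere near a real LLM, but the same basic idea of repeating observed patterns of words at a vastly smaller scale (the 'corpus' here is made up):

```python
import random
from collections import defaultdict

# Toy training "corpus" - real LLMs are trained on billions of words.
corpus = "the tiger is large . the cat is small . the tiger is striped .".split()

# Count which word tends to follow which: the simplest word pattern there is.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def continue_text(word, length=4):
    """Emit a plausible continuation by repeating observed word patterns."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # e.g. "the tiger is large ."
```

A real LLM matches far richer patterns than word pairs, but it is still, at bottom, continuing the patterns of words it has seen.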

They haven't been trained on real world examples, so an LLM wouldn't recognize a tiger even if the tiger bit it in the dot matrix printer. Other types of AI *have* been trained on images, and would be quite good at recognizing a tiger.

Different AIs do different things. Knowing how AIs do things helps us use them properly, so we don't try to (say) get a large language model to tell us (true) facts about tigers. The LLM is just repeating what people *say* about tigers, a lot of which isn't true (and here we can link back to "don't do stupid things with AI").

So that's part of it.

The other part is - like I said - we can learn a lot about how we as humans work from the tools we've built to work the way humans work. 

Humans, like neural networks, are pattern matching machines (OK, not machines, obviously, and not exclusively pattern matching, but there's a lot of pattern matching going on). Like an AI, we sense regularities, categories of things, rules of thumb, common sense principles, etc.

Some things we need to keep in mind:

- humans are *very* sensitive - we have the full range of internal and external senses, plus ongoing feedback from the effect previous experiences have had on us. 

- humans (like AI) are error prone. The fact that we perceive a pattern depends entirely on what we have experienced, and (almost) not at all on what's out there in the world. 

- our perception of a pattern is marked with a 'feel' for what sort of thing that pattern is. It's probably a different 'feel' for everyone. I like to use the term 'recognition' to describe that 'feel'. When we recognize something, we know it.

Eventually, our computers will do all this too. I don't know whether that counts as artificial general intelligence (AGI) nor do I care. They *will* do this.

So anyhow...

*Among* the many patterns in the world we recognize are some things that we'll vaguely classify as 'right' and others we'll vaguely classify as 'wrong'. We can call this feeling of recognition a 'sense', though it's no more a sense than 'recognizing a tiger' is a sense.

Long story short, then: we have a sense of right and wrong. It's based on the patterns we've learned through experience, and it manifests itself (if you will) as a certain inescapable 'feel' that something is right or wrong.

No wonder we are utterly incapable of using reason to change a person's sense of ethics. We're using a few well-chosen words against a lifetime of experiences that have led us to *this*. 

So the third principle of AI ethics, in the non-legal sense, is this:

We can each recognize whether some use of AI is right or wrong, and this ethical sense is stronger than any set of artificial 'rules' or 'principles' could be.

I am not the first to observe this. Readers interested can begin with people like David Hume and (surprisingly) Adam Smith. Or for that matter Carol Gilligan and bell hooks.

Now I know - someone's going to say, "what if someone has the wrong sense about what's right or wrong?" Or, "what if someone has no sense of what's right or wrong?"

Both cases already exist outside the field of AI. The case of the latter - an inability to recognize right or wrong - actually has a name (sociopathy). The case of the former is also common: people misrecognize things all the time. There's no reason to believe this would not happen with ethics.

So how do we correct for this? 

We need to be really clear about what we're asking here. It could be either:

- how do we correct for errors in our own ethical sense?

- how do we correct for errors in others' ethical senses?

Most writing in this field is addressed to the second question. Ah, the hubris. My own feeling is that it's really hard to correct others' errors when there is so much room for error in one's own ethical sense. But we can talk about that.

Let's take a hypothetical person, and let's take a hypothetical set of values (which we could never write out, because they are fine-grained and based in experience, not language). How would we get that person to recognize whether acts are 'right' or 'wrong' in a manner consistent with that set of values?

Again, all too briefly: that person's experience must be consistent with that set of values. For example, to develop 'honesty' as an ethical value*, a person must experience honesty in their life, and see it celebrated and practiced as an ethical value. There's no guarantee this would work, but without this it certainly wouldn't.

It's a long and complex process, involving family, education, peers, culture, work environment, and so much more. It requires community, care, challenges and difficulties, some luck, and a lot of goodwill. As with all of education generally.

And that leads us to the final question: how do we know the ethical values we are trying to teach others are the right ones? After all, our own ethical sense, like any other sense, can be flawed.

Really, it comes down to what sort of experiences we have, and how we build on them. This leads to the final principle of AI ethics, in the non-legal sense:

Our ethical senses benefit from practice and reflection.

We aren't born being able to recognize tigers (we can recognize some things at birth, but not tigers). But we can learn from experience and from pictures and stories and culture. We learn about real tigers, stuffed tigers named Hobbes, and paper tigers. And if it is important enough, we can learn how to detect them from the slightest movement in the bush.

It helps (a lot) to talk about it, though it's not strictly necessary (a solitary and somewhat lucky orphan could learn on their own to recognize tigers (but not what they're called)). It helps to share experiences or pictures, so our imagination can form part of the pattern that we're learning to recognize.

The same is true of our ethical sense. We are (probably) not born with an ethical sense, and have very poorly developed ethics when we're young. We see what people say, and we see (which may be very different) what people do. And we are party to the judgements they make.

(There was a fascinating subreddit called AITA that discussed practical ethics, and I enjoyed testing my ethical sense against theirs; it has now been spammed by AI with fake questions, I guess for the purpose of training an AI ethicist based on the collective wisdom of Redditors.)

I think that talking about ethics, openly and publicly, without judgement, is a beneficial practice. This is more than just creating lists of values we like. It's sharing what we think about some behaviour, how it makes us feel, what we want (or want instead), and what we're willing to do to cultivate that behaviour.

What we would see is that we do come from very different perspectives that ground in each of us our own distinct and equally valuable sense of what's right and wrong, what's worth doing and what's to be avoided: in some cases an ethics based in care, in others compassion, in others duty, in others (as in my own case) discovery.

Ethics - properly so called - isn't any one of these. It is what results when a person in a society is exposed to all of these, to more or less a degree, for better or worse.

-----

The principles (clip and save)



Principles of AI ethics, in the legal sense:

The universal legal principle for AI ethics: Don't use AI to do things that are illegal.

-- things that are illegal do not become legal just because they were done with AI.

-- things that are legal do not become illegal just because they were done with AI.

Don't do stupid things with AI.

Principles of AI ethics, in the non-legal sense:

Whether you think something is ethical or unethical, it will always be the case that not everyone agrees with you on this.

Subclause: and there are no universally acceptable grounds for determining who is right.

We cannot define AI ethics by listing the behaviours, or types of behaviours (or beliefs, acts, etc.) that are right or wrong.

We can each recognize whether some use of AI is right or wrong, and this ethical sense is stronger than any set of artificial 'rules' or 'principles' could be.

Our ethical senses benefit from practice and reflection.

----

* Note that we can't possibly define 'honesty' - it is far too complex and nuanced - the use of this word is just an example, nothing more.
