On Ethical AI Principles
I have commented in my newsletter that what people have been describing as 'ethical AI principles' actually represents a specific political agenda, and not an ethical agenda at all. In this post, I'll outline some ethical principles and work my way through them to make my point.
Here's how Google's search AI describes some major principles of ethical AI; they are drawn from a wide variety of documents from various agencies that say much the same thing. Your Google Search mileage may vary. I quote:
Ethical AI principles focus on ensuring AI is developed and used in ways that are beneficial and respectful of human values and rights. Key areas include fairness, transparency, accountability, privacy, security, and inclusiveness.
Here's a more detailed look at some key principles:
Fairness:
AI systems should not discriminate against individuals or groups. This includes ensuring that training data is representative and that models are not biased.
Transparency and Explainability:
AI systems should be designed in a way that allows users to understand how they work. This can help build trust and identify potential biases or errors.
Accountability:
There should be clear lines of responsibility for the development, deployment, and use of AI systems. This helps ensure that AI is used responsibly and that those responsible can be held accountable for any negative consequences.
Privacy and Data Protection:
AI systems should protect individuals' personal data and respect their privacy. This includes ensuring that data is collected and used ethically and that individuals have control over their data.
Inclusiveness:
AI systems should be designed to be accessible to all users, regardless of their background or abilities. This can help ensure that AI is used to benefit all members of society.
Reliability and Safety:
AI systems should be reliable and safe to use. This includes ensuring that they are not vulnerable to errors or malicious attacks.
Human-centered design:
AI systems should be designed with the needs of humans in mind. This includes ensuring that AI systems are easy to use, understandable, and beneficial.
Non-maleficence:
AI systems should be designed to avoid harming individuals, society, or the environment. This includes considering the potential risks and consequences of AI systems before they are deployed.
Sustainability:
AI systems should be designed to be sustainable and environmentally responsible. This includes considering the energy consumption and other environmental impacts of AI systems.
These principles are not exhaustive, but they provide a starting point for developing and deploying AI in an ethical and responsible manner.
Now let's look at each of these.
Fairness:
AI systems should not discriminate against individuals or groups. This includes ensuring that training data is representative and that models are not biased.
Ever since John Rawls's landmark A Theory of Justice, fairness has been considered one of the bedrock principles of liberal democracy. The spectacle of Donald Trump repeatedly complaining about how "unfairly" he has been treated illustrates the hollowness of the concept, however. We are often told "life isn't fair", and that truism seems to be more universal than the principle of "justice as fairness".
In fact, there are many cases where we do not want fairness to prevail. Some examples are so obvious they are absurd: we don't want wild animals, diseases and natural disasters to have 'a fair go' when it comes to taking human lives. As fun as it would be to see hunters required to engage in 'a fair fight' with the bear, we just don't allow this.
Similarly, we do not believe that noxious individuals and ideologies should have a fair chance of success. With the exception of Fox News, nobody believes we should be fair to fascism. People like Clifford Olson may receive a fair trial, but only after a very unfair campaign to capture the mass killer. Cannibalism isn't presented fairly as a lifestyle choice in classrooms.
There are also many cases where we don't care whether fairness applies or not. In sports, for example, we give awards to the highest, fastest and strongest, which isn't very fair. In a fair competition, arguably, these differences in ability would be handicapped; that's how it's done in golf and in bowling; why not the Olympics? It's not fair that some people are born in Somalia and others are born in Switzerland, yet we have a global system of laws ensuring that system of unfairness is not corrected. Who decides when fairness applies and when it doesn't?
It is moreover clear that there is nothing like a universal commitment to fairness. It is certainly observable that those who are in power and authority frequently manipulate the rules to preserve their position in society. There has been, for example, a recent negative response to the principle of equity, even though it is widely known that various marginalized groups are subjected to systemic discrimination. The ethic of 'protecting your own' (whatever that amounts to) is widespread and widely practiced, and often runs counter to fairness. I don't agree with it, but its existence in society is a fact.
Fairness works as an ethical principle only if presented from within a certain set of parameters. These parameters were created out of thin air by Rawls through the device of a 'veil of ignorance'. They constitute the assumption that, given a choice, most people would choose a society much like the one we already have, something like a 'meritocracy'. That's an easy choice for a 1970s university-educated American. The reality is much different for people around the world.
'Fairness' in artificial intelligence amounts to the desire to, from a position of privilege, set those parameters that define where we will be 'fair' and where we will make actual ethical decisions. Those with privilege will define 'fairness' one way; those without privilege will define it very differently.
Transparency and Explainability:
AI systems should be designed in a way that allows users to understand how they work. This can help build trust and identify potential biases or errors.
These are very different principles merged together to create an ethical condition whereby we can 'understand' how and why an AI acted or decided the way it did. I think it is difficult to set this as a requirement for a population unable to comprehend basic facts of science, geography and history.
Even more to the point, this proposal amounts to the assertion that 'AI systems should be designed differently than they are'. Most people understand the world in terms of simple principles and truisms, folk psychology and folk science. Causation is not complex in their world (there is no complexity in a world where we 'understand' everything).
In fact (as is well known) AI systems are trained on enormous datasets, adjusting millions or even billions of parameters, and through multiple iterations of combining and filtering arrays of these values they are able to detect regularities. Any human could understand how AI works, because the process involves nothing more than basic math; it would, however, take several lifetimes to check the calculations at human speed.
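For the skeptical reader, here is a minimal sketch of the kind of math involved; it's a toy example of my own (the data, the learning rate, the number of steps are all arbitrary), not any production system, but the operations are the same: multiply, add, compare, nudge, over and over.

```python
# A toy illustration: 'learning' is just repeated arithmetic.
import numpy as np

rng = np.random.default_rng(0)

# Invent some data: the 'hidden' rule is y = 2*x1 - 3*x2.
X = rng.normal(size=(1000, 2))
y = X @ np.array([2.0, -3.0])

w = np.zeros(2)            # the model's "parameters"
learning_rate = 0.1

for step in range(100):                    # each iteration is basic math
    predictions = X @ w                    # multiply and add
    error = predictions - y                # compare
    gradient = X.T @ error / len(X)        # average the errors
    w -= learning_rate * gradient          # nudge the parameters

print(w)  # converges toward [2.0, -3.0]
```

A hundred iterations over a thousand examples is trivial for a computer and already tedious for a person; scale that to billions of parameters and the 'several lifetimes' estimate starts to look generous.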
The challenge of transparency and explainability is as difficult for AI as it is for understanding how and why humans act the way they do, and as with humans, in practice actual transparency and explainability aren't that helpful. What we really want is some way to translate what went on in the other human's head to what goes on in ours, and maybe get a story in terms of beliefs, motives, intentions and expectations. This, though, is exactly what AI ethicists do not want when it comes to machines, so we are left in a bit of a quandary. We anthropomorphize even simple machines because it's how we understand complex phenomena.
In practice, our demands for transparency and explainability leave us with data and information that may or may not be relevant to our understanding. The requirement of transparency, for example, may tell us the source and nature of the training data, which we will then (by intuition) describe as 'fair' or 'representative', without being in any sense able to comprehend the billions of bits of information. We say, for example, that the computer 'reads' or 'copies' the content, when it does no such thing. The meaning we see in a body of data is completely ignored by the computer.
Similarly, the best candidate for 'explainability' of AI processes I have seen is the postulation of counterfactuals. The computer produces a result 'A' and we ask, what would have produced the result 'B'? The semantics of counterfactuals are notoriously murky, however, involving at a minimum a predefinition of context and a range of alternative possibilities, and at maximum a range of relevant possible worlds.
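To make the idea concrete, here is a minimal sketch of what a counterfactual explanation amounts to; the toy 'credit' model, the features and the one-feature search are illustrative assumptions of mine, not any particular explainability tool.

```python
# A hypothetical counterfactual search: given a decision 'A' (deny),
# find the smallest change to one feature that yields decision 'B' (approve).
def model(applicant):
    """A toy 'credit' model: approve if a weighted score clears a threshold."""
    income, debt = applicant
    return "approve" if (0.5 * income - 0.8 * debt) > 10 else "deny"

def counterfactual(applicant, feature, step=0.5, max_steps=200):
    """Nudge one feature upward until the decision flips; report the change."""
    original = model(applicant)
    candidate = list(applicant)
    for _ in range(max_steps):
        candidate[feature] += step
        if model(candidate) != original:
            return candidate
    return None  # no counterfactual found within the search range

applicant = (30.0, 15.0)              # income, debt (arbitrary units)
print(model(applicant))               # 'deny'
print(counterfactual(applicant, 0))   # smallest income increase that flips it
```

Even in this toy case, the 'explanation' depends entirely on which features we permit to vary and by how much, which is precisely the predefinition of context and alternatives that makes the semantics murky.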
Finally, it is arguable that our being able to see or explain the actions of an AI have nothing to do with the morality of the AI or its use. Most people neither see nor can explain internal combustion, and being able to do so has little or no bearing on their statements on the ethics of driving cars and trucks. Similarly, it literally doesn't matter how the sausage is made for people to decide whether or not they enjoy a good bratwurst with their beer.
Accountability:
There should be clear lines of responsibility for the development, deployment, and use of AI systems. This helps ensure that AI is used responsibly and that those responsible can be held accountable for any negative consequences.
A cynic might say that this amounts to the claim that there are no ethics without enforcement. It is the suggestion that unless people are held in some way responsible for what they will do with AI, they will do things that are ethically wrong with AI (in this case, where 'wrong' is interpreted as having 'negative consequences').
In any case, it's not clear that people actually value accountability. If they did, there would be no case for privacy and secrecy, because people would need to be accountable for what they did, whether it was with their money or in the bedroom. When people demand 'accountability' it's usually only for certain things, and usually for things other people are doing.
But in fact, people often are held accountable in society (at least, poor people and scapegoats are held accountable; the more privileged you are, the less accountable you are). Commit a crime, cheat on a test, and you'll be held accountable (in theory) whether or not you use AI. So what is special about accountability where AI is concerned? How is 'using AI responsibly' different from 'using a hammer responsibly' or 'using a gun responsibly'?
The suggestion here is that there is some set of unethical things that can only be done with AI, or conversely, that some things are unethical only if done with AI. These amount to the same thing, because there's nothing AI can do that a human can't do, given enough time and resources. So what sort of things are these?
Mostly, they're the sorts of things really fast processing enables. For example, nobody has a problem if somebody counts every instance of 'and then' in War and Peace, but if an AI does the same thing, it's represented as a form of copyright violation. Similarly, if a person watches everything you do in a public square, and recognizes you whenever you visit, that's fine, but if a computer does it, it's surveillance. Or if a person collects all public information about you and compiles a profile, that's called 'good police work' or perhaps 'credit monitoring', but if a computer does it, it's surveillance.
The difference isn't that humans don't do these things. The difference is that you have to pay humans to do them if you want them done, so only rich people or corporations can do them. But if they are enabled by AI, then anyone can do them. And that seems to be the point where it becomes a problem. But what sort of ethics is it where it's right if you are able to pay humans to do it but wrong if you can program a computer to do it?
Finally, it is arguable that the notion of accountability is a fiction in any case. Most of the time we don't really want to know who is accountable, because that typically involves too many people. This is especially the case for widespread social issues like racism and climate change. Who, really, should be accountable for these? The notion of accountability identifies a problem and then picks out a specific wrong that is the cause of that problem. But the specific wrong is often the wrong that is easiest to identify or the most efficiently prosecuted. That's why even though we would like to hold billionaires accountable for their crimes, governments seem much more interested in the accountability of single parents on welfare.
Privacy and Data Protection:
AI systems should protect individuals' personal data and respect their privacy. This includes ensuring that data is collected and used ethically and that individuals have control over their data.
As has already been noted, this principle directly contradicts the one just above. You can have accountability, or you can have data protection, but you can't have both at the same time for the same data.
I've encountered this in discussions about research ethics. A researcher collects data about, say, an indigenous community in the far north. The researcher makes promises to the effect that the data will be destroyed after the research is completed, thus respecting the subjects' wishes. When the time to destroy the data comes, though, another department intervenes and says the data must be retained. Otherwise, it would not be possible to hold researchers accountable if there were any human rights violations.
So which right prevails? In my experience, this is usually resolved by determining who has more power, and not by determining who is more ethically right.
We could raise a second point here as well, and that is the question of what it is about AI that is special when it comes to privacy and data protection. After all, these were issues that were raised long before AI was anything more than a gleam in Geoffrey Hinton's eye. How does AI make them different? After all, it can't do anything a human can't do; it just does it a lot faster, and with far more data.
The answer, of course, is that AI can draw inferences that are a lot harder (and more expensive) for humans to draw. An AI, for example, can recognize a person by their gait. Humans can do this too, but only for people they already know well; an AI can do it for anyone. But again, though, we are forced to acknowledge that AI simply makes it possible for everybody to do something that was previously only the domain of rich people and corporations. It was OK when it was just the rich, but it's not OK for everybody.
That's why, for example, it's OK for Equifax to connect data from a thousand sources to draw a complete financial profile of you and use that profile to tell people to deny you credit; but it's not OK for you as an individual to do the same thing with an AI and use that profile to deny them a date. It's a funny idea of an ethical principle.
Finally, and this is similar to some of the objections raised above, the idea of protecting personal data has its downsides. Should a person be able to protect their criminal record from being observed? Was it wrong when the Panama Papers revealed tax evasion and corruption on the part of officials around the world? Is it ethical for an infected person to keep the fact of their contagious illness a secret? Should people have the right to hide their license plate information?
In fact, society is composed of a complex interplay of information we keep secret, information we share with only a few others, and information that the public has a right to know. Often data is co-created and hence subject to an even more complex set of principles. There are no absolutes here, and the boundaries are subject to negotiation and change. Different societies with different values draw the lines differently. The real ethical decisions are made when we make these decisions.
Inclusiveness:
AI systems should be designed to be accessible to all users, regardless of their background or abilities. This can help ensure that AI is used to benefit all members of society.
I mean, we as a society cannot even agree that everyone should have access to food, drinking water and shelter. Almost nothing we produce is required to be used to benefit all members of society. It's hard to see how inclusiveness stands as an ethical value specifically for AI.
Indeed, it is arguable that the actual ethic in play in society is an ethic of unequal distribution of wealth and resources (including AI). The idea - and I've heard this expressed more times than I can count - is that without unequal distribution, there is no incentive for people to make their lives, or society, better.
Even in wealthy societies, people who are deemed able to work but who are not working (for whatever reason) are allotted only the minimal allocation of welfare, an amount that by any standard has to be considered as punitive, on the grounds that they could be doing more, but they aren't. It's hard to imagine a change in the ethos of our society that would allocate the benefits of AI to such people.
In other countries, where there is not enough wealth to support welfare, people simply go hungry and homeless. Here the welfare standard is applied on a national scale, where a certain amount of minimal foreign aid (usually provided on terms beneficial to the donor) is provided, a punitive amount really, on the presumption that otherwise these nations (and the people in them) would have no incentive to improve their lot. Again, it's hard to imagine an 'AI for Africa' campaign succeeding where 'food for Africa' programs have failed.
Above, I mentioned the ethos of "taking care of one's own". This applies here as well. The idea of inclusiveness suggests that we are all members of one society, but there are many people who believe that it is better to be members of exclusive societies. Certainly this is an ethic of the wealthy, for whom exclusivity is a status symbol.
If we as a global society really valued inclusion we would be living in a very different world. But the history of the world is the history of the exertion of privilege by one group over another. I wish it were not the case, but I am not so naive as to believe the rest of the world wishes it along with me.
Reliability and Safety:
AI systems should be reliable and safe to use. This includes ensuring that they are not vulnerable to errors or malicious attacks.
I am at work in an office in a city. I was required to drive here from my home in the country, even though I could do the work equally well from there. Driving kills roughly 2,000 people in Canada every year. It's hard for me to reconcile this with a concept of 'AI safety' from the same employer.
It's true that on balance we don't want to develop systems that kill people (unless, of course, we are developing systems to kill people, so they can be used by police or in war). We don't generally want to cause harm (unless, of course, we are in competition with someone, in which case causing harm may be fair). By the same token, we want the things we develop to be reliable, which means that they will do what we expect them to do, and not what we don't expect them to do, unless doing what we don't expect them to do is safer.
As you can gather from that somewhat cynical paragraph, the principles of reliability and safety are subject to a wide range of conditions, ranging from intent to expectations to trade-offs, cost and efficiency. The principle, at best, can be expressed as "all else being equal, we want AI to be reliable and safe". And also secure.
If any discipline can bear witness to the conditions and trade-offs, it's computer science. Our computers are reliable, sure, but not in the way aircraft are reliable. Our computers are safe, but not in the way a bank vault is safe. On any given day people wrestle with unintended consequences of their computers being neither reliable nor secure, often in ways that make them unsafe - not in the sense of "you're gonna die" unsafe but in the sense of "you're gonna lose your life savings" unsafe.
There is abundant evidence that we as a society are willing to be more than flexible when it comes to reliability and safety. More often than not, there is widespread support for campaigns opposed to measures that make things more reliable and safe. In my lifetime I've seen anti-seat-belt campaigns, anti-vaccination campaigns, a pro-gun lobby, and... well, you name it. It's not that being unsafe is an ethos (though for a certain segment of the population it's certainly an ethos), it's that being safe is not the universal value people suppose it is.
The same applies to resistance to malicious attacks. If this were really a universal value, nobody would ever set their password to 'password123' or write it down on a sticky note. Indeed, nobody would use the web at all. People gladly trade away safety and security all the time. It is inevitable they will make the same trade-off when it comes to AI. And again, that's where the actual decisions about ethics will be made.
P.S. I have spent my entire life living under the shadow of nuclear weapons. If people really wanted security and safety, they'd do something about that.
Human-centered design:
AI systems should be designed with the needs of humans in mind. This includes ensuring that AI systems are easy to use, understandable, and beneficial.
I suppose I could counter with the philosophies of people like John Muir and Peter Singer to the effect that not everything needs to be designed with the needs of humans in mind. We also need to design and develop systems with other species, the environment, and possibly even the needs of artificial intelligences in mind.
But the main point being made here, I think, is analogous to the thinking of 'human-in-the-loop' style arguments. The idea is that, ultimately, we want the humans to be in control, and the machines serving our needs. There are a couple of problems with this, though.
First, it runs against empirical evidence. It has often been observed that "the human shapes the tool; the tool then shapes the human". Whatever we build, we will adapt to it. There's no such thing as a design that is purely 'human-centered'. It's an iterative process, where the imperatives and the affordances of human and machine shape each other. This has nothing to do with ethics; it's just a fact about how interactive systems work.
Second, it's a bad idea. There are many cases where it makes sense to have the machine over-rule the human, if at least only temporarily. My car, for example, won't let me back into an obstacle. Instead it triggers an emergency brake which stops the car before it hits anything. This has no doubt saved the lives of countless children from the carelessness of their parents.
Many things are like that. The needs of humans are over-ruled by machines all the time. The ATM won't let me spend more money than I have. The gas pump will not let me have gas for free, even if I really really need it. Even the most inert of technologies prevent me from jumping off high bridges or walking in through an out door.
The same sort of case can be made for AI. As a baseball fan, I would like the final arbiter of balls and strikes to be a machine. The reason is that humans have historically done an awful job at this, even after years of training. I would like AI systems to mark tests and evaluate job applicants because they are far less likely to favour their friends, discriminate against people with foreign names, or have a bad Monday.
Putting humans in control means keeping the humans who are in control, in control. There are so many cases where that's a bad idea I cannot even begin to list them all.
I haven't addressed human-centered design specifically, but again I would point out that in some cases it's a good idea and in other cases it's a bad idea. Generally (though not always) it's a good idea to make the interface human-usable, but to make the mechanics suitable to whatever task they're performing.
Non-maleficence:
AI systems should be designed to avoid harming individuals, society, or the environment. This includes considering the potential risks and consequences of AI systems before they are deployed.
Again, there's a big caveat here: we very often build things specifically designed to harm individuals, society and the environment. They're currently being deployed in places like Gaza and Ukraine. Major industries and billions of dollars are devoted to them. So it matters very much what we decide is worth harming and what we decide should not be harmed.
I, personally, would have included babies and children in the "not to be harmed" category, but large numbers of people apparently disagree with me.
Having said that, I can't really think of anything where we should not consider the potential risks and consequences. As a cyclist, I pay close attention to this. Similarly, when I'm engaged in my unnecessary commute, I'm careful to keep risks and consequences in mind. Most of the time, it's pretty benign. We recently bought a new microwave; the biggest risks were that (a) it might not work, and (b) it might be from the U.S. I also considered the possibility that it might explode.
People generally say things like "This includes considering the potential risks and consequences" as an indirect way of saying "there are serious risks and consequences". The ethical principle here is that it is best to avoid harm, or even more directly, that we should not deliberately cause harm. Unless, of course, we are seeking to deliberately cause harm.
The principle of non-maleficence originates in health care and generally reflects the idea that the treatment should not be worse than the disease. It is often carefully worded, because often one person's medical procedure is another person's stabbing. Context and outcome really matter. There are necessary harms, allowable harms, accidental harms, and unintended harms. All of these come into play, and how we determine whether a harm is unethical often depends on our point of view. Ask any person who has been laid off their job.
I would say that the ethical principle amounts to "don't be evil", but as we have all learned recently, it's not actually legal to embrace that principle if it runs counter to shareholder interests, at least in some jurisdictions.
Sustainability:
AI systems should be designed to be sustainable and environmentally responsible. This includes considering the energy consumption and other environmental impacts of AI systems.
Sustainability, broadly conceived, refers to the Sustainable Development Goals (SDGs), though most of us reduce 'sustainability' to 'environmental responsibility'. The best evidence suggests that this isn't really an ethical principle for most people (and certainly not one that overrides, say, personal self-interest).
After all, while I am sympathetic with the idea that AI should be environmentally responsible, the fact remains that it's among the least of our environmental concerns. Many things have far more impact. If I could work from home, I would be using far less energy even if I did nothing other than use AI. If I did without my morning coffee I would save more power than by quitting AI.
The power consumed by AI is not even a rounding error when compared to the total amount of energy consumed by humans. It's far too small to be considered even that.
In addition, let's compare this with the energy we consume when we're not using AI.
How many translators are there in the world? A (moderately trustworthy) Google search tells me there are 600,000 translators. Think of how much energy and resources it takes to pay for and support 600,000 translators. Though not quite there yet, it appears that AI will replace most of them. That means that whatever energy AI is using is replacing the energy we are using to support 600,000 translators.
Now, true, we are not actually eliminating 600,000 people (that would be quite unethical). With any luck (and an actual system of ethical stewardship) these 600,000 people would live their lives in more enjoyable and gainful pursuits. But that is worth the expenditure of energy, isn't it?
I mean, there's a certain selfishness in the whole energy argument. Things are going well for us, so we shouldn't be expending more energy. But raising another billion people out of poverty will increase our energy consumption, so really, we shouldn't do it. It's the ethical thing to (not) do.
Concluding Remarks
If you detect a certain anger and bitterness in the paragraphs above, good.
We do not live in an ethical world, at least, not by any standard I deem ethical. We live in a world of poverty and war, famine and hardship. We live in a world where there is injustice and deprivation, where corruption and oppression are so often the order of the day. And amid that, the very same people loudly proclaim that they have solved ethics, and that we should follow their lead.
No, ethics does not consist of any of the principles I have listed above, nor of the several others I could add from the many ethical treatises and frameworks I have read over the years. Indeed, ethics isn't at all about what others should do. That's a matter for jurisprudence and law, the systems we have struggled to build over the years to allow us to live alongside each other without (too much) fear and violence.
And indeed, most of the treatises and frameworks that pose as ethical principles are actually about laws and regulations. What kind of behaviour can we convince other people to follow? Or if we can't convince them, what can we put into law (with a suitable accountability framework) so we can force them to follow? That, though, has nothing to do with ethics. It's politics.
I have nothing against politics, except when it pretends to be ethics. Politics isn't about what's right, because we don't agree on what's right. That's why we need politics.
Ethics is personal. It's based in our own sense of what's right and what's wrong (itself a product of culture and education and upbringing and experience and reflection) and is manifest in different ways in different people (and not at all in psychopaths) and for me is a combination of empathy and fear and loathing and - on my good days - of peace and harmony and balance. It consists of what I am willing to allow of myself, what guides my decisions, what I am willing to accept, and what will cause me to push back with a little force or all the might I possess.
What bothers me is that most of the people who write about AI ethics know none of that - they are often good people, smart people, but they often have not done the work it takes to become fluent in ethical thought. And that's fine, so far as it goes, until what they decide is ethical is a standard that ought to apply to everyone. And then, from my perspective, it's like faith standing in for science, and we get pronouncements like "there must be an explanation" or "somebody must be responsible". And that's when I push back.