Responses to Pew on the Future and Ethics in AI


If you expect change, what do you think the “new normal” will be for the average person in 2025? What will have changed most? What will not change much at all?

Despite the economic turmoil that resulted from the pandemic, the net result will be an increased recognition of the role of governance and civil society. This will be seen in greater support for social and economic programs, including, for example, public health care and income support. It will also be seen in greater support for civic accountability, including new controls on policing and greater access to services for minorities and underserved populations. And it will be seen in a wider recognition of social responsibility, for example a return to more progressive taxation, especially corporate taxation, as a response to income inequality.

What hopes do you have for tech-related changes that might make life better in the coming years?

The most significant change could be summarized with the slogan 'protocols, not platforms', as Mike Masnick argued last year. The idea is that instead of depending on a specific social media application to connect with friends and colleagues, people could use the application of their choice and communicate through a common messaging standard. This makes it more difficult for platforms to shape discourse with algorithms and to monetize it through tracking and advertising.
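
As a rough illustration of what 'protocols, not platforms' means in practice, here is a minimal sketch in Python. It builds a message in the shape of the W3C ActivityStreams vocabulary (the format underlying the ActivityPub standard); the user addresses and domains are placeholders, not real services.

```python
import json

# A message expressed in a shared, open format (modelled on the W3C
# ActivityStreams vocabulary used by ActivityPub), rather than in one
# platform's proprietary API. The actor and recipient URLs below are
# placeholders.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://another.example/users/bob"],
    "object": {
        "type": "Note",
        "content": "Hello from any client that speaks the protocol.",
    },
}

# Any application that understands the protocol can produce or consume
# this payload; the user is not locked into a single platform's app.
print(json.dumps(activity, indent=2))
```

Because the format is shared, Alice and Bob can each choose their own client and server and still converse, which is exactly the property that weakens a platform's algorithmic and advertising grip on the conversation.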

The current structure of dialogue and media privileges extreme and provocative content, which tends to polarize society and make it more difficult to reach consensus on social issues. Discourse that is more cooperative and creative enables constructive responses to the pressing issues of the day - including, but not limited to, equity, the environment, prejudice, and policing - to be adopted society-wide. People are more likely to seek common ground when they are allowed to manage their own communication.

With common communications protocols, technological *solutions* to pressing issues will begin to emerge. For example, the cost of social and health care support is significantly reduced with electronic transactions, just as has been the case in finance. Common protocols also enable greater security, through mechanisms such as zero-knowledge proofs. These allow better insight into the effectiveness of social programs, and enable governments and critics to evaluate innovation on more than merely financial or economic criteria, something pundits like Umair Haque have argued is necessary to respond to broader social issues.
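
To make the zero-knowledge idea concrete, here is a toy sketch of the classic Schnorr identification protocol in Python. The parameters are far too small to be secure and the whole exchange runs in one process; it is purely illustrative of how a prover can demonstrate knowledge of a secret without revealing it.

```python
import secrets

# Toy Schnorr identification protocol. Real deployments use very large
# groups and a hash-derived (Fiat-Shamir) challenge; these tiny numbers
# are for illustration only.
p = 23          # small prime modulus (insecure, illustrative)
g = 5           # generator of the multiplicative group mod p
q = p - 1       # order of g

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public key: y = g^x mod p

# Commitment: the prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# Challenge: the verifier replies with a random c.
c = secrets.randbelow(q)

# Response: the prover sends s = r + c*x mod q. Because r is random,
# s by itself reveals nothing about x.
s = (r + c * x) % q

# Verification: g^s must equal t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Proof accepted: the verifier learns that the prover knows x, not x itself.")
```

The same pattern - proving a claim about data without disclosing the data - is what would let an agency demonstrate, say, that a program reached its intended recipients without exposing those recipients' records.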

Our experience during the pandemic showed clearly how even modest improvements in interoperable communications can have a significant effect. Before the pandemic, there was no incentive to support widely accessible cross-platform video conferencing. Then we had Zoom, a simple tool everyone could use, and suddenly we could work from home, learn remotely, or host conferences online. Even after the end of the pandemic, having learned how convenient and efficient so many online services have become, we will be much less likely to commute to work, attend residence-based campuses, or fly to conferences. This makes the world of work, learning and commerce much more accessible to large populations who previously did not have the resources to participate, and greatly increases our efficiency and productivity. 


What worries you about the role of technology and technology companies in individuals’ lives in 2025?


My concern is that our technology choices will force us into mutually exclusive and competing factions. These factions may be defined politically, or may be defined by class or race, by economic status, or by power and control. Technological dystopia occurs when one faction uses technology against the other, perhaps by means of surveillance and spying, perhaps by means of manipulation and misinformation, or even by means of hacking and disruption. When technology divides us, it also disempowers us, as everything about us becomes subservient to the conflict. Our agency, our identity, our activities - all these become the means and mechanisms for one faction to fight the other.

In a sense, this is a worry about technological public spaces becoming private spaces. There is no application we can use, and no online space we can go to, that isn't owned by some entity and designed to further that entity's objectives, with the social goods of individual freedom and social cohesion taking a back seat. It's the sort of world where we no longer own things, but merely lease them, subject to the terms, conditions, and digital rights management of the technology company. It's a world in which there is no space for creativity or free expression outside the constraints of end-user licensing agreements, and no public space for discussion, decision, and action where the needs of society can prevail over private and corporate interests.

By 2025 we will have a clear idea whether we are slipping into technological dystopia. The more difficult we find it to interact on an equal basis with people from other countries, other cultures, other political beliefs, or even other platforms or social networks, the less likely we are to find common solutions to global problems. The more prevalent surveillance and control through technological means become, the less likely it is that less powerful people can redress the excesses of the more powerful. These trends will eventually manifest in the physical symptoms of dystopia: shortages, outages, civil unrest, open conflict.


Will AI mostly be used in ethical or questionable ways in the next decade? Why? What gives you the most hope? What worries you the most?

The problem with the application of ethical principles to artificial intelligence is that there is no common agreement about what those principles are. While it is common to assume there is some sort of unanimity about ethical principles, this unanimity is rarely broader than a single culture, profession, or social group. This is manifest in the ease with which we perpetrate unfairness, injustice, and even violence and death on other people. No nation is immune.

Compounding this is the fact that contemporary artificial intelligence is not based on principles or rules. Modern AI is based on applying mathematical functions to large collections of data. This type of processing is not easily shaped by ethical principles; there aren't 'good' or 'evil' mathematical functions, and the biases and prejudices in the data are not easily identified or prevented. Meanwhile, the application of AI is underdetermined by its output; the same prediction, for example, can be used to provide social support and assistance to a needy person, or to prevent that person from obtaining employment, insurance, or financial services.
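
The underdetermination point can be shown in a few lines of (entirely hypothetical) Python: the model's output is a single number, and nothing in that number dictates whether it triggers help or harm.

```python
# Hypothetical sketch: one model score, two opposite uses of it.
# The function, field names, and threshold are illustrative placeholders.

def predicted_hardship(person: dict) -> float:
    """Stand-in for a trained model's output in [0, 1]."""
    return person.get("score", 0.0)

def offer_social_support(person: dict) -> bool:
    # Benevolent use: high predicted hardship triggers assistance.
    return predicted_hardship(person) > 0.7

def screen_out_applicant(person: dict) -> bool:
    # Harmful use of the *same* prediction: high predicted hardship
    # becomes a reason to deny employment, insurance, or credit.
    return predicted_hardship(person) > 0.7

applicant = {"score": 0.82}
print(offer_social_support(applicant))   # True - offered assistance
print(screen_out_applicant(applicant))   # True - denied a service
```

The ethics lives in the surrounding policy, not in the function itself.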

Ultimately, our AI will be an extension of ourselves, and the ethics of our AI will be an extension of our own ethics. To the extent that we can build a more ethical society, whatever that means, we will build more ethical AI, even if only by providing our AI with the models and examples it needs in order to be able to distinguish right from wrong. I am hopeful that the magnification of the ethical consequences of our actions may lead us to be more mindful of them; I am fearful that they may not.


If you do not think it likely that quantum computing will evolve to assist in building ethical AI, why not? If you think that will be likely, why do you think so? 


This question is like asking "if you have a faster car, are you more likely to be driving it in the right direction?"

On the one hand, yes, because a faster car can get to the right place much more quickly, which means there are more routes it can take to get there, and you are better able to correct your course as you drive.

But on the other hand, no, because a faster car can take you much further away from the right place than you could have imagined with your slower car, beyond any hope of recovering your sense of direction and correcting your course.

 
