To the shoemakers and the ship-builders: on publicly-engaged philosophy and AI ethics
Professor John Tasioulas, University of Oxford[ref]John Tasioulas is Professor of Ethics and Legal Philosophy at the University of Oxford and inaugural Director of Oxford’s Institute for Ethics in AI. He has previously taught at Glasgow, King’s College London, and University College London, and held visiting appointments at the University of Chicago, Harvard University, and the University of Notre Dame. He has published extensively in moral, political, and legal philosophy, with a focus on human rights, punishment, international law, and the ethics of AI and digital technology. He is the author of On Justice and Mercy: Essays in Moral and Legal Philosophy (forthcoming, OUP) and the editor of The Cambridge Companion to the Philosophy of Law (CUP, 2020). He has acted as a consultant on human rights to the World Bank, served as a member of the International Advisory Board of the Panel for the Future of Science and Technology (STOA), European Parliament, and as a member of the Prime Minister of Greece’s High-Level Advisory Committee on AI. He is a Senior Fellow in Schmidt Sciences’ AI2050 program and is currently engaged in an AI2050 project on humanistic AI ethics together with Professor Hélène Landemore (Yale University).[/ref]
What is the contribution that the arts and humanities, including philosophy, can make to public engagement with the increasingly pervasive technology of artificial intelligence (AI)?
Choice
Perhaps the most fundamental contribution of the arts and humanities is to make vivid the fact that the development and deployment of AI technology is not a matter of destiny, but instead involves successive waves of highly consequential human choices. It is important to identify the choices, to frame them in the right way, and to raise the question: who gets to make them and how?
This is important because AI, and digital technology generally, has become the latest focus of the myth that social evolution is determined by independent variables over which we, as individuals or societies, are able to exert little control. A recent book by a leading industry figure employs the metaphor of a dehumanised force of nature to characterise the AI revolution – ‘a coming wave’ obeying a ‘single, seemingly immutable law’, according to which a new technology ‘gets cheaper and easier to use, and ultimately it proliferates, far and wide.’[ref]Suleyman, M. (2023). The Coming Wave: AI, Power and the 21st Century’s Greatest Dilemma. London: The Bodley Head.[/ref]
The humanities are vital to combatting this techno-determinism, which is profoundly disempowering for individuals and democratic publics alike. They can do so by reminding us of other technological developments that arose the day before yesterday – such as the harnessing of steam or nuclear power – and how their development and deployment for good or ill were always contingent on human choices, and therefore hostage to systems of value and to power structures that could have been otherwise[ref]Acemoglu, D. and Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity. London: Basic Books.[/ref].
Ethics
Given the necessity for choice, the second contribution the arts and humanities can make is to emphasise the inescapability of ethics in articulating and making these choices.
Ethics is inescapable because it concerns the ultimate values in which our choices are anchored, whether we realise it or not. These are values that define what it is to have a good life, and what we morally owe to others, including non-human animals and nature. The arts and humanities in general, and not just philosophy, engage directly with ethics – the ultimate ends of human life. And, in the context of AI, it is vital for them to resist various worrying contractions and distortions that the notion of ethics is apt to undergo. Thanks in part to the incursion of big tech into the AI ethics space, ‘ethics’ is often interpreted in unduly limited ways – for example, as a form of self-regulation lacking legal enforceability. It would be a hugely damaging capitulation to the distortions wrought by big tech to adopt its anaemic understanding of ethics as – at best – essentially self-regulation, or – at worst – corporate PR.
All forms of ‘regulation’ that might be proposed for AI, whether one’s self-regulation in deciding whether to use a social robot to keep one’s aged mother company, or the content of the social and legal norms that should govern the use of such robots, ultimately implicate choices that reflect ethical judgments about salient values and their prioritisation. Reclaiming a broad and foundational understanding of ethics in the AI domain, with radical implications for the re-ordering of social power, is an important task of the arts and humanities.
The dominant approach
The next question we might ask is: what is the shape of the ethical self-understanding that the arts and humanities can help to generate? The starting-point, I think, is to recognise that there is already a dominant approach in this area, that it has grave deficiencies, and that a key task for the humanities is to help us elaborate a more robust ethical outlook. I take the dominant approach to be that which is found most congenial by the powerful scientific, economic and governmental actors in this field.
Like anyone else, AI scientists are prone to the illusion that the intellectual tools at their disposal have a far greater problem-solving purchase than is actually warranted. This is a phenomenon that Plato diagnosed long ago with respect to the technical experts of his day, such as shoemakers and ship-builders. The mindset of scientists working in AI tends to be data-driven, to place great emphasis on optimisation as the core operation of rationality, and to prioritise formal and quantitative techniques.
Given that intellectual framework, it is little wonder that a leading AI scientist like Stuart Russell finds himself drawn to a preference-based utilitarianism as his overarching framework of ethics[ref]Russell, S. (2019). Human Compatible: AI and the Problem of Control. London: Allen Lane.[/ref]. According to Russell, the morally right thing to do is that which will maximise the fulfilment of human preferences. So, ethics is reduced to a technical exercise in prediction and optimisation – deciding which act or policy is likely to lead to the optimal fulfilment of human preferences.
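To see just how reductive this framing is, it helps to spell the schema out. In a bare decision-theoretic form (a schematic reconstruction for illustration, not Russell’s own formalism), preference-based utilitarianism instructs us to select:

\[ a^{*} = \arg\max_{a \in A} \sum_{i=1}^{n} \mathbb{E}\left[\, u_i(a) \,\right] \]

where $A$ is the set of available acts or policies and $u_i(a)$ measures how fully individual $i$’s preferences would be satisfied under act $a$. Everything ethically contested – whose preferences count, how they are to be measured, and whether they deserve satisfaction at all – is buried in the specification of $u_i$.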
But this view of ethics is, of course, open to serious challenge. Its concern with aggregating preferences threatens to override important rights that erect strong barriers to what can be done to individuals. And that’s even before we observe that human preferences may themselves be infected with racist, sexist or other prejudices. Ethics operates in the crucial space of reflection on what our preferences should be, a vital consideration that makes a belated appearance in the last few pages of Russell’s book. It does not take those preferences as ultimate determinants of value.
This crude, preference-based utilitarianism also exerts considerable power as an ideology among leading economic and governmental actors. This is less easy to see, because the doctrine has been modified by positing wealth-maximisation as the more readily measurable proxy for preference-satisfaction. Hence the tendency of GDP to hijack governmental decision-making around economically consequential technologies such as AI, with the consequent sidelining of values that are not readily captured by market demand. Hence, also, the legitimation of profit-maximisation by corporations as the most effective institutional means to societal wealth-maximisation.
The three Ps: pluralism, processes and participation
The kind of ethics we should hope the arts and humanities steer us towards is one that ameliorates and transcends the limitations of this dominant paradigm. This humanistic ethics would have at least the following three features: the three Ps.
Pluralism
This humanistic ethics would emphasise the irreducible plurality of values, both in terms of the elements of human well-being and the core components of morality. This pluralism calls into question the availability of some optimising function for determining what is all-things-considered the right thing to do. It also undermines the facile assumption that the key to the ethics of AI will be found in one single master-concept, whether that be safety, trustworthiness, human rights or something else. How could human rights be the overarching framework for AI ethics when, for example, AI has a serious environmental impact that cannot be exclusively cashed out in terms of its bearing on anthropocentric concerns? And what about those human values to which we do not think of ourselves as having a right but which are nonetheless important, such as mercy, kindness, or solidarity? Nor can trustworthiness be the master value: it is at best parasitic on compliance with more basic values, and hence cannot displace the need to investigate those values. Equally, safety is an impoverished comprehensive rubric for AI ethics, since the concerns of the latter go well beyond protecting life and limb, even if highly speculative worries about the ‘existential risk’ to humanity posed by AI systems have tended to draw attention away from this fact.
Acknowledging the existence of a plurality of values, with their nuanced relations and messy conflicts, heightens the need for choice, and accentuates the question of whose decision will prevail. This sensitive exploration of a plurality of values and their interactions is what the arts and humanities, at their best, offer us. I say at their best because, of course, they often fail in this task. My own discipline, philosophy, has itself in recent years often propagated the highly systematising and formal approach to ethics that I have condemned. Philosophers have much to learn from closer engagement with other humanities disciplines, like classics and history, and with the arts, especially fiction, which often penetrate to the heart of issues like the significance of personal relations, or the nature of human emotion, in ways that the more discursive methods of philosophy cannot. Exemplary here, of course, has been the work of philosophers like Martha Nussbaum, Charles Taylor, and Bernard Williams.
Processes, not just outcomes
Of course, we want AI to achieve valuable social goals, such as improving access to education, justice and health care, and to do so in an effective and efficient way. The COVID-19 pandemic cast into sharp relief the question of which outcomes AI is being used to pursue. Did it help us, for example, reduce the need for our fellow citizens to undertake hazardous labour in the delivery of vital services, or was it primarily engaged in profit-generating activities, like vacuuming up people’s attention online and encouraging crass consumerism, that have little redeeming social value? The second feature of a humanistic approach to ethics is to underline that what we rightly care about is not just the value of the outcomes that AI can deliver, but the processes through which it does so.
Compare the use of AI in cancer diagnosis with its use in the sentencing of criminals. When it comes to cancer, what may be all-important is getting the most accurate diagnosis, and it is largely a matter of indifference whether this comes through the use of an AI diagnostic tool or the exercise of human judgement. In criminal sentencing, however, there is a powerful intuition that being sentenced by a robot judge – even if the sentence is likely to be less biased or more consistent than one rendered by a human counterpart – means sacrificing important values, such as accountability and empathy, that attach to the process of decision-making.
This point is familiar, of course, in relation to such process values as transparency, procedural fairness and explainability. But it goes even deeper, because of the dread many understandably feel when contemplating a dehumanised world in which decisions that affect our deepest interests have, at least as their proximate makers, automated systems that can neither be held directly accountable nor empathise with the plight of the humans subject to their decisions[ref]Campbell, J. (2020). Causation in Psychology. Harvard University Press, ch.3.[/ref], [ref]Tasioulas, J. (2023). ‘The Rule of Algorithm and the Rule of Law’, in C. Bezemek, M. Potacs, A. Somek (eds), Vienna Lectures on Legal Philosophy vol 3: Legal Reasoning. Oxford: Hart Publishing, pp.17-39.[/ref].
Participation
The third feature relates to the importance of participation, whether as an individual or as part of a group of self-governing democratic citizens, in realising our ethical values. At the level of individual well-being, this takes the focus away from theories that equate human well-being with some end-state, such as pleasure or preference-satisfaction, that could in principle be secured by a process in which the person who enjoys them is entirely passive – for example, by pumping vast quantities of an antidepressant drug into the water supply. Contrary to this view, the exercise of our distinctive human faculties for reason, communication and social connection lies at the core of human well-being. This has immense relevance for assessing the significance of the increased deployment of AI systems in place of human endeavour.
One of the most important sites of participation in constructing a good life, in modern societies, is the workplace. According to a McKinsey study, around 30% of all work activities in 60% of occupations are capable of being automated[ref]McKinsey Global Institute (2017). A Future That Works: Automation, Employment, and Productivity. [online] Available at: https://www.mckinsey.com/featured-insights/digital-disruption/harnessing-automation-for-a-future-that-works/de-de.[/ref]. Should we accept the idea that the large-scale elimination of job opportunities, due to automation, can be compensated for by the benefits that automation brings? The answer depends on whether the participatory self-fulfilment of work can, any time soon, be feasibly replaced by other activities, such as art, friendship, play or religion. If it cannot, addressing the problem with a mechanism like universal basic income, which involves the passive receipt of a benefit, will not be enough[ref]Tasioulas, J. (2024). ‘Work and Play in the Shadow of AI’, in D. Edmonds (ed.), AI Morality. Oxford: Oxford University Press.[/ref].
Similarly, we value citizen participation as part of democratic self-government. And, arguably, we do so not just because of the instrumental benefits of democratic decision-making in reaching better decisions (‘the wisdom of crowds’ factor), including better decisions regarding the fair distribution of the benefits of new technologies. The need for democratic control, and the diffusion of power more generally, to ensure the fair distribution of the wealth resultant from new technologies is an important theme in Acemoglu and Johnson’s Power and Progress. But we also value participatory decision-making processes because they affirm the status of citizens as free and equal members of the community. This is an essential plank in the defence against the tendency of AI to be co-opted by technocratic modes of decision-making, which can erode democratic values by seeking to convert matters of political judgement into questions of purely technical expertise. In this process, the rhetoric of safety and existential risk has played a significant role in constraining democratic control over AI by encouraging an expert-based discourse that sidelines the experience and perspectives of ordinary citizens.
At present, much of the culture in which AI is embedded is distinctly technocratic, with decisions regarding the ‘values’ encoded in AI applications being taken by elites within the corporate or bureaucratic sectors, often largely shielded from vigorous democratic control. Indeed, a small group of tech giants accounts for the lion’s share of investment in AI research, dictating its overall direction. Meanwhile, AI-enabled social media poses risks to the quality of public deliberation by promoting the spread of disinformation, aggravating political polarisation, and so on. Similarly, the use of AI as part of corporate and governmental attempts to monitor and manipulate individuals undermines privacy and threatens the exercise of basic liberties, effectively discouraging citizen participation in democratic politics. On the positive side, we need to think seriously about how AI and digital technology more generally can enable, rather than hinder and distort, democratic participation, not just in the sphere of formal law-making, but also in corporate governance and beyond[ref]Landemore, H. (2024). ‘Can Artificial Intelligence Bring Deliberation to the Masses?’, in R. Chang and A. Srinivasan (eds), Conversations in Philosophy, Law, and Politics. Oxford: Oxford University Press.[/ref].
Democratising the arts and humanities – The Lyceum Project
If the arts and humanities are to advance a humanistic ethical agenda in AI of the kind I have described, they must themselves be democratised. In a democracy, it is not enough to give people a vote while effectively excluding them from meaningful deliberation; and if they are to deliberate as equals, they must have access to the key sites where basic ideas about justice and the good are worked out.
The arts and humanities are prominent among those sites. Hence the wisdom of Article 27 of the Universal Declaration of Human Rights, which includes a right to participation in science and culture. We can see manifestations of this right, enabled by digital technology, in the resurgent citizen science movement[ref]Vayena, E. and Tasioulas, J. (2015). ‘We the Scientists’: a Human Right to Citizen Science. Philosophy & Technology 28, pp. 479-485.[/ref]. But we must also address the exclusion of our fellow citizens from the domains of artistic creativity and humanistic enquiry. This means that work on AI within the arts and humanities should not merely be accessible to a wider public, nor should it merely model civil and rational debate. It should also afford our fellow citizens the opportunity to articulate and critically assess their own views in dialogue with others.
Lest all this sound unduly abstract, I should add something about one of the more notable attempts I have made, as Director of the University of Oxford’s Institute for Ethics in AI, to practise what I preach.
On 20 June 2024, in partnership with Stanford University, ‘Demokritos’ (Greece’s National Centre for Scientific Research), and the World Human Forum, a Greek-based think tank, the Institute held the Lyceum Project, a one-day public event: ‘AI Ethics with Aristotle’. The event took place at the Athens Conservatory, a stone’s throw from the original site of Aristotle’s school, the Lyceum. Around 500 people attended, from different walks of life, to hear talks and panel discussions featuring early-career and established philosophers, entrepreneurs, policymakers and computer scientists, as well as a speech by the Greek prime minister, Kyriakos Mitsotakis. As part of the event, the ancient historian Josiah Ober and I co-wrote a white paper on how an Aristotelian framework affords the best approach to the ethics of AI, illuminating such matters as work, democracy and global tech regulation[ref]Ober, J. and Tasioulas, J. (2024). ‘The Lyceum Project: AI Ethics with Aristotle’. SSRN Electronic Journal. doi: https://doi.org/10.2139/ssrn.4879572.[/ref]. The evening ended with a youth dialogue on the hallowed site of the ancient Lyceum, followed by an inspiring performance of Sasha Waltz’s ‘In C’ by the Conservatory’s graduating class of dancers. These two concluding events served as powerful reminders that the future is in the hands of the younger generation, and that humans have capacities for grace, beauty, and emotion far beyond any automated system[ref]Ox.ac.uk. (2024). The Lyceum Project: AI Ethics with Aristotle | Ethics in AI. [online] Available at: https://www.oxford-aiethics.ox.ac.uk/lyceum-project-ai-ethics-aristotle-0.[/ref].
This event exemplified, for me, how philosophers might devise formats that foster inclusive and rigorous engagement with the largest questions confronting democratic publics. And it was gratifying to see that the Greek prime minister’s High-Level Advisory Committee on Artificial Intelligence, on which I was privileged to serve alongside another philosopher, Vasso Kindi, subsequently recommended an annual global AI ethics summit to be held in Athens, one drawing on the rich traditions of philosophical inquiry[ref]Special Secretariat of Foresight. (2024). A Blueprint for Greece’s AI Transformation – Special Secretariat of Foresight. [online] Available at: https://foresight.gov.gr/en/studies/A-Blueprint-for-Greece-s-AI-Transformation/.[/ref]. In a world in which a small number of powerful actors increasingly dominate the public discourse on questions of great human significance, it is important for philosophers to seize such opportunities for different voices to be heard.
Notes
This chapter is a shortened and updated version of the following blog post: John Tasioulas, ‘The role of the arts and humanities in thinking about artificial intelligence (AI)’.