Moral agency: we can’t talk about it but we can’t be silent about it either

Some thoughts about responsible AI.

“Can machines be held responsible for actions that affect human beings? What limitations, if any, should guide autonomous decision making by artificial intelligence systems, computers, or robots? Is it possible to program such mechanisms with an appropriate sense of right and wrong? What moral responsibilities would these machines have to us, and what responsibilities might we have to such ethically minded machines?” (David Gunkel, The Machine Question).

Is a so-called ‘autonomous’ car responsible for the damage it causes in an accident? Is the physician responsible for the wrong medication he prescribed to his patient when he followed (or did not follow) the advice of his medical expert system? Should we allow the use of autonomous unmanned flying weapon systems (aka killer robots), and who is responsible for the actions they take on the battlefield?

In answer to these questions about the moral agency of artefacts, Bryson (2018) concludes that constructing AI systems as either moral agents or moral patients is possible. However, she concludes, neither is desirable. The reason for her opinion is a rather formal one: “we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity.”

She concludes with the rather cryptic: “We are therefore obliged not to build AI we are obliged to.”

What does Bryson mean by saying that constructing AI systems as moral agents is possible? How is this possible, and what does she mean by ‘constructing systems as moral agents’? Is that meant in a technical sense (machine ethics) or in a social sense? Or both? (Some people talk to trees and listen to what they have to say. I can imagine a community in which this is common practice.)

A term that circulates in some ethical arenas is ‘responsible AI’. I have seen EU reports in which AI systems are discussed as a kind of legal subject. I worked for almost forty years on the development of natural language dialogue systems and embodied conversational agents, but I never saw them as a kind of person, as moral subjects. Virtual humans are really not humans. ‘Social robots’ can press our ‘Darwinian buttons’, but does that make them socially intelligent? What is going on? People seem to be confused by language. Can you study Artificial Intelligence (take Russell and Norvig, still one of the best books on the subject) and believe that we can make a person by means of AI?

In the 80s Pugwash, a modern sort of Luddites, objected against the development of machines that could produce natural speech. They found it immoral because it would mislead people: as if a machine knows and means what it says. Was what I was doing immoral? Was I involved in a project aiming at immoral creatures? Working on human-computer interaction you know that talking machines only work properly when the user experiences that the machine ‘means what it says’. AI pretends to be intelligent. Without pretense it wouldn’t work.

In a recent tweet Joanna Bryson said about the term ‘responsible AI’: “I never usually use the term (I prefer “responsible use of AI”), but if you need to… (1 minute read)”. She refers to a blog post of hers, in which she writes:

I actually hate the term “responsible AI” because obviously it sounds to most people like the AI itself is responsible. But if you’re going to use the phrase anyway…

“Responsible AI is not a subset of AI. Responsible AI is a set of practices that ensure that we can maintain human responsibility for our endeavours despite the fact we have introduced technologies that might allow us to obfuscate lines of accountability.”

I completely agree. (Note: P.P. Verbeek says technology blurs the borders of what is human.)

But I asked: what about ‘intelligent machines’ then? Isn’t that also a term that we shouldn’t use? She said:

“If we use the definition of “intelligence” developed in the 19C to answer “which animals are more intelligent”, then intelligence is generating action from context (doing the right thing at the right time.) So thermostats & plants are intelligent by this def.”
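Bryson’s simple definition of intelligence, “generating action from context”, can be made concrete with a toy sketch (the function name, setpoint and thresholds are my own, purely illustrative):

```python
# A minimal illustration of intelligence as "generating action from
# context" (doing the right thing at the right time). The thermostat
# maps context (the measured temperature) to an action, nothing more.

def thermostat(measured_temp: float, setpoint: float = 20.0) -> str:
    """Generate an action from context, with a small dead band."""
    if measured_temp < setpoint - 0.5:
        return "heating_on"
    if measured_temp > setpoint + 0.5:
        return "heating_off"
    return "idle"

print(thermostat(17.0))  # heating_on
print(thermostat(23.0))  # heating_off
```

By this definition the thermostat is intelligent; whether it is thereby responsible is exactly the question raised below.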

Is a thermostat responsible for what it does? I asked. In my opinion it is not possible, in reality, to separate intelligent acting from responsible acting. Of course we can distinguish the intelligent and the moral aspects of acts. But in reality they do not occur alone: if you have intelligence you have morality as well, and vice versa.

In answer to that Bryson replied:

“Yes. Cf. the papers. It’s not intelligence we care about, it’s not agency, it’s MORAL agency, who is responsible. If you make “intelligence” do too much duty, you can’t talk about the components of responsibility. Longer:” [followed by a reference to her 2018 paper, which I tried to understand]

“But anyway, if when you say “intelligence” you mean “moral agent” then you can’t separate intelligence from responsibility by definition. Again, that’s not getting into moral trouble, that’s just incoherence. That’s why I advised using the simple definition.”

“Sorry, no. As soon as we separate responsibility from humans we run into all kinds of problems, whether you want to call them moral or not. Here’s another paper if you’re done with the previous one”

[a reference to a paper from 2017: Of, for, and by the people: the legal lacuna of synthetic persons.]

David Gunkel added that he liked Bryson’s answer and gave another reference to read.

“Correct. And in the interest of carefully parsing the terminology, I can recommend Paul Ricœur’s remarkably insightful essay on the subject: “The Concept of Responsibility: An Essay in Semantic Analysis.” “

So it’s all about terminology and definitions. It’s all about language use, I thought.

“… doing the right thing at the right time”. What does that mean when we talk about what a thermostat does? Who decides what is “the right thing”?

Let me see. I started to read the recommended papers.

These are my thoughts about ‘intelligent’ and ‘moral’ artefacts.

Ethics is about those situations where people ask each other: why do you do this? This is the question about the values that count and the way we weigh the values involved when we decide to act. People use norms related to an order of values. Values and norms are socially accepted moral measures, e.g.: be honest, be fair, be reliable, be respectful, have compassion towards others.

Descriptive ethics describes in what situations people ask each other why they do the things they do, and when the response leads to acceptance or not. There is no instance outside a community that decides what values the members of a community will care for. This doesn’t deny the fact that some people in some communities say that there is some authority that tells them what is right or wrong. Religious as well as theological opinions are part of ethical practices. Some people have power over others.

There are conflicts. Not all members of a community agree on all values, on what is best to do. Not all communities agree that ending someone’s life is not a good thing to do.

Bryson writes that her study concerns both normative and descriptive ethics.

Normative ethics is the study of ethical behavior, and is the branch of philosophical ethics that investigates the questions that arise regarding how one ought to act, in a moral sense.

Normative ethics is distinct from descriptive ethics, as the latter is an empirical investigation of people’s moral beliefs. In this context normative ethics is sometimes called prescriptive, as opposed to descriptive ethics.

So she not only discusses the conditions for ethics as a social construct, she also presents her own opinion about what she values, as a personal contribution to that social construct.

Regarding responsibility we should at least distinguish two notions: retrospective and prospective responsibility, respectively called ex post and ex ante responsibility by Dieter Birnbacher (2001). The first I would call accountability.

Responsibility is a social category. You are responsive towards others who ask you why. People hold other people responsible for what they do. If someone wants to be taken seriously by others as a person, he should take responsibility for his actions. Acting responsibly is something that people strive for. Sometimes we do not succeed in that. That doesn’t make us less moral.

When we ask someone why he does what he does, we take him for someone who can in principle give a reason for his action (we take him as a person in a Kantian sense). We take it that what he does is not caused by some external cause, because he is mentally ill for example, or forced by some external force. The why question is about his or her behavior insofar as it is under his or her control. That is a difficult notion. To what extent are we autonomous actors? Isn’t what we do always determined by physical, social and cultural determinants? Yes, but we can also have influence on these determinants. We can take a stance towards rituals and cultural norms. But there are limits. We can’t be at two different places at the same time.

“The conditions that must be met for ex ante responsibility to be imputed to an agent or group of agents include once again that the necessary conditions of voluntary action must be met. Further, the relevant agents must have the capacity to do what is called for. They must have the intelligence to understand the future consequences of their actions and have the requisite information.” (Stan van Hooft in Ricoeur on Responsibility)

We should be careful not to confuse having a free will with having freedom of choice. Offering someone a choice between this or that doesn’t make him or her free.

Of course we are never completely sure about the consequences of our decisions and the things we do. We should do our best. This is a moral expression. Why should we do our best? How do we know what is good or best? We might be able to tell what a good painter is. But what is a good man? Isn’t that an abuse of language, as Wittgenstein convincingly shows in his Lecture on Ethics?

We consider someone a good painter because of the aesthetics of his paintings. It doesn’t mean all his works are as good as the best. We call a man good when his deeds are good. Aesthetics and ethics are alike: there is no mathematics for it.

Being responsible is a virtue. It means taking care of valuable things. Other persons in particular. And oneself, the most familiar other (cf. Paul Ricoeur). Responsibility is not something we can hand over to someone else. It sticks like chewing gum. A care robot cannot take over our responsibility for the patient.

Joanna Bryson (2018) argues that “the core of all ethics is a negotiated or discovered equilibrium that creates and perpetuates a society”.

“The descriptive argument of this article is that integrating a new capacity like artificial intelligence (AI) into our moral systems is an act of normative, not descriptive, ethics. Contrary to the claims of much previous philosophy of AI ethics (see e.g. Gunkel and Bryson 2014), there is no necessary or predetermined position for AI in our society. This is because both AI and ethical frameworks are artefacts of our societies, and therefore subject to human control.”


“the moral status of robots and other AI systems is a choice, not a necessity.”

“The responsibility for any moral action taken by an artefact should therefore be attributed to its owner or operator, or in case of malfunctions to its manufacturer, just as with conventional artefacts.”

I think I agree with her on this.

“I intend to use here only the formal philosophical term moral agent to mean “something deemed responsible by a society for its actions,” only moral patient to mean “something a society deems itself responsible for preserving the well being of,” and moral subjects to be all moral agents or moral patients recognised by a society.”

Now, about intelligent machines.

“When I talk about something as intelligent, I mean only that it can detect contexts appropriate for expressing one of an available suite of actions.” (Bryson)

Is a machine able to detect contexts appropriate for expressing some action? What do we mean by that?

The problem is language use. We use ‘intelligence’ in multiple senses. We call an intelligent machine ‘intelligent’, but not in the same sense as we use the word in ‘intelligent man’. Just like a formal ‘language’ is a different ‘language’ than a natural ‘language’ or a programming ‘language’. There is a subordinate relation between these uses. (It is not ambiguity.)

When we say that a falling stone computes the speed with which it hits the ground, we realize that this is a different language use than when we say that I compute the speed of the stone when it hits the ground. But we can use the falling stone to compute the outcome of the computation I make, because and insofar as the mathematics I use accords with the physical laws of the falling stone. So it makes sense then to say that the falling stone computes the outcome of the mathematical formula (it ‘sort of’ computes, D.C. Dennett would say). But there is a whole historical intellectual development between the two senses (see, on the development of the technical idea, Jan Hollak: Hegel, Marx en de cybernetica, or Van Causa sui tot automatie), and a complex historical and technological development before we could say, and make people believe, that a computer thinks and does something useful by doing that. Making machines work is not only a technical enterprise, it is a social (and moral) enterprise too. We cannot design acceptance in a technical way.

We should not forget what is behind this language use. A logical circuit is a logical circuit only because we built it so that it can be seen as representing a Boolean algebraic relation between input and output signals. The logical ‘if then’ circuit is the objective form of the self-reflection of the experimental stance we take towards nature. Technology is the other side of experimental science. When experiments show that if I do A, then nature does B, then if I want B I can bring it about by doing A.
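The ‘if then’ circuit can be spelled out as the Boolean function we read into it (a sketch of my own; the function name is just a label):

```python
# The 'if-then' (material implication) circuit, read as a Boolean
# function. The physical circuit only 'is' this truth table because
# we built it so that it can be seen that way.

def implies(a: bool, b: bool) -> bool:
    """Material implication: 'if a then b' is false only when a holds and b does not."""
    return (not a) or b

# The full truth table of the circuit's readable behaviour:
for a in (False, True):
    for b in (False, True):
        print(a, b, implies(a, b))
```

The circuit itself only switches voltages; the truth table is our interpretation of it.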

(Reading D.C. Dennett’s explanation of how a computer works, I wonder if he sees how it ‘works’. You simply cannot explain how it works the way he does, by spelling out how a Turing machine or register machine functions.

When Jacques Derrida writes “I thought I would never manage to submit to the rules of a machine that basically I understand nothing about. I know how to make it work (more or less) but I don’t know how it works”, he expresses awareness of the two different senses of ‘work’ when we describe what a machine does. (From: Paper Machine, Stanford University Press, 2005.)

Wittgenstein said that we can’t explain the difference between robot and man, and that this is not because there is no difference, but because of the limitation of our language. “Tell me what a machine can’t do and I will make it do it,” people say when you tell them machines can’t do everything that humans do.)

“Artificial intelligence (AI) has proven to be superior to human decision-making in certain areas.” (Henrik Skaug Sætra, 2020, in a paper about algorithmic governance). Similar expressions about the performance of intelligent computers compared with human performance can be found in many scientific papers as well as in popular media. Computers are better at recognizing emotions than humans, if we believe the newspapers.

If we say that a machine can perform some task better than humans, we should realize the multiple senses in language use. Machine performance isn’t the same as human performance. When we say a machine washes clothes better than we can, we should be aware that we use the machine, our invention, to wash our clothes. It is no different from saying that washing clothes with two hands and warm flowing water is better than with one hand in a bucket of cold water, and then that the warm water washes our clothes better than we do. Of course we make clever use of properties that we discovered nature affords. That is what technology always is.

Computing is what is done mechanically by nature. Therefore every task that we want to be done by a machine itself must first be modelled in a computational way. Before we could build a machine that performs mathematics we had to mathematize the doing of mathematical calculations (meta-mathematics). (The falling stone can only perform one type of computation for us, not all possible ones. We need a Turing machine for that. And, as Luciano Floridi rightly observes, we need to see functions as arguments of functions.)
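Floridi’s point about functions as arguments of functions marks the step from a special-purpose computer, like the falling stone, to a universal one. A minimal sketch (function names are my own):

```python
# The falling stone 'computes' exactly one function for us: the fall
# time for a given height. A universal machine instead takes the
# function itself as input: functions become arguments of functions.

def fall_time(height: float, g: float = 9.81) -> float:
    """The one fixed computation a falling stone performs for us."""
    return (2 * height / g) ** 0.5

def apply_to(f, x):
    """A higher-order step: the function f is itself treated as data."""
    return f(x)

print(apply_to(fall_time, 20.0))        # the stone's computation
print(apply_to(lambda h: h * 2, 20.0))  # any other computation we hand it
```

The stone is stuck with `fall_time`; `apply_to` will run whatever function we pass it, which is the essence of programmability.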

The same holds for ‘behaviors’. ‘Socially behaving’ robots are only possible when we have identified ‘behaviors’, abstracted from the person behaving. (See the introduction of Erving Goffman, Interaction Ritual: Essays on Face-to-Face Behavior (1967).)

What are the minimal requirements for an agent so that he can take part in a social encounter “and have an orderly traffic of behavior emerge”? Goffman asks in Replies and Responses. Reading Goffman you never know whether he is talking about humans or artificial agents. It simply doesn’t matter, because of the abstract subject he focuses on:

Not then men and their moments. Rather moments and their men.

Just as we abstract words and sentences and propositions from people speaking and expressing themselves and their opinions, we abstract ‘behaviors’ from human bodies and project them onto robots. And it works, when people recognize the behaviors as meaningful. Read Frans de Waal’s report about his experience when he encountered Hiroshi Ishiguro’s humanoids (Bezoek aan de Griezelvallei [Visit to the Uncanny Valley], Psychologie Magazine). By simulating the way apes and children turn their heads and gaze when someone points at something, the same emotion is generated. Technology makes clever use of natural laws, and not only of the laws of physics.

The speaking robot is not speaking in the same sense as the person speaking. If we don’t see that, we run into questions like: who is speaking? As if there is always a speaker when sounds are heard that sound like someone speaking. It’s just the illusion of someone’s presence. An image. (Read Heidegger’s essay Hebel, der Hausfreund about what the ‘Sprachmaschine’ tells us about what has happened with Sprache.) Research in persuasive technology has shown that a picture of someone ‘looking at you’ may already have an influence on your behavior.

There was a guy in the US who said a lot of things on Twitter that were simply not true. He used a basic quality of a human invention, language: the possibility to say things that do not refer at all. Talking about “How to do things with words”: we use language as providing building blocks for technically creating virtual worlds.

AI works, talking machines work, they are useful, because they appear to talk, to compute, to be intelligent. If we don’t take them seriously they stop being useful.

Bryson mentions an ethical principle that says that “robots should not have deceptive appearance—they should not fool people into thinking they are similar to empathy deserving moral patients.” But how can we prevent some people from forgetting that it’s just a machine? The social robot’s working is based on deception.

The thermostat does not decide in the same sense as a human being decides. The social intelligence of the social care robot that refuses to set the thermostat higher after a request from an elderly person, with the words “what about going for a walk if you’re cold?”; the search engine that filters the news based on the user profile; the autonomous weapon system that ‘decides’ to fire when detecting a bomb shell (after deliberating the appropriateness of such an action, the chances of collateral damage, etc.): they have all been programmed by us to act this way. The fact that they have been made to ‘learn’ from experiences, and that it becomes hard to predict what they will do in particular situations, doesn’t mean they are more human, or more moral.

Notice that these ‘agents’ perform ‘moral actions’ in the sense defined by Bryson, i.e. they satisfy the following:

  1. There is a particular behavioral context that affords more than one possible action for that agent.
  2. At least one available action is considered by a society to be more socially beneficial than the other options, and
  3. The agent is able to recognize which action is socially beneficial—or at least socially sanctioned—and act on this information.
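These three conditions can be caricatured in code. In this sketch the context, the afforded actions and the ‘socially sanctioned’ choice are all invented for illustration; note that an agent satisfying all three conditions still settles nothing about who is responsible:

```python
# A toy agent satisfying Bryson's three conditions for a 'moral action':
# (1) the context affords more than one possible action,
# (2) society deems one action more socially beneficial, and
# (3) the agent can recognise that action and act on this information.

AFFORDED_ACTIONS = {"cold_room": ["raise_heating", "suggest_a_walk"]}
# An invented stand-in for what 'society' sanctions in each context:
SOCIALLY_SANCTIONED = {"cold_room": "suggest_a_walk"}

def act(context: str) -> str:
    options = AFFORDED_ACTIONS[context]
    assert len(options) > 1                    # condition 1: real alternatives
    preferred = SOCIALLY_SANCTIONED[context]   # condition 2: a sanctioned option
    # condition 3: recognise and act on the sanctioned option
    return preferred if preferred in options else options[0]

print(act("cold_room"))  # suggest_a_walk
```

The sanction table is hard-coded by us, which is precisely the point: the ‘moral action’ is ours before it is the machine’s.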

Bryson claims that we “can certainly build AI to take moral actions”.

“But this in itself does not determine moral agency. The question is, who would be responsible for those actions? An agent that takes a moral action is not necessarily the moral agent—not necessarily the or even a locus of responsibility for that action. A robot, a child, a pet, even a plant or the wind might be an agent that alters some aspect of an environment. Children, pets, and robots may know they could have done ‘better.’ We can expect the assignment of responsibility for moral acts by intelligent artefacts to be similarly subject to debate and variation. Moral responsibility is only attributed to those a moral community has recognised as being in a position of responsibility.”

“Should we produce a product to be a moral agent?” Bryson asks.

“Would it be moral for us to construct a machine that would of its own volition choose any but the most moral action?”

Who is ‘we’? We, the society, I guess. But how do we decide? Is this really a decision we make? I believe we find ourselves in a historical situation created by our political economy, driven by technology that produces artifacts that are considered more and more ‘autonomous’. Moreover, it is not the designer who determines how his product is received by society. Considering an artifact (a car, for example) as ‘autonomous’ is the result of a specific stance we take towards the artifact. We are responsible for taking that stance and for its consequences. A practitioner is never forced to follow the advice of a medical expert system, but he should at least consider the advice it gives and motivate why he did or did not follow it in a specific case. The system, after all, doesn’t know the patient. Medical science is not about a specific individual, “not about Socrates” (Aristotle, Ethics). It’s about diseases, categories.

I very much agree with Bryson, and with Dennett, that the danger of AI does not come from AI itself but from the people who ascribe too much power to the machines.

What holds for ‘responsible’ machines holds for ‘intelligent’ machines as well. The terms are misleading. It’s what we mean and do, not the words we use, that counts.

What is a word?

A word is “essentially historical” (Maarten Janssen and Albert Visser in Some words on word, 2002). “A use of a word presupposes a historical connection to earlier uses of a word.” The different senses of the uses of the word ‘intelligent’ or ‘language’ are historically related. You cannot separate the intelligence of the machine from the intelligence of the user/designer of the machine without destroying the meaning of ‘intelligent’. But that is what people do when they compare computers with humans. As if the machine exists as machine outside the relation.

What does the ‘use’ of a word involve? In a footnote, they give an important comment on this notion of ‘use’ of a word, especially relevant for those who are looking for the ‘author’ of synthetic speech.

“The notion of use should be taken here cum grano salis.”

“A man is taking money from an ATM. He reads: “do you want to know your saldo?” Clearly the man is reading words. In fact the word you denotes the man. However, the machine is not uttering this question, neither are the programmers of the machine. They just brought it about that similar questions would be asked anytime the machine was used. Every time it’s a different question however, since addressee and question time vary. In such an example one would like to say: ‘the words get used without there being an utterer/a user’.” Indeed, there is no speaker. There is no act of speaking. Words are used here as tools, as images to bring about some effect, the same effect as is meant in the original, historical situation. The machine isn’t aware of this situation. A panel shows a map containing an arrow that says “you are here”. ‘You’ refers to the reader in front of the panel. How does the author know? It only works, and it is only meaningful, when the panel is placed on the spot that is marked on the map. The reader knows about this presupposition.

Facework (Goffman, Levinson, Levinas) is what I am doing after my retirement. After working almost forty years on artificial intelligence as a mathematician, I am writing an apology. But to answer the question why did you do this?, I have to be clear about what it is that I was doing. To clarify, we need to go back to the historical roots of AI: how did we come here? Then I realize we are just a drop of water in a historical stream that runs in the flow of time. What I can contribute now is to hold up this flow a bit, by sailing against the stream and listening to what the words have to say.

Bryson takes a rather technical stance towards ethics. She seems to see it as a technical project for society. Ethics can be designed. We can build moral agents, but we had better not do it, she says, because there is no coherent ethical system in which we can allow them as moral agents.

Ethics is not a choice we make.

“For as far as Levinas is concerned, we can never as such decide on who is and who is not a moral agent or moral patient – who has a face: we cannot choose our morality. Ethics is not a choice we make, but rather a relationship or space of meaning we already find ourselves in.” (Niklas Toivakainen in: Machines and the face of ethics)

I agree with Bryson that artificial agents are not moral agents in the sense humans are. But my stance is based on what it means to be a technology, on the technical idea itself: an outside objectivation of an abstract creation of our mind, a combination of natural forces (based not only on physics but also on biology) in which we express control over nature.

There is a lot of confusion about the metaphysical status of AI. This is caused by the fact that one does not see that words like ‘intelligence’ and ‘moral’ and ‘language’ are used with different senses in different contexts. The paradox of technology is that it works because of these multiple senses, because it transfers meaning from one (original) context of use to another (technical) context. See the ATM example above.

Moral technology: wovon man nicht sprechen kann, darüber kann man auch nicht schweigen (whereof one cannot speak, thereof one cannot be silent either).


Birnbacher, D. (2001) ‘Philosophical foundations of responsibility’, Responsibility: The many faces of a social phenomenon. ed. A. E. Auhagen and H.-W. Bierhoff, London: Routledge, 9-22.

Bryson, J.J. (2018). Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology (2018) 20:15–26.

Ricoeur, P. (2000) ‘The Concept of Responsibility: An Essay in Semantic Analysis’ in The Just, trans. D. Pellauer, Chicago: The University of Chicago Press.

Rieks op den Akker was a researcher and lecturer in artificial intelligence, mathematics and computer science at the University of Twente. He is retired.
