Could a large language model be conscious? – A review

If the machine were a conscious being, it would fight for its own truth to the death.

Will we soon witness a robot that flees after causing a fatal accident because it is (apparently) afraid of being caught and sentenced to a long prison term?

A robot that satisfies this description is taken to be aware of the situation it finds itself in. Why would it be afraid? Why does it not want to be imprisoned?

The witness who gives such a report will certainly answer that the robot doesn’t like being imprisoned. These robots are not only aware of the world, they are sentient beings as well. They can not only imagine different worlds, they also prefer some worlds over others. They value the world around them as well as their own being.

Do LLMs have consciousness? This is the question thoroughly analysed by David Chalmers in a recent essay in the Boston Review. He argues that “within the next decade, we may well have systems that are serious candidates for consciousness.”

What makes him believe so?

“What is or might be the evidence in favor of consciousness in a large language model, and what might be the evidence against it? That’s what I’ll be talking about here.”

The question whether LLMs or robots have consciousness (are conscious beings) is a tough one. The question is not whether currently existing robots have consciousness, but whether the technology offers the possibility of making robots that have consciousness in the future. This means that we enter the fantasy world of creative technological thinking. Thinking about this, we experience that the question touches the limits of thinking itself and calls it into question. We feel that the question brings up something that cannot be clearly stated in the language of the existing order.

This is why some say that thought can never be only argumentative, but must always be testifying and poetic. The problem tears thinking between mathematics and mysticism, between technology and fantasy.

But it’s not just a matter of language. It’s not a ‘language game’. It is a deadly serious issue who is responsible for the harm caused by our ‘autonomous’ intelligent instruments: cars, weapons, and so on. Some people say that if we attribute consciousness to robots and consider them accountable, we in fact protect the industry and the owners, the entities that are really responsible. It is a matter of power who decides in the end what language game we are playing. In the field of AI, different life worlds meet: the world of the producers, of the big companies, of the marketers, of the politicians, of the ethicists, of the end users. In each of these life worlds the members play their own ‘language game’. The meeting we witness now is too serious a matter to be a game.

But these are not the issues Chalmers considers in his essay. He focuses on the question about consciousness. I believe we can’t talk about consciousness without talking about morality, without talking about the powers that meet in the AI business, politics and ethics. In this meeting Chalmers plays the ‘language game’ of the scientific philosophers of mind.

Scientists not only rely on logic, but above all on observation. But what do they observe? Scientists want to measure. But how, and what? And how do they report on what they observe? What you ‘see’ depends very much on the method you use and the language you speak. Are rocks aware of each other’s presence? Is that a weird question? They attract each other. They obey the laws of Newtonian mechanics. Can we speak here of a rock’s behavior depending on its awareness of the presence of the other rock? Or is that too anthropomorphic a way of thinking and speaking? And what about the little plant that folds its leaves together at the slightest touch? Descartes was particularly interested in it because of its ‘emotional’ behavior. He wanted to show the mechanism behind this behavior. The outside world is, after all, one big mechanism. (Mimosa pudica is the Latin name of this ‘sensitive plant’.)

It is because we can trust that the falling stone obeys the mathematical laws of Newtonian mechanics that we can use the physical process as a computing device: it computes the value of the function that determines its speed at any height, given the initial height of the stone as input. We could say the stone is doing math. It is a primitive device. Analogue, indeed. But note that every electronic machine, every combination of digital logical circuits, deep down is basically a continuous physical process. That is what we find if we look deep enough inside our LLMs. This answers one of the questions Chalmers poses in his essay: what do we find if we look deep down into LLMs? Any chance that we find something we could call ‘consciousness’? Only bones (‘nur Knochen’), Hegel answered, when asked whether we would find a ghost when looking into our brains.
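To make the point concrete (a minimal sketch, assuming the stone is simply dropped from rest at height $h_0$ and air resistance is neglected), conservation of energy fixes its speed at any lower height $h$:

$$v(h) = \sqrt{2\,g\,(h_0 - h)},$$

with $g$ the gravitational acceleration. The falling stone ‘evaluates’ this function continuously; a digital computer only approximates it step by step.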

Morality

Why does it matter whether AI systems like social robots, autonomous cars and weapons are conscious (or have consciousness)?

“Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.”

“We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability.” 

Can a robot feel responsible, and can it be held accountable for what it does? For Kant, being responsible for one’s actions is the property that distinguishes a person from a thing. If we do not consider someone to be responsible for what he does, we do not take him to be a real person having a free will. The question is: is a robot a person or a thing? Some people see robots as slaves. But not human slaves.

Chalmers does not approach the problem from this practical and moral side, but from the bottom up: the question of what is consciousness and what are the characteristics of a sentient or conscious being.

LLMs and social robotic systems generate text which is increasingly humanlike. “Many people say they see glimmerings of intelligence in these systems, and some people discern signs of consciousness.”

Do they see ‘glimmerings of intelligence’ in the falling stone that computes its velocity obeying the mathematics of Newtonian mechanics? Or in the sentient plant that folds its leaves when touched by an unknown intruder? I see intelligent language use in the text on the information panel that says “You are here”, pointing at the spot on the city map that is supposed to correspond to the location in the real world where I am located when reading the text.

Chalmers is interested in much more complex instances of intelligent language use both in today’s LLMs and their successors. The idea is that consciousness requires complexity.

“These successors include what I’ll call LLM+ systems, or extended large language models. These extended models add further capacities to the pure text or language capacities of a language model. (…) Because human consciousness is multimodal and is deeply bound up with action, it is arguable that these extended systems are more promising than pure LLMs as candidates for humanlike consciousness.”

The reasoning parallels that of Turing in his famous “Can machines think?”. Turing proposed an imitation game to answer the question, assuming there is a difference between man and machine. The question he considers is what devices could play the role of the machine in his game. Turing, too, was not concerned with the question of whether the state-of-the-art digital machines of his day (in 1950) could think. The question was whether it is conceivable that in the future machines can no longer be distinguished from humans while playing the game. It is therefore about the potential of the ideal machine, which he mathematically defined by his Turing machine: a model that plays a role in thinking about computability comparable to the role the honest die plays in statistical thinking. Turing’s future digital devices that may play the role of the machine in his game of course include current LLM-based chat programs like ChatGPT. By the way: Turing did not discuss the question what entities could play the role of the human being in his imitation game. That would again have led him to discuss the difference between man and machine: how are they ‘generated’, ‘constructed’? For Turing a ‘human being’ is simply the product of a ‘natural’ process. Period. No technology is involved in the making of human beings.

Will LLM+s pass the Turing test? Is the Turing test a viable test for consciousness? There are serious doubts. The problem remains on what grounds we decide who may play the role of the human and who the role of the machine in the Turing test.

What is consciousness?

“Consciousness and sentience, as I understand them, are subjective experience. A being is conscious or sentient if it has subjective experience, like the experience of seeing, of feeling, or of thinking. (…) In my colleague Thomas Nagel’s phrase, a being is conscious (or has subjective experience) if there’s something it’s like to be that being.”

To me this means that a being has consciousness if we can somehow identify ourselves with that being. This means that a conscious being shows ‘behavior’ that makes sense, i.e. behavior that you can understand as meaningful for the being. So consciousness is a relational thing.

It also means that a conscious being is in a sense ‘autonomous’: it moves by itself, it is not moved by forces from the outside. It shows some shadow of free will. ‘Autonomous’ doesn’t mean independent. So consciousness is a relational thing, but the terms of the relation have some ‘autonomy’, some objectivity of their own.

“Importantly, consciousness is not the same as human-level intelligence. In some respects it’s a lower bar. For example, there’s a consensus among researchers that many non-human animals are conscious, like cats or mice or maybe fish. So the issue of whether LLMs can be conscious is not the same as the issue of whether they have human-level intelligence. Evolution got to consciousness before it got to human-level intelligence. It’s not out of the question that AI might as well.”

Consciousness is subjective experience according to Chalmers: “like the experience of seeing, of feeling, or of thinking”. What is missing here is that being conscious is always being conscious of ‘something’. Intentionality is the characteristic mark of the state of consciousness. We experience, we feel, we see something. And we think about something. Language, the human expression of thinking ‘as thinking’, reflects this intentional relation in its character of being meaningful.

“I will assume that consciousness is real and not an illusion. That’s a substantive assumption. If you think that consciousness is an illusion, as some people do, things would go in a different direction.”

Chalmers’ way of approaching the problem is along the lines of old-fashioned classical metaphysics and a tacitly assumed ontology. On the one side there are persons, whom we consider conscious beings, and on the other side there are things, like stones and tables. They don’t have consciousness. And then there are (intelligent) machines, like robots and LLMs. How do they fit in this ontology?

No distinction is made between LLMs, which are models, and working algorithms like ChatGPT. With a machine we can interact, it has a real interface; with mathematical (statistical) models we cannot physically interact. We can only think about them.

Chalmers’ approach is property-based instead of relational.

Chalmers takes an operational approach: he looks for distinguishing features X, properties that we can use to put things in the appropriate category, conscious or not conscious.

Some people criticize the property-based approach. According to David J. Gunkel and Mark Coeckelbergh, the problem of intelligent machines challenges ancient metaphysics and ontologies. They argue for a ‘deconstruction’ (Derrida) of the historical thought patterns that shape the debates on this subject. They do not primarily see engineering challenges in constructing consciousness; they see philosophical, ethical and legal challenges instead.

A relational approach does not compare humans and machines as if they were separately existing entities, conceptually independent of each other. Without man, a machine is not a machine. Machines are human constructs for human use. They are outward mathematical-physical objectivations of the human mind, the result of the self-reflection of human thinking. Mathematics had to reflect on itself; meta-mathematics was required before machines could become language machines and be programmed. The relation between machine and man is comparable to the relation between a word as a physical, observable sign and the meaning it has for us, that is, for the reader who recognizes the word. The meaning of a token is not something that can be reduced to physical properties of the token. Meaning is in the use of words, not something that exists as meaning outside language use. Information technology is based on a correspondence between thinking processes and physical processes.

“I should say there’s no standard operational definition of consciousness. Consciousness is subjective experience, not external performance. That’s one of the things that makes studying consciousness tricky. That said, evidence for consciousness is still possible. In humans, we rely on verbal reports. We use what other people say as a guide to their consciousness. In non-human animals, we use aspects of their behavior as a guide to consciousness.”

But how can a verbal report be evidence for consciousness? And how can ‘aspects of their behavior’ be a ‘guide to consciousness’? Isn’t it begging the question if you take these properties as features? Don’t you tacitly assume what you want to conclude? What is it that makes you see some phenomenon as intentional behavior of a conscious being and not just as a mechanical process?

“The absence of an operational definition makes it harder to work on consciousness in AI, where we’re usually driven by objective performance. In AI, we do at least have some familiar tests like the Turing test, which many people take to be at least a sufficient condition for consciousness, though certainly not a necessary condition.”

I dare to disagree. The Turing test shows how far we are in simulating conversational behavior that we consider an indicator of consciousness.

Evidence for consciousness in LLMs: Chalmers’ property approach

“If you think that large language models are conscious, then articulate and defend a feature X that serves as an indicator of consciousness in language models: that is, (i) some large language models have X, and (ii) if a system has X, then it is probably conscious.

There are a few potential candidates for X here.”
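Put a little more formally (my own paraphrase, not Chalmers’ notation), the request is for a feature $X$ such that

$$\text{(i)}\ \exists m\,\big(\mathrm{LLM}(m)\wedge X(m)\big), \qquad \text{(ii)}\ \forall s\,\big(X(s)\rightarrow \mathrm{Conscious}(s)\big),$$

where the conditional in (ii) is to be read as ‘makes it probable that’. The arguments against consciousness, later in the essay, take the mirrored form: the models lack $X$, and a system that lacks $X$ is probably not conscious.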

Chalmers considers four.

X = Self-reports

“These reports are at least interesting. We rely on verbal reports as a guide to consciousness in humans, so why not in AI systems as well?” Chalmers concludes, and I agree, that these self-reports do not provide a convincing argument for consciousness.

X = Seems-Conscious

“As a second candidate for X, there’s the fact that some language models seem sentient to some people. I don’t think that counts for too much. We know from developmental and social psychology, that people often attribute consciousness where it’s not present. As far back as the 1960s, users treated Joseph Weizenbaum’s simple dialog system, ELIZA, as if it were conscious.”

This is an interesting comment that Chalmers makes. ELIZA shows how easy it is to come up with a conversational algorithm that is convincing to its users (sometimes, for some time). LLMs are much more complex and they simulate more than a Rogerian psychotherapist, but when they work convincingly they work on precisely the same principle: functional language use.

X = Conversational Ability

“Language models display remarkable conversational abilities. Many current systems are optimized for dialogue, and often give the appearance of coherent thinking and reasoning. They’re especially good at giving reasons and explanations, a capacity often regarded as a hallmark of intelligence.

In his famous test, Alan Turing highlighted conversational ability as a hallmark of thinking.”

See above for my comment on the Turing test.

X = General Intelligence

“Among people who think about consciousness, domain-general use of information is often regarded as one of the central signs of consciousness. So the fact that we are seeing increasing generality in these language models may suggest a move in the direction of consciousness.” 

Chalmers concludes this part of the analysis:

“Overall, I don’t think there’s strong evidence that current large language models are conscious. Still, their impressive general abilities give at least some limited reason to take the hypothesis seriously. That’s enough to lead us to considering the strongest reasons against consciousness in LLMs.”

Arguments against consciousness

What are the best reasons for thinking language models aren’t or can’t be conscious?

Chalmers sees this as the core of the discussion. “One person’s barrage of objections is another person’s research program. Overcoming the challenges could help show a path to consciousness in LLMs or LLM+s.

I’ll put my request for evidence against LLM consciousness in the same regimented form as before. If you think large language models aren’t conscious, articulate a feature X such that (i) these models lack X, (ii) if a system lacks X, it probably isn’t conscious, and give good reasons for (i) and (ii).”

X = Biology

“Consciousness requires carbon-based biology.”

“In earlier work, I’ve argued that these views involve a sort of biological chauvinism and should be rejected. In my view, silicon is just as apt as carbon as a substrate for consciousness. What matters is how neurons or silicon chips are hooked up to each other, not what they are made of.”

Indeed, functions and information processes can be implemented in whatever material substrate. What matters is the structure, the structural correspondence between the physical processes and certain cognitive processes as we model them.

X = Senses and Embodiment

A meaningful text refers to something outside the text. What is that ‘something outside’? Some people think we are imprisoned in language, but when they express this thought they mean something by it. How are the symbols that LLMs generate grounded in something outside the text? Living beings have a number of senses that connect them with the world outside.

“Many people have observed that large language models have no sensory processing, so they can’t sense. Likewise they have no bodies, so they can’t perform bodily actions. That suggests, at the very least, that they have no sensory consciousness and no bodily consciousness.”

Note that Chalmers introduces here variants of consciousness, ‘sensory’ and ‘bodily’ consciousness. Later on we will also have ‘cognitive’ consciousness.

Thinking about sensory perception in technical systems we draw a line between what belongs to the system itself and what is outside the system. What kind of border line is this?

Computers are good at playing chess. They beat human world champions. But how good are they at playing blindfold chess? What is the difference between a computer playing blindfold chess and playing chess? Thinking about this, it seems to matter through what kind of device information enters the machine. A blindfold chess player may not use his eyes, or any other device, to see the actual state of the board whenever he pleases. He only has his memory to visualize the position internally and to update the state of the game after each move. But for a technical device like a robot, what difference does it make if we do not attach a video device? The only way to simulate the difference is by specifying the memory function.
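As a minimal sketch of that last point (hypothetical code, not any real chess engine or robot API): the only difference between a ‘sighted’ and a ‘blindfold’ playing program is whether the position it reasons over comes from a sensor or from its own memory of the moves played.

```python
# Minimal sketch (hypothetical names, no real chess or robotics API):
# the engine is the same; only the source of the board state differs.

class BoardCamera:
    """Stands in for a video device looking at the physical board."""
    def __init__(self, squares):
        self._squares = dict(squares)

    def read(self):
        return dict(self._squares)   # 'seeing' the current position


class BlindfoldMemory:
    """No sensor: the position is reconstructed from remembered moves."""
    def __init__(self, squares):
        self._squares = dict(squares)

    def apply(self, move):
        src, dst = move
        self._squares[dst] = self._squares.pop(src)   # update internal state

    def read(self):
        return dict(self._squares)   # 'visualizing' from memory


def choose_move(position):
    """Placeholder engine: indifferent to where the position came from."""
    return sorted(position)[0]


start = {"e2": "P", "e7": "p"}

sighted = choose_move(BoardCamera(start).read())

memory = BlindfoldMemory(start)
memory.apply(("e2", "e4"))
blindfold = choose_move(memory.read())
print(sighted, blindfold)
```

The ‘memory function’ mentioned above is exactly the BlindfoldMemory part; the border between ‘seeing’ and ‘remembering’ lies wherever we decide the read() comes from.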

If a robot has a body, where does the body of the robot end? What is the border between the body and the outside world?

In “Can Large Language Models Think?” Chalmers argued that “in principle, a disembodied thinker with no senses could still have conscious thought, even if its consciousness was limited.” An AI system without senses, a “pure thinker” could reason about mathematics, about its own existence, and maybe even about the world. “The system might lack sensory consciousness and bodily consciousness, but it could still have a form of cognitive consciousness.”

Indeed, the computer that plays chess is actually playing a mathematical game with exact rules for manipulating symbols. For us it ‘reasons about the world’ of chess, because for us the symbols it manipulates, implemented in some physical process, refer to the pieces on a chessboard.

Chalmers’ ‘pure thinker’ is “a (possibly nonhuman) thinker without sensory capacities”. For Chalmers it seems to be obvious that a pure thinker “can know a priori truths e.g. about logic, mathematics”. However, without embodiment, without sensory perception, without a world thought and experienced as being outside the mind, there would be no mathematics. It is not the content of sensory perception that is the sensory basis of mathematical thought, but the immediate extensiveness of the perception. Reality obeys the principle of structurability, which means that everything has a quantitative moment by which it is countable, measurable, structurable. It is through our embodiment that we experience direct physical contact with the world outside us. This experience is present in every sensory perception. We perceive this working, by which the experience is physically possible, as an effect on our body. Together with this effect we experience the extensiveness of this effect. Without this grounding of mathematical thought in sensory perception it is hard to understand the ubiquitous applicability of mathematics.

“LLMs have a huge amount of training on text input which derives from sources in the world.” Chalmers argues “that this connection to the world serves as a sort of grounding. The computational linguist Ellie Pavlick and colleagues have research suggesting that text training sometimes produces representations of color and space that are isomorphic to those produced by sensory training.”

The question is for whom these ‘representations’ exist. Consciousness is always consciousness of something that exists somehow distinguished from the act or state of consciousness. It means at least that the conscious being is aware of this distinction, i.e. that there is something out there it is aware of.

It will be clear that the challenge of the embodiment feature is closely related to the following feature.

X = World Models and Self Models

“The computational linguists Emily Bender and Angelina McMillan-Major and the computer scientists Timnit Gebru and Margaret Mitchell have [argued] (in their famous Stochastic Parrots paper) that LLMs are “stochastic parrots.” The idea is roughly that like many talking parrots, LLMs are merely imitating language without understanding it. In a similar vein, others have suggested that LLMs are just doing statistical text processing. One underlying idea here is that language models are just modeling text and not modeling the world.”

This amounts to saying that LLMs do not know the facts. They do not know what is true.

Chalmers’ comment is interesting. He observes that “there is much work on finding where and how facts are represented in language models.”

This comment suggests that Chalmers considers ‘facts’ to be objective truths, or objects, like theorems in mathematics. As if the engineers can decide what the facts are. The issue of power that I mentioned before pops up here. What Chalmers seems to forget is the role that the user, the reader of the generated texts, plays. It is the reader who gives meaning to the text.

AI has no problem in generating (creating, if you wish) huge amounts of videos, texts and music by remixing existing fragments. But it needs humans to evaluate the quality, to make sense of it. The proof of the pudding is in the eating, not in the making.

As OpenAI, the producer of ChatGPT, rightly states: it is the responsibility of the user to check the value of the texts generated by the machine. This is the very reason they warn against using it uncritically in critical applications.

What is true is not the result of an opinion poll. It is not by means of statistics that we decide what the facts are. To give an example from my personal experience: if Google includes articles written by my son (who happens to have the same name as I have) in my publication list, it doesn’t matter how many people using Google copy the faulty references; the truth differs from what LLMs and all Google adepts ‘believe’ it is. It is well known that ChatGPT isn’t very reliable in its references to the literature. This is of course an instance of its ‘unreliable connection’ with the world in general.

X = Unified Agency

“The final obstacle to consciousness in LLMs, and maybe the deepest, is the issue of unified agency. We all know these language models can take on many personas. As I put it in an article on GPT-3 when it first appeared in 2020, these models are like chameleons that can take the shape of many different agents. They often seem to lack stable goals and beliefs of their own over and above the goal of predicting text. In many ways, they don’t behave like unified agents. Many argue that consciousness requires a certain unity. If so, the disunity of LLMs may call their consciousness into question.”

A person is a social entity, a unity of mind and body. A human being doesn’t only have a body, it is a body. The type of relation between mind and body is at stake.

The identity of a machine is a mathematical identity implemented in a physical world. The machine is a working mathematical token. There are many tools and machines of the same type. They share the same mathematical identity, but they differ in matter, like two pennies, or two screwdrivers. Technical instruments exist as more of the same kind. They are not unique. We assign identity to the social robot and give it a name, as we do with our pets, the way we assign identifiers or unique service numbers to citizens.

Chalmers concludes this part with:

“For all of these objections except perhaps biology, it looks like the objection is temporary rather than permanent.”

The AI engineer says: “Tell me what you miss in current AI systems and I will tell you how to build it in.”

The idea is that by this process of adding more and more features AI will eventually reach a stage where we can say that we managed to realize an artificial conscious being.

This testifies to the typically mathematical stance that the engineer takes. The idea is a mathematical entity that exists as the limit of a real process. As if we could produce mathematical circles from physical matter in a circle factory.

Chalmers’ conclusion

In drawing a general conclusion Chalmers is clearly walking on eggshells.

“You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.

It seems entirely possible that within the next decade, we’ll have robust systems with senses, embodiment, world models and self models, recurrent processing, global workspace, and unified goals. 

It also wouldn’t be unreasonable to have at least a 50 percent credence that if we develop sophisticated systems with all of these properties, they will be conscious.”

He mentions four foundational challenges in building conscious LLMs.

  1. Evidence: Develop benchmarks for consciousness.
  2. Theory: Develop better scientific and philosophical theories of consciousness.
  3. Interpretability: Understand what’s happening inside an LLM.
  4. Ethics: Should we build conscious AI?

Besides these ‘foundational challenges’, Chalmers mentions a couple of engineering challenges, such as: build rich perception-language-action models in virtual worlds.

And if these challenges are not enough for conscious AI, his final challenge is to come up with missing features.

The final question is then:

“Suppose that in the next decade or two, we meet all the engineering challenges in a single system. Will we then have a conscious AI system?”

Indeed, not everyone will agree that we do. But Chalmers is optimistic about what engineering can offer: if someone disagrees, we can ask once again, “what is the X that is missing? And could that X be built into an AI system?”

“My conclusion is that within the next decade, even if we don’t have human-level artificial general intelligence, we may well have systems that are serious candidates for consciousness.” (Chalmers)

My ‘conclusion’ would be that Chalmers’ conclusion will recur after each decade in the future. I believe so because Artificial Intelligence is an Idea, an ideal if you wish, that technology tries to realize and that the big AI enterprises try to sell on the market as the ideal that we have to strive for. An idea that will remain an ideal until we have other ideals to strive for. That will not be before we have answered the question what it is that we strive for.

If the machine had consciousness, it would fight for its own truth to the death.

History repeats itself.

When the Indian inhabitants of North America saw for the first time a steamboat coming down the Mississippi river, they thought it was a living creature with a soul.

When Descartes and La Mettrie came up with the mechanical ‘bête machine’ and the theory of the ‘homme machine’, it took a few centuries before the heated debates about the difference between man and machine faded out and man became just man again and a machine just a machine. With the LLMs and the talking social robots the same debate recurs. It won’t take long before this heated discussion about conscious machines will also cool down and a machine will again be just a machine and a human being a human being. The difference between the situation now and the situation in the 17th and 18th centuries is that AI is nowadays promoted by powerful commercial enterprises that control the thinking of the masses addicted to information.

The attempts to make us believe that machines (LLMs) are (potentially) conscious beings that are able to know the world and that we can hold responsible for what they do support the powerful forces that keep us addicted to the information they produce. Addicted to a religious ideal image of man that the future of AI would bring us.

As long as we are free men we will never accept that machines are held responsible for what they ‘do’. Just as we do not hold any God responsible for what happens in the world.

Death of an Author

In his post “The Future of Writing Is a Lot Like Hip-Hop” in The Atlantic of 9 May 2023, Stephen Marche reports his findings from his collaboration with ChatGPT in writing their novel Death of an Author. In that report he comments on how users ask ChatGPT things. “Quite quickly, I figured out that if you want an AI to imitate Raymond Chandler, the last thing you should do is ask it to write like Raymond Chandler.” Why not? Because “Raymond Chandler, after all, was not trying to write like Raymond Chandler.”

I believe this points to the core insight into why AI is not human. AI behaves like humans, or at least it tries to behave like them. But humans do not try to behave like human beings. They do not even behave ‘like human beings’.

What I mean is that we make a path by walking. The path is the result, the history, of this act. The path is the walking, abstracted from the real act of walking. The real act of following a path always involves and presupposes the original act of making a path.

What we can learn from Marche’s report is that AI is not so much a machine as a tool. A tool requires, for its successful use, craftsmanship and a lot of experience from the human user. There is not one path; there are many you have to choose from.

Notes and references

In a 2020 survey of professional philosophers, around 3 percent accepted or leaned toward the view that current AI systems are conscious, with 82 percent rejecting or leaning against the view and 10 percent neutral. Around 39 percent accepted or leaned toward the view that future AI systems will be conscious, with 27 percent rejecting or leaning against the view and 29 percent neutral. (Around 5 percent rejected the questions in various ways, e.g. saying that there is no fact of the matter or that the question is too unclear to answer.)

David Bourget and David Chalmers (2023). Philosophers on Philosophy: The 2020 PhilPapers Survey. Philosophers’ Imprint, January 2023. https://philpapers.org/archive/BOUPOP-3.pdf

David J. Chalmers (2023). Could a Large Language Model Be Conscious? Boston Review, 9 August 2023.

Mark Coeckelbergh (2012). Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan, 2012.

Mark Coeckelbergh (2014). The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics. Philosophy & Technology 27 (2014): 61–77.

David Gunkel (2018). The Other Question: Can and Should Robots Have Rights? Ethics and Information Technology 20 (2018): 87–99.

David J. Gunkel (2023). Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond. The MIT Press, forthcoming September 2023.


Rieks op den Akker was a researcher and lecturer in artificial intelligence, mathematics, and computer science at the University of Twente. He is retired.
