David J. Gunkel is an American academic and Presidential Teaching Professor of Communication Studies at Northern Illinois University. He teaches courses in web design and programming, information and communication technology, and cyberculture.
Gunkel has an impressive list of books to his name, including An Introduction to Communication and Artificial Intelligence, Robot Rights, The Machine Question, Of Remixology, and (together with Paul A. Taylor) Heidegger and the Media. The long-awaited Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond was recently published by MIT Press. He is currently touring universities in the US and Europe to lecture and to publicize his book and his ideas about the rights of robots. Gunkel is an enthusiastic speaker and is active on X; I became familiar with his work through Twitter.
A central theme in his work is the question whether robots, intelligent social machines, should be regarded as moral 'agents' and whether they should also be granted rights in a legal sense, as humans, animals, rivers and organizations have. Gunkel claims that the phenomenon of AI, and social robots in particular, disrupts our thinking frameworks. Where should we place the robot ontologically? Is the robot a thing or a person? His answer: the robot fits neither category. We need to rethink the old metaphysics. Back to Plato.
The term Gunkel uses for this project is deconstruction. The concept is central to his book Deconstruction. The term comes from Jacques Derrida, who, alongside Emmanuel Levinas, is one of the two French thinkers against whose ideas Gunkel measures his own. Gunkel's deconstruction project consists of two phases, an analytical phase and a synthetic phase; one could say that in the first he is mainly in dialogue with Derrida, and in the second with the ideas of Levinas, for whom the ethical (the Other) is prior to ontology (Being itself). The notion of 'face' (Dutch: 'gelaat', not 'gezicht') plays a key role. (*) Here lies a link with the technical 'interface' of man and machine: (programming) language is that interface.
The ethical issue could be phrased as: does the machine have a face ('gelaat')? Does the robot appeal to us in a moral sense? It is known from human-computer interaction research that users tend to treat the machine as a 'social agent' in interactions. Giving the machine all kinds of anthropomorphic characteristics (a face) reinforces this tendency. Engineers do not teach the robot to speak and understand our language in order to mislead the user into thinking that the robot is one of us, but because it benefits operation, interaction and ease of use. The user-unfriendliness of AI and the robot, on the other hand, lies in the fact that the robot cannot be held accountable. That is the practical problem of AI. Can we leave the thing to its own devices? It is therefore exciting to hear what Gunkel has to say about this. Can the robot one day become one of us and be brought to justice? I will come back to this later.
Shared interest
I share with Gunkel an interest in the phenomenon of artificial intelligence and language. The subject of technology, its historical development as part of anthropology, has kept me busy since my studies in mathematics and computer science in Twente. I am mainly inspired by my graduation supervisor, the logician and philosopher Louk Fleischhacker. I attended his lectures in Mathematical Logic, Philosophy of Mathematical Thinking and Information Technology. Through these lectures I became acquainted with the work of the great philosophers: Aristotle, Hegel, Frege. We studied the work of Hegel together, and he introduced me to the work of the philosopher Jan Hollak: Hegel, Marx and Cybernetics (1963) and Van Causa sui tot Automatie (1966). These insights are still of great value to the debate about automation. In Amsterdam I participated in the Anthropology of Technology study group led by Maarten Coolen, who obtained his PhD on this subject.
At the end of 1978 I graduated from the Theoretical Computer Science department of the Technical University of Twente (now University of Twente) on the formula Z(Z) = Z(Z), where Z = λx.x(x) is the lambda term that stands for the self-application function.
The Z(Z) to the left of the = sign should be read as an application (or a command to apply) the self-application function Z to itself as an argument. The Z(Z) on the right is the result of this operation. So it is a 'dynamic equality', in principle not much different from more familiar 'equalities' such as 4/8 = 1/2 or 5 + 7 = 12. What is on the left is (an assignment for) an operation, but also an indication of the result. What is on the right is the result again, but in normal form. In Z(Z) = Z(Z) the operation therefore turns into the operation itself, ad infinitum, just as an unloaded engine keeps running on its own. Completely useless, but self-application is found everywhere people try to understand automation or life mathematically. In the programmable machine the mathematical signs do their work; their meanings are realized in it and reduced to a normal form that we can understand. (ChatGPT attempts to normalize our language by imposing the historical articulation of ideas as a norm on the reader.)
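To see this behaviour concretely, here is a minimal sketch in Python (my own illustration, not part of the original thesis), with the self-application function written as a lambda expression:

```python
# Self-application: Z = λx.x(x) as a Python lambda.
Z = lambda x: x(x)

# Z(Z) reduces to Z(Z), which reduces to Z(Z), ... ad infinitum,
# like the unloaded engine that keeps running on its own.
# Python evaluates eagerly, so the call below never produces a value;
# the interpreter's recursion limit eventually stops it.
try:
    Z(Z)
except RecursionError:
    print("Z(Z) -> Z(Z) -> Z(Z) -> ...")
```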
So my graduation project was about ‘the self-application of the self-application as a mathematical expression of the automaton as the external objectification of the self-reflection of the reflection of the mind’.
I worked as a mathematics and physics teacher and obtained my PhD on a topic in theoretical computer science (compiler construction). I then worked as a researcher and teacher in the field of AI and (Bayesian) statistics, where I studied the work of the physicist E.T. Jaynes (Probability Theory: The Logic of Science) and of Judea Pearl (Causality, The Book of Why). Our students learned how to train networks that could generate Shakespearean texts. We used Bayesian networks for automatic dialogue act recognition. My lectures were called Conversational Agents and Formal Analysis of Natural Language, about which Louk once remarked: "If you want to formalize language, you have to formalize the whole person, so that will take a while." We now know how true this is. The conversational agents soon became 'embodied conversational agents', and robots acquired more and more human features after non-verbal gestures were also 'formalized' (see the 'facework' of the sociologist Erving Goffman). We built virtual suspect characters that could replace the expensive human actors who played suspects in serious games for police interrogation training (Bruijnes 2016). However, we are not yet finished with 'the construction of the artificial human'.
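To give an impression of what 'automatic dialogue act recognition' involves, here is a toy sketch in Python (my illustration with invented data; the models in our research were richer Bayesian networks). It uses naive Bayes, the simplest special case of a Bayesian network, scoring P(act | words) ∝ P(act) · Π P(word | act):

```python
from collections import Counter, defaultdict
import math

# Invented toy data: utterances labelled with a dialogue act.
training = [
    ("question", "where were you last night"),
    ("question", "did you see the victim"),
    ("denial",   "i did not do it"),
    ("denial",   "i was not there"),
]

prior = Counter()                   # counts for P(act)
word_counts = defaultdict(Counter)  # counts for P(word | act)
for act, utterance in training:
    prior[act] += 1
    word_counts[act].update(utterance.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(utterance):
    """Return the act maximizing log P(act) + sum of log P(word | act)."""
    def score(act):
        total = sum(word_counts[act].values())
        s = math.log(prior[act] / sum(prior.values()))
        for w in utterance.split():
            # add-one smoothing handles unseen words
            s += math.log((word_counts[act][w] + 1) / (total + len(vocab)))
        return s
    return max(prior, key=score)

print(classify("did you see it"))  # -> question (on this toy data)
```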
When we have a conversation with ChatGPT, we still have to think of ‘the person behind the curtain’. I worked in various European research projects, all of which focused on the interaction, use and interface of humans and intelligent machines. In the last project we developed and tested a serious virtual reality game for children with diabetes to teach them to manage their daily lives in a playful way. The notion of play is important for the discussion about the independence of technology and robot rights in particular.
Louk obtained his PhD on the concept of quantity in Aristotle and Hegel, whom he tries to reconcile in his own unique way. He then wrote his book Beyond Structure. Fleischhacker points out the tendency of philosophers towards 'mathematism': the view that structurability is the essence of observable reality and that knowing something means being able to give it a structure, to make a mathematical model of it.
“The enterprise, of which this book is a report, consists of an attempt towards a systematic ‘deconstruction’ of mathematism.” This is what Louk Fleischhacker writes in the introductory chapter of his Beyond Structure (1995) about the power and the limits of mathematical thinking.
Deconstruction
Beyond Structure is an attempt at a 'systematic deconstruction of mathematism', with Fleischhacker pointing out that deconstruction itself also entails a metaphysical position that should not remain implicit.
In their article 'ChatGPT: Deconstructing the Debate and Moving It Forward', the authors Marc Coeckelbergh and David J. Gunkel attempt to revive the debate about the meaning of ChatGPT through a 'deconstruction' of the old metaphysical contradictions to which both sides cling. What does deconstruction mean?
In Derrida, ‘deconstruction’ refers to the attempt to show the context-dependent history of philosophical texts. These texts are regarded as the traces of thought constructions rather than as the names of transcendental principles. The deconstruction of a way of thinking therefore comes down to showing how it came about. (Fleischhacker, note on page 17).
"Broadly speaking, deconstruction means here (and in this paper) that one questions the underlying philosophical assumptions and core concepts, rather than merely engaging with the existing arguments." (Coeckelbergh and Gunkel, 2023)
Questioning the underlying assumptions therefore involves rereading and reinterpreting historical texts. An analysis of the historical moments in the development of mathematics that ultimately led to information technology will involve both a deconstruction of mathematism and a deconstruction of the classical metaphysical contradictions that dominate the debate about information technology: in Gunkel's case subject-object, but especially thing-robot-person. Fleischhacker's Beyond Structure is mainly about the limits of mathematical thinking, which also mark the limits of the 'applicability of mathematics' and of (information) technology. To see this, we must delve deep into the history of modern Western thought. Descartes, Leibniz, Hume, Hegel, Frege and Wittgenstein are our most important interlocutors. They all, to a large extent, thought mathematically. Descartes' metaphysics is essentially mathematical: his dualism is characterized by a strict separation between two substances, the thinking I (res cogitans) and, over against it, reality, which is essentially extension (res extensa).
The Dutch philosopher H. Pos, in the foreword to Het Vertoog over de Methode (1937), the Dutch translation of the Discours de la Méthode (1637), calls Descartes' metaphysics a mathematical metaphysics. But Leibniz's metaphysics also has something mathematical about it. And in Kant we come across statements indicating that he saw the amount of mathematics in the natural science of his time (with Newtonian mechanics as its paradigm) as a measure of its knowledge content. Modern Western philosophy after Kant also finds it difficult to escape mathematical thinking, even though all kinds of 'postmodern' and 'structuralist' views on knowledge and language claim to have freed themselves from Cartesianism, the strict mathematical dualism.
The mathematician states that reality is such and so ("Die Welt ist alles, was der Fall ist") and never retraces his steps. Any form of self-reflection of this thinking, in which subject and object of thinking are thought of as a relationship, leads to paradoxes: the concept of the set cannot be expressed mathematically as a set (Russell). Hegel tries to understand mathematical physics as an adequate expression of the concept of nature; the physicist himself does not do that. Hegel sees the essential characteristics of mathematical thinking and distinguishes it from historical and philosophical thinking, but as a system builder he does not seem able to escape mathematics completely. In the meantime, in his Logik he lays the conceptual basis for the concept that would dominate our lives from the beginning of the 20th century to the present day: information, the expression of the qualitative in a quantitative way. Does Gunkel escape the temptation of mathematical, technical thinking? To what extent is his project similar to Fleischhacker's? Does he come up with a new metaphysics? A new metaphysics that fits our present technological era will express a freer relation to history: a relation in which we are no longer dominated by our interpretations and wordings of the past, interpretations that we now use to legitimize our current stance and behavior towards the other, as we see in the world's tragic conflicts.
For Louk, mathematics finds an essential limit in life itself. Postmodern philosophy finds its key concept in intersubjectivity: the interaction of personal perspectives on reality, determined by the individual background of lived life. In Hegel, technology passes into the relationship of man and woman (see Jenaer Realphilosophie).
The first AI scientists saw their goal as creating a person. It is the modern version of an age-old tradition: man wants to make himself a human being through technical means. According to Hollak, the automaton is the external objectification of the technical idea as such. Descartes' idea of God (God as Causa sui) is the self-projection of the autonomously thinking human being. AI is the (provisional) end product of the technical project of modern man, which rests on the counterfactual postulate that man is a machine, and which aims at realizing (implementing) a mathematical construction, a formal model of (thinking) human behavior. This is the key to understanding the importance that theologians attach to artificial intelligence: "God is dead and technology is his corpse." Descartes' God as Causa sui is the (projection of) modern enlightened man, who tries to realize this in the robot.
Robot rights?
Gunkel's message is that we should not wait any longer to work on an answer to the question of whether the robot is a moral agent and should be granted rights in a legal sense. He is not alone in this. We hear the same from Max Tegmark's Future of Life Institute. Concerned scientists and ethicists have even called for a moratorium on AI development. The European Union has established rules for the ethics of AI development. And the US has laws regarding the participation of robots (delivery robots) and autonomous cars in traffic. There are also rules for autonomous weapon systems. In short: the robots are already among us, and there are already rules and laws for specific situations in which they are used. The question Gunkel asks, however, goes further and is more fundamental in nature. Gunkel is concerned with the issue of whether or not the robot is 'a person in a legal sense'.
The robot as a social relation and as a cognitive relation
A key concept in Gunkel's notion of the social robot is that what the robot is is determined by the way in which we relate to it. Marc Coeckelbergh also points this out.
Gunkel speaks of a 'relational turn' in our thinking about the relationship with others. It is not the case that we first establish intellectually, on the basis of certain properties (such as having consciousness), whether the other belongs to our moral circle. It is exactly the opposite.
“In social situations, then, we always and already decide between who counts as morally significant and what does not and then retroactively justify these actions by “finding” the essential properties that we believe motivated this decision-making in the first place. Properties, therefore, are not the intrinsic prior condition for moral status. They are products of extrinsic social interactions with and in the face of others.” (David J. Gunkel, 2022)
In an extensive review of Person, Thing, Robot by Abootaleb Saftari we read:
“Finally, the concept of ‘relation’ remains largely unexplored in Gunkel’s argument. It feels like a mystery, a “black box,“ with only a faint outline suggesting its social nature. This lack of clarity provides ground for further critique. One could argue that even if we agree with Gunkel’s relational perspective, our tendency to treat things as objects, to objectify them, might itself stem from our relational interactions with them.”
This is an important point. Gunkel thinks from the use of the machine; his remark that the robot is what it is in relation to humans concerns this relationship of use. The robot is a social agent. As a member of the Human Computer Interaction group, in which we created conversational agents, I have been involved not only in research into the social relationship with the robot, but especially into the cognitive human-machine relationship. The latter is seen from the maker's perspective. Man invented the machine. It is a realization of the technical idea, defined by Hollak as follows.
The technical idea is that abstract form of understanding in which man expresses his mastery of nature through an original combination of its forces.
That technical idea has developed through history in interaction with its realizations, a development that runs 'parallel' with the development of the relationship between mathematics and nature. Information technology presupposes and is an expression of self-reflexive meta-mathematics, the mathematics of mathematical thinking.
The machine (and the programmable machine, or automaton, is a self-reflexive form of the machine) is a cognitively relative notion.
The intentional correlate
The Dutch philosopher Jan Hollak shared his thoughts on the phenomenon of the 'thinking machine' and the 'conscious machine' with his readers in several articles. In his famous inaugural lecture Van Causa sui tot Automatie (From Causa sui to Automation) he places the following footnote.
“If this is constantly mentioned in connection with mechanisms of ‘reflection’, ‘self-reflection’, etc., then this obviously never refers to the subjective act-life of human thinking (machines have no consciousness), but always only to its intentional correlate.” (In: Hollak and Platvoet, footnote on p. 185)
In this footnote, Hollak consigns an entire bookcase full of philosophies that assume the possibility of the existence of the 'thinking machine' to the realm of fables.
In Meeting on neutral ground. A reflection on man-machine contests, the mathematician and logician Albert Visser says:
“After all, machine and program are intentional notions. So to understand the machine, we need to understand man.” (Albert Visser, 2020).
To understand the machine we must understand the human, because the machine is an intentional concept.
However, for many people it is not at all 'obvious' that the machine 'has no consciousness'. The term 'intentional correlate' comes from a movement in philosophy, phenomenology, that is not very popular among scientists and philosophers. Yet it is an important notion, to which several philosophers have devoted entire books (Searle, 1983; Dennett).
So it’s about understanding understanding. The machine is an expression of our understanding and we should understand that.
The object of the intentional act (consciousness is always consciousness of something; we always think something, a thought) is called in the phenomenological tradition the 'intentional correlate' of the act. When I think of the lamp, the idea of the lamp is the intentional correlate of my thought. We make a distinction between the state of the lamp (it is on or off) and the state of the lamp as a representation of the state of the technical system of which it is a part. The indicator lamp of a switch in a car has a function in a working system, in the car, in a machine. Its state is a state as I understand it: the lamp is either on or off, as part of a system. That state, as a state of a system, is not something inherent to the lamp in itself; we conceive it as such. For those unfamiliar with the technical construction, the lamp is only what it immediately is in experience. For those who do not know what arithmetic is, the calculator does not exist.
We can therefore only consider the machine meaningfully in relation to humans, because it exists as a machine only in a cognitive relation to humans. (Just as scribbles on paper represent words only to those who know what language is.) The essence of the machine is the technical idea, the concept.
We can distinguish between man and machine, but not separate them. Just as we can distinguish the inside and outside of a cup, but we cannot really separate them. They belong together as the two relata of a relationship. They do not appear separately in reality next to each other. Now, in the human-machine relationship, both also have independence. They appear outside each other as ‘things’ in reality. Man stands opposite the machine and can operate the machine through physical contact. This makes many people forget that the machine is only a machine in relation to humans. When we talk about that thing as a machine and use it as a machine, we are talking about a cognitively based relationship that is objectified in a material form: the thing is an expression, a realization of a concept, a design. Without that relative moment to the human who designed the thing, the thing is not a machine, but merely a physical process without meaning.
I therefore object when headlines once again state that “AI performs better on a certain task X than the human expert.” Or that “the computer makes better decisions than humans.”
The computer cannot make any decisions at all; it works on the basis of combinations of logical electronic circuits whose function is the representation of a logical rule of thought in mathematical form: if A then B else C. It is important not to identify freedom with freedom of choice. The machine is programmed to make a choice, but it is not free to determine the meaning, the value, of the factors that determine the choice. The drone does not know the value of its target, the life and death of the enemy.
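To make this tangible, here is a minimal sketch in Python (my own illustration, not from the text): the rule "if A then B else C" realized as a fixed combination of logic gates, a two-way multiplexer. Nothing in it decides anything; its "choice" is exhausted by the truth table.

```python
def mux(a: bool, b: bool, c: bool) -> bool:
    """Two-way multiplexer: out = (A AND B) OR ((NOT A) AND C)."""
    return (a and b) or ((not a) and c)

# The "choice" is fully determined by the circuit's form:
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            assert mux(a, b, c) == (b if a else c)  # if A then B else C
```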
On the use of the term ‘intentional correlate’.
The mistake people often make is to consider the nature of a machine only in terms of content ('materialiter') and not also in terms of form (as the nature of the machine, 'formaliter'). The light on the dashboard is in a state, 'on' or 'off', which does not simply indicate whether the light itself is on or not (which, as Austin rightly points out in Other Minds, is an 'absurd thought'); it is a state that refers to a technical design, a system, in which being 'on' or 'off' has a function. For example, it is a switch that works as an interface for the user to control the air conditioning.
The moral status of the robot
For Gunkel, the social, practical relationship with the robot is the starting point for the ethical, moral and legal status of the robot. The question whether the robot will ever be able to take responsibility must, in my opinion, be answered with a firm no. This does not alter the fact that a robot that participates in social interaction must 'adhere' to social rules. The 'behavior' of the robot may be the cause of an accident, but the robot cannot be held liable for its consequences. It is because of the robot's technical, cognitive, intentional mode of being that the robot is not a moral subject and cannot be regarded as a person in a legal sense. The robot cannot derive any rights from the fact that animals or organizations have certain rights: as a technical design, it does not fit into this classification of living organisms.
As far as the robot’s participation in social intercourse (work, play) is concerned, we must distinguish between the internal rules of this intercourse that the robot must adhere to, and the external rules that determine the conditions under which a robot can function as an ‘autonomous’ player and may participate in the ‘game’. Ultimately, that decision will have to be made by a human and cannot be left to a robot. But perhaps Gunkel is of the opinion that this is not impossible in the future and that the robot will therefore one day decide for itself whether it can and will participate in certain games.
It is one of the characteristics of mathematism to see life as a game, or life form, and to conceive of language purely functionally: the meaning of words is determined by their use in a language game that belongs to a life form. We have now entered Wittgenstein's world of thought. His strict separation (not just distinction) between Sagen and Zeigen is a sign of his mathematical attitude, a legacy of Frege, who strictly separated signs and their meanings as different 'Gegenstände'. Seeing language as purely functional, as an instrument, and losing sight of the verbal character of language is characteristic of the idea underlying those views that see the AI machine in the form of a chat agent like ChatGPT as a truly intelligent thinking machine, one that would have consciousness. The motive is that the machine seems to use language, and that language in humans is the expression of thinking as thinking. But the machine does not 'use language' the way humans use language. Man tries to express in words reality as he experiences it, at the same time creating language. Machines cannot do that. They are a reflection of the historical products of those attempts at articulation, without taking the historical character into account and without understanding it as historical.
Cybernetics and the question of AI’s ownership
A subject to which Gunkel pays a lot of attention is the relationship between text and author. He goes back to texts by Plato and the discussion about the relationship between spoken and written language. In the latter, the author is not present; he cannot answer questions about the text. The classical notion of 'author' should be subjected to a revision (deconstruction) because of the phenomenon of the language-generating machine. Strikes by authors and artists testify to the unrest caused by programs such as ChatGPT: people are afraid of losing their jobs to AI, just as the spinners and weavers working from home defended themselves against the arrival of the spinning jenny and the factory production of woven fabrics. There is no stopping the development of technology. We will have to learn to live with it, until the tide turns the ship.
Authorship is a form of intellectual property. In our super-individualistic society (the US seems to be even worse than Europe) this is directly linked to one’s own identity, income and future. Information technology is also disrupting that structure. What do I mean by that?
When we buy a washing machine, we buy a finished product. After five years of use, the machine still works just as it did when we first used it. The machine may break down at some point; it is repaired and lasts another year. The factory receives feedback from repairers and reviews from users, which benefit the development of an improved version of the machine. In the world of ICT it is the same, but different. With ChatGPT, OpenAI has offered users a product that is a 'self-learning' chatbot. The product is not finished. It is a minimal realization of a concept that develops in and through use. The users are co-developers of the system. The dialogues with the machine are stored and serve as new material from which the system learns. The development loop (design, implement, test, redesign, use, redesign) is short-circuited in open AI. This can be done using the various types of dialogue acts people use in dialogue.
In a dialogue we can distinguish different virtual dialogue 'channels', each of which has its own language and its own tone. Over the primary channel, questions are asked and answers are given. In addition, there is a feedback channel through which information is sent about the way questions and answers are valued by the speakers (including hedges). Feedback about what the recipient thinks of an answer is instructive for the questioner. The system, as it were, listens in on its own conversations, attends to 'face', and learns from them, just as people do in conversation. We see that in this phase of technical development, design, evaluation, feedback and use of the system are realized as aspects of a dialogue through interaction with the system. In ChatGPT's open information technology, the social interaction relationship and the cognitive relationship of the machine have become intertwined. In the machine, the understanding of social interaction through language is explicitly realized in a technical manner.
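To make the channel idea concrete, here is a minimal sketch in Python (my own illustration, not OpenAI's implementation): dialogue turns are annotated with a channel, so that feedback acts can be separated from the primary question-answer acts and collected as a learning signal.

```python
from dataclasses import dataclass

@dataclass
class DialogueAct:
    speaker: str
    channel: str  # "primary" (question/answer) or "feedback" (valuation)
    text: str

dialogue = [
    DialogueAct("user",   "primary",  "What is a lambda term?"),
    DialogueAct("system", "primary",  "A lambda term is an expression ..."),
    DialogueAct("user",   "feedback", "Hmm, that is not quite what I meant."),
]

# The feedback channel is what the system "listens to" about its own
# answers; here it is simply collected as new training material.
training_signal = [(a.speaker, a.text) for a in dialogue if a.channel == "feedback"]
print(training_signal)
```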
Who is the author of the conversation? Who can claim rights to the knowledge that emerges in the conversation? These are issues raised by AI technology that will change the labour structures of our society.
Death of an Author
In his article "The Future of Writing Is a Lot Like Hip-Hop" in The Atlantic of 9 May 2023, Stephen Marche reports his findings from his collaboration with ChatGPT in writing their novel Death of an Author. In that report he comments on how users ask ChatGPT things: "Quite quickly, I figured out that if you want an AI to imitate Raymond Chandler, the last thing you should do is ask it to write like Raymond Chandler." Why not? Because "Raymond Chandler, after all, was not trying to write like Raymond Chandler."
I believe this points to the core insight into why AI is not human. It behaves like humans; at least, it tries to. But humans do not try to behave like human beings. They do not even behave 'like human beings'.
What I mean is that we should see the reconstruction as a reconstruction and not as the original. The original as original disappears in and through formalization, through reconstruction. Man is not a 'social agent'; that would identify man with a certain historical form of himself. That is the core of the discussion about the various kinds of bias (gender, culture) in open AI systems such as ChatGPT.
David Gunkel has made an important contribution to the development of the understanding of technology. This allows us to take a step further in our relationship with ourselves and others.
I am grateful to David Gunkel for his open mind.
References and notes
(*) Note:
A Dutch collection of essays by Emmanuel Levinas is entitled Het Menselijk Gelaat. The Dutch words 'gezicht' and 'gelaat' are often considered synonyms. The English 'face' is a translation of both the Dutch 'gezicht' and the Dutch 'gelaat'; I sense a difference that is lost in the English translation. The French 'visage' has the same problem. You can draw a 'gezicht', depict a 'gezicht' and give a robot a 'gezicht', but not a 'gelaat'. The word 'inter-face' (also 'user inter-face'), which stands for the component of a system that ensures the technical interaction between user and system, contains the word 'face'. It is how the instrument presents itself to the user. It provides visibility into the state of the process and also includes the levers and buttons for controlling the system. The formalized 'natural language' is the interface of the chat systems. Only man has a face ('gelaat') in the Levinasian sense. Erving Goffman's sociological studies on face-work focus on 'politeness', respect for others in social interaction, and on how to make an 'agent' able to participate in social encounters.
Bruijnes, Merijn (2016). Believable suspect agents: response and interpersonal style selection for an artificial suspect. PhD thesis, University of Twente, 2016.
In cooperation with the Police Academy we analysed footage of police interrogations with real or played suspects in order to model their interactive behavior. We used the computational models to synthesize virtual suspect characters that could replace real human actors. We focused on the role of 'face' and the effects of face-threatening acts and of other factors, such as character, on the dynamics of the interrogation.
David J. Chalmers (2023). Could a large language model be conscious? "Within the next decade, we may well have systems that are serious candidates for consciousness." Boston Review, 9 August 2023.
Coeckelbergh, M., and Gunkel, D. 2023. ‘ChatGPT: Deconstructing the Debate and Moving It Forward‘ in AI & Society. Online first 21 June 2023.
Maarten Coolen (1992). De machine voorbij: over het zelfbegrip van de mens in het tijdperk van de informatietechniek. Boom Meppel, Amsterdam, 1992.
Maarten Coolen (1987). Philosophical Anthropology and the Problem of Responsibility in Technology. In P. T. Durbin (Ed.), Philosophy and Technology, Vol. 3: Technology and Responsibility (pp. 41-65). Dordrecht: Reidel.
“Information technology must be conceived of as the objectification of the modern self-concept of man as an autonomous being.”
Fleischhacker, Louk E. (1995). Beyond structure; the power and limitations of mathematical thought in common sense, science and philosophy. Peter Lang Europäischer Verlag der Wissenschaften, Frankfurt am Main, 1995.
Frege, Gottlob (1892). Über Sinn und Bedeutung. Reprinted in: Gottlob Frege: Funktion, Begriff, Bedeutung, Vandenhoeck & Ruprecht, Göttingen, pp. 40-65, 1975.
Frege introduces the term 'Gedanke' for the content of an assertoric sentence: "Ein solcher Satz enthält einen Gedanken." "Ich verstehe unter Gedanke nicht das subjektive Tun des Denkens, sondern dessen objektiven Inhalt, der fähig ist, gemeinsames Eigentum von vielen zu sein." (Footnote, p. 46).
"Warum genügt uns der Gedanke nicht? Weil und soweit es uns auf seinen Wahrheitswert ankommt. Nicht immer ist dies der Fall. Beim Anhören eines Epos z.B. fesseln uns neben dem Wohlklange der Sprache allein der Sinn der Sätze und die davon erweckten Vorstellungen und Gefühle. Mit der Frage nach der Wahrheit würden wir den Kunstgenuss verlassen und uns einer wissenschaftlichen Betrachtung zuwenden." (Frege, Über Sinn und Bedeutung, 1892, p. 48)
Whether it is 'gleichgültig' (a matter of indifference) to us that ChatGPT presents us with a reality that is true or not depends on whether we regard this machine as a work of art or 'scientifically'.
Goffman, Erving (1967). Interaction Ritual: Essays on Face-to-Face Behavior.
Gunkel, David J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.
Gunkel, David J. (2014). The Rights of Machines: Caring for Robotic Care Givers. Presented at AISB 2014. Chapter in the Intelligent Systems, Control and Automation: Science and Engineering book series (ISCA, volume 74).
Gunkel, David J. (2017). The Other Question: Can and Should Robots have Rights? In: Ethics and Information Technology, 2017.
Gunkel, David J. (2023). Person, Thing, Robot: A Moral and Legal Ontology for the 21st Century and Beyond. MIT Press, open access, September 2023.
“Ultimately, then, this is not really about robots, AI systems, and other artifacts. It is about us. It is about the moral and legal institutions that we have fabricated to make sense of Things. And it is with the robot— who plays the role of or occupies the place of a kind of spokesperson for Things— that we are now called to take responsibility for this privileged situation and circumstance.” (p. 184)
Georg W.F. Hegel (1969). Jenaer Realphilosophie: Vorlesungsmanuskripte zur Philosophie der Natur und des Geistes von 1805-1806. Edited by Johannes Hoffmeister, Verlag von Felix Meiner, Hamburg, 1969.
Heidegger, Martin (1977). The Question Concerning Technology and Other Essays. Trans. William Lovitt. New York: Harper & Row (1977).
Technology is not a technique; it is a way of 'Entbergen', of revealing.
Hollak, J.H.A. (1968). Betrachtungen über das Wesen der heutigen Technik. Kerygma und Mythos VI, Band III, Theologische Forschung 44, Hamburg, Evangelischer Verlag, 1968, pp. 50-73. This is the translation of the Italian article (Hollak 1964). Also included in the collection Denken als bestaan: het werk van Jan Hollak (Hollak en Platvoet, 2010).
Hollak, J.H.A. (1964). Considerazioni sulla natura della tecnica odierna, l'uomo e la cibernetica nel quadro della filosofia sociologica. Tecnica e casistica, Archivio di Filosofia, 1/2, Padova, 1964, pp. 121-146, discussion pp. 147-152.
Hollak, Jan and Wim Platvoet (eds.) (2010). Denken als bestaan: Het werk van Jan Hollak. Uitgeverij DAMON, Budel, 2010. This collection contains the transcript of the recording of the valedictory lecture on the hypothetical society that Jan Hollak gave in Nijmegen on 21 February 1986, as well as the inaugural lecture Van Causa sui tot Automatie.
Levinas, E. (1987). Collected Philosophical Papers. Trans. Alphonso Lingis. Dordrecht: Martinus Nijhoff.
Levinas, E. (1971). Het menselijk gelaat. Translated and introduced by O. de Nobel and A. Peperzak. Ambo, Bilthoven, 1969. Contains: Betekenis en zin, pp. 152-191. (Translation of La signification et le sens, in: Revue de Métaphysique et de Morale 69 (1964), 125-156.)
Levinas, E. (1951). L’ontologie est-elle fondamentale? In: Revue de Métaphysique et de Morale 56 (1951) 88-98.
"Can things have a face? Isn't art an activity that gives things a face?" Levinas asks this question in his essay "Is ontology fundamental?". Completely in the spirit of Levinas, the answer must be negative. For who can give a face to things that have no face of themselves? The face is not something that appears to us after meeting the other. You do not give a face; as if we had the power to do that! The face means resistance to the power of technology, which wants to give things a face for the sake of its functioning, a deception. Because without the suggestion that it means something to people, the machine does not work.
Toivakainen, Niklas (2015). Machines and the face of ethics. In: Ethics and Information Technology, Springer, 2015.
“…my concern here is with why we aspire to devise and construct machines that have a face, when the world is filled with faces.”
I agree with Niklas Toivakainen when he says “that understanding our ethical relationship to artificial things cannot be detached from a critical examination of what moral dynamics drives our technological aspirations.” (Toivakainen, 2015).
Is it because we no longer have time ourselves to engage with the elderly that we make social robots, artificial parrots, seals and dogs for that purpose?
Albert Visser (2020). Meeting on neutral ground. A reflection on man-machine contests. In: Studia Semiotyczne (Semiotic Studies) t. XXXIV, nr. 1 (2020), pp. 279-294.