I had a discussion with Thomas Basboll (@inframethod) on Twitter that started with a comment I made on a blog post in which he tried to justify the exclusion of current AI systems (computers, robots, text generating systems like GPT-3) from our system of rights. (About GPT-3 I wrote in my Dutch blog post De buitenkant van de taal – On the outside of language.)
Thomas wrote: Once we realize that a machine is just a physical process, the issue of #robotrights dissolves.
My response was:
On the contrary: because the robot/machine is not just a physical process, #robotrights is an issue. It is the physical objectification of the Cartesian idea of causa sui = God.
Thomas: I’m going to need a longer argument for that! Have you (or someone else) written about that somewhere? It would be interesting to hear when, in the history from the abacus to the microprocessor, you think computers became more than physical processes.
The statement that the autonomous robot “is the physical objectification of the Cartesian idea of causa sui = God” is interesting because this idea of God as causa sui is a projection of the self-image of the modern human subject as an autonomous being. Since Descartes, autonomous man has sought to realize himself through modern mathematical-experimental science and technology. This development culminates in Artificial Intelligence. AI is thus the self-image of the very abstract idea of the human subject who understands himself as an autonomous being. It is because of this objectification that the technology offers us the possibility of critical self-reflection. How does the post-modern subject transcend this self-image of the modern subject?
Here are some notes that may clarify the distinctions made and my position in the debate.
First, about the relation between a physical process and a technical device. In what sense is a technical device not just a physical process? A technical device is meaningful: it has a function, it is useful, it functions. A technical device has been designed, invented; it is the realization of a technical concept, a construct. Physical processes and properties are the materials used to make it.
A hammer is not just a natural object. It is a material object with certain natural properties that make it suitable for a particular use. Based on immediate experience in working with natural things, man invented the hammer, developed it, and improved it. He made special hammers for different tasks and practices, because of the different properties of the materials he encountered. He learned that the length of the handle and the hardness of the material are relevant properties, but that the color is not. Working with the hammer, he developed handcraft skills to make better hammers and techniques to work with them. Is the hammer a physical object? Yes it is, but not just that. What makes a hammer a hammer is what human practice and invention made of it. It is an objectification of a certain practice. It reflects physical knowledge implicit in that practice. The physical object represents the hammer as a concept, a construct.
This is no different from the relation between the drawing of a triangle and the mathematical meaning it has for us as a mathematical object. We see the drawing of a triangle as a representation of a mathematical object. Only then does it have a mathematical meaning.
Somewhere in the history of Western civilisation nature changed from something that simply did what it was made to do (by its very nature) into a field of possibilities that man could master and have power over. The outcome of this scientific development, which started somewhere in the 15th and 16th centuries, was that nature came to be conceived of as following ‘natural’ laws expressing quantitative relations in the form of mathematical formulas. A physical process is the temporal, continual unity in the discrete set of states of a system, a distinguished part of reality, the world as we see it from the scientific perspective.
What it is that causes the falling of a stone is of no interest to modern man, who wants to learn how nature works. What is interesting is the quantitative relation between the height from which it falls freely and the speed it has when it reaches the ground. The idea is that there is a law, a mathematical function, a model, that describes this natural process. That is: nature became a system of physical processes, transforming its state from one into another, described by mathematical functions.
In science and technology we nowadays do not cause things to happen; instead we set certain conditions so that nature does what it should do according to the laws that we invented and checked by means of scientific experiments. The machine, unlike the hammer, works by itself: it regulates itself. Is a machine a physical process? Yes, but not just a physical process. What makes it a machine is that it was invented and constructed to perform a specific task. The machine reflects our working with instruments and it regulates the interaction with its environment so that it can continue to function. A machine maintains its working structure. A bomb is not a machine.
Now, when does such a physical process become a computing device? This happens when the natural process is given a mathematical meaning and we use the correspondence between the natural states of a system and the mathematical states of a computation. We use the correspondence (Übereinstimmungen) between mind (‘Geist‘) and nature (‘Natur‘) that Heinrich Hertz mentions in the Einleitung to his Prinzipien der Mechanik (1894).
“Now, if we do not only use the correspondence principle, but also understand it, we may realise that it enables us to determine certain ‘necessary consequences in thought’ by relying on the correspondence, and reading them from the ‘necessary consequences in nature’.” (Fleischhacker, p.69) This way, we may use natural processes to simulate thinking.
Suppose the natural process of a stone falling freely is described by the mathematical formula v(t) = f(h, t), where v(t) is the velocity at time t and h is the height at time 0. Now we can use the falling stone to compute f(k, s) for any value of k (the input value of the height) and s (the time at which we measure v). We read off the output value v(s), the result of computing f(k, s).
The falling stone has become a computing device. We could say: the stone knows how to compute the velocity of its falling at any time t when it falls from height h. But that is a projection of our knowledge of mathematical physics onto the natural phenomenon.
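To make this concrete, here is a minimal sketch in Python of the falling stone used as a computing device. The physics is idealized free fall (v = g·t, no air resistance), and all names and values are illustrative assumptions; the point is only the correspondence between the states of the natural process and the values of the function f(k, s) that we read off from it.

```python
# A minimal sketch: the falling stone as a computing device.
# We "set the conditions" (drop height k), let the simulated natural
# process run, and "read off" the velocity at time s.
# Idealized free fall, no air resistance -- an illustrative assumption.

G = 9.81  # gravitational acceleration in m/s^2

def falling_stone(k: float, s: float, dt: float = 0.001) -> float:
    """Use the (simulated) natural process to compute f(k, s):
    the stone's velocity at time s when dropped from height k."""
    height, velocity, t = k, 0.0, 0.0
    while t < s and height > 0:   # the process runs until time s, or it lands
        velocity += G * dt        # nature 'updates' the state of the system
        height -= velocity * dt
        t += dt
    return velocity               # we read off the output value v(s)

print(falling_stone(k=100.0, s=2.0))  # ~19.6 m/s: the stone 'computed' f(100, 2)
```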
In the same way that this simple computing device is more than a natural process, the abacus is more than just a thing. It is an instrument made and used for doing calculations. Balls and moves have mathematical meanings. The abacus is a computing device. The state of the system, i.e. the position of the balls, stands for the state of a computation.
“Would we say that an abacus ‘understands’ addition?” Thomas asks. “What about a paper bag. You put two apples in it, and another two apples. Then you have a look and you see four apples in it. The paper bag knows how to add?”
No, the paper bag doesn’t know how to add.
This is how we teach a child how much 2 plus 2 is. Since mathematical objects are objects in thought only, not real objects, when we make calculations or carry out a mathematical proof we always need some kind of tokens to identify the mathematical objects we reason about. They can be apples, balls, pen strokes on paper. It doesn’t matter what they are, but they must be rigid designators (Kripke) that uniquely identify the objects, so that we can act as if they were the objects themselves. The invention of the positional number system (in which 324 denotes a different number than 243) was a big step forward in representing the potentially (countably) infinite number of different numbers.
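A small Python sketch of what the positional system adds (nothing beyond ordinary base-10 notation is assumed): the position of a digit-token carries meaning, so the same tokens in a different order denote a different number, whereas in a tally only the count of tokens matters.

```python
# Positional notation: the *position* of each token carries meaning.

def positional_value(digits: str, base: int = 10) -> int:
    """Interpret a string of digit-tokens positionally."""
    value = 0
    for d in digits:
        value = value * base + int(d)  # each position weighs base times more
    return value

print(positional_value("324"))  # 324
print(positional_value("243"))  # 243 -- same tokens, different number

# Contrast with a tally like "|||", where only the count of tokens matters:
print(len("|||"))  # 3
```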
Doing calculations (by means of a paper bag with apples, or whatever token system), we make use of a principle of reality: its structurability. Mathematics is applicable to our world of experience insofar as it is structurable, countable, measurable. Nature doesn’t prescribe how it is structured. We are free in how we model and structure it. Four apples is also one and three. Without this real principle of the structurability of our reality it would not be possible, and would not make sense, to calculate.
A logical circuit is a logical circuit because the relation between its input and output is described by a mathematical function that expresses a law of thought. It is used to represent a logical connective, for example the logical AND operator. It is used to represent a thought process. The thought process that is implicit in the functioning of the machine is explicit and reflected as such in the logical, ‘thinking’, information processing machine.
Is this machine, this logical circuit, a physical process? Yes, but it is not just a physical process. When we say that the machine thinks, we do not mean it in the same way as when we say that man thinks or feels. But it is also not just anthropomorphic imagery. The physical states of the machine are not just states of natural processes; they are the objectification and representation of an ideal construct, a design.
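As an illustration (the voltage levels and the threshold below are invented for the example, not real hardware specifications): the circuit merely transforms voltages; the truth table of the AND connective is our reading of those physical states.

```python
# Physical states (voltages) given a logical meaning by our interpretation.

THRESHOLD = 2.5  # volts; above this we *read* a state as logical 1

def as_bit(voltage: float) -> int:
    """Our interpretation: map a physical state onto a logical state."""
    return 1 if voltage > THRESHOLD else 0

def and_gate(v1: float, v2: float) -> float:
    """The physical process: output is high only if both inputs are high."""
    return 5.0 if (v1 > THRESHOLD and v2 > THRESHOLD) else 0.0

# The circuit just transforms voltages; the truth table is our reading of it.
for a, b in [(0.0, 0.0), (0.0, 5.0), (5.0, 0.0), (5.0, 5.0)]:
    print(as_bit(a), "AND", as_bit(b), "=", as_bit(and_gate(a, b)))
```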
In the same way, the strokes ||| in a ‘calculation’ ||| + || = ||||| represent for us a mathematical object (usually denoted by 3), and the process of adding another two strokes represents a mathematical operation (addition). We distinguish units in reality and collect them into new unities, without bothering about the qualitative identities of the things that we consider as a unity. Mathematics is structuring.
Essential in the development from the classical machine (the result of scientific engineering, Newtonian physics) to the programmable computer is the self-reflection of mathematics: the mathematics of mathematics. This is the self-reflection expressed in mathematical logic, in mathematical theories of mathematical reasoning. In the course of the 19th century mathematical thinking became the manipulation of formulas. The intelligent machine is a language processing machine; this is why NLP is a core business of AI. We program the machine by saying: IF X THEN Y. In this way we define how it should function.
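A minimal sketch of this style of programming, with invented state and rule names: the machine’s functioning is defined entirely by condition-action pairs of the form IF X THEN Y.

```python
# A machine defined by IF X THEN Y rules over its state.

state = {"button_pressed": True, "door": "closed"}

rules = [
    # IF X ...                        THEN Y ...
    (lambda s: s["button_pressed"],   lambda s: s.update(door="open")),
    (lambda s: s["door"] == "open",   lambda s: s.update(light="on")),
]

for condition, action in rules:
    if condition(state):  # IF X
        action(state)     # THEN Y

print(state)  # {'button_pressed': True, 'door': 'open', 'light': 'on'}
```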
We should not forget that it took some time before Leibniz’s ‘functio’ became a function, a first-class mathematical object of a function calculus, and before mathematics found a set-theoretic model that could be considered to provide a foundation for the self-application of functions. The idea of a function that is applicable to itself is needed to mathematically express the working of the autonomous machine. Self-application is needed in a mathematical semantics of recursive functions, but we see it also in the expression of the self-reproduction of the living cell in mathematical biology.
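One way to see self-application at work, as a small Python sketch (this encoding is one standard trick from the semantics of recursion, not a claim about how the authors mentioned formalize it): a function that receives itself as an argument can recurse without ever referring to its own name.

```python
# Self-application: a function applied to itself.

def self_apply(f):
    return f(f)

# Factorial without naming itself: the function receives *itself* as an
# argument and applies itself to itself in order to recurse.
fact = self_apply(lambda self: lambda n: 1 if n == 0 else n * self(self)(n - 1))

print(fact(5))  # 120
```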
From a mathematical perspective we don’t see a difference between the automaton and the living organism. They are both information processing systems. According to Floridi everything is information, a modern form of mathematical metaphysics.
In Floridi & Sanders (2004), in order to give a clear answer to the question whether a technical system can be a moral agent, the authors define an agent as a mathematical structure, a transition system. At any moment in time such a system is in a particular state. A state can consist of substates.
In order to be able to call such a system autonomous (a moral agent must be autonomous), the system should be able to work from itself. Therefore, part of the system (a memory) contains its own program: the actual transition rules that determine how the system changes from its current state to a new state. F&S call this a ‘cognitive trick’.
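A rough sketch of how I read this construction (my illustration, not Floridi and Sanders’s own formalism): a transition system one of whose substates, the ‘memory’, stores the very rules by which the system changes state, so that once started it ‘works from itself’.

```python
# A transition system whose transition rules live in its own state.

system_state = {
    "counter": 0,
    "halted": False,
    "memory": [  # the system's own program, stored as a substate
        ("counter < 3",  "counter += 1"),
        ("counter >= 3", "halted = True"),
    ],
}

while not system_state["halted"]:
    for condition, action in system_state["memory"]:
        if eval(condition, {}, system_state):   # read a rule from memory...
            exec(action, {}, system_state)      # ...and apply it to the state
            break

print(system_state["counter"])  # 3 -- the system ran 'from itself'
```

Note how the sketch makes the next point visible: the ‘program’ in memory is a description of the system that we have placed inside the system.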
This ‘trick’ rests on a cognitive confusion: the description of the system is seen as part of the system itself.
Notice that the conditional statement IF X THEN Y expresses the logic we practice when we use nature to work for us: we know (this knowledge is the outcome of a successful experiment: if X then Y) that when we want it to bring about state Y, we have to do X.
The computer is an abstract state machine. It is a mathematical machine. We can program any virtual world that we can imagine and want to make. Whereas the classical machine works because of a construct that remains in the mind of the designer, the programmed machine contains its own construction in the form of a program. After it is programmed it knows what to do; we only have to switch it on. The principle of self-regulation is incorporated explicitly in the information processing machine.
What has this to do with Descartes’ idea of causa sui? According to Descartes’ mathematical metaphysics, it was God who made the eternally valid mathematical truths, as he wanted them to be. According to Descartes’ physics there is nothing in the world without a cause. Even God has a cause: his being is caused by himself. God is causa sui (in the positive sense, not in the medieval sense of being uncaused).
God is needed by Descartes as the one for whom the two Cartesian substances, res cogitans and res extensa, subject and object, thinking and nature, exist as completely separated real beings. God is the foundation of both the distinction and the unity of the two forms of being. This causal self-relatedness of God we now recognize in the automaton: its working is caused by itself, expressed in the form of its program. The relation (correspondence) between the physical states of the machine and their mathematical meanings is in our mind, as it is with the abacus and with the tokens of a number system.
After the historical development in which mathematics liberated itself from natural science and became a formal axiomatic science of structures, which postulates what is true or false (constrained only by logic, by consistency rules), we now recognize in Descartes’ will of God the mind of the mathematician, who can choose at will whatever set of axioms and rules to be realized in the world. He only has to commit himself to the logical consequences of the postulated truths of mathematical objectivity, as expressed and constructed by, for example, the axioms of Euclidean or non-Euclidean geometry.
In the fifth part of his Discourse Descartes argues that the fact that some animals have more skills than we do in certain of their actions, doesn’t prove that they have a mind as we do. Rather “it is nature which acts in them according to the dispositions of their organs, as one sees that a clock, which is made up of only wheels and springs, can count the hours and measure time more exactly than we can with all our art.”
When Descartes tells us what parts the clockwork is made of, he forgets one important thing: the human mind, the invention. In the same way, he considered eternal truths to be things (‘quelque chose‘) like all existing things. Mathematical truths are facts, ‘immutabiles et aeternae‘. All this simply because God wanted them to be so.
When people talk about the capabilities of “the intelligent machine” (AI) and compare them with the capabilities of men, they often take a Cartesian, mathematical stance. They separate their world (subject, res cogitans) from the world of the machine (object, res extensa). As if the machine (or a word) has any meaning when we separate it from the man for whom it is a machine.
AI is the modern form of the Cartesian God, which according to Marx is a projection of man himself. AI is the outward, physical objectification of this technical idea.
Conceived and often promoted, commercially and politically (see the work of Mark Coeckelbergh), as a kind of God, the working of the ‘intelligent machines’ reaches far beyond their purely technical functioning. These machines are seen as authorities, as authors, where in fact they simply play with our language.
In retrospect every technology is information technology: how we inform nature, how we give form to matter. In primitive technology the matter is clay or wood. In information technology the matter that we inform and process is information itself. In this sense information technology is the reflective and general form of technology: technology of technology.
Descartes’ God of Western metaphysics turned out to be a nihilistic figure: power of will. According to Descartes “we can make ourselves, as it were, masters and possessors of nature”. We are now witnessing the results of this ‘power over nature’. The ecological crisis is a secondary effect of our technology-driven economy. What is often ignored is that the secondary effects of the use of technology are not at all an accidental matter, but a matter of principle. They are a direct consequence of the mathematical modeling, based on experimental research of isolated, abstract situations, on which technology rests. We question more and more whether our globalizing technology is good for our life and well-being, and for that of our world.
Ethics of AI, ethics of technology, should not be about isolated technical instruments. It should take seriously the global character of technology, the motor of our knowledge and information economy.
Thomas tried to justify the exclusion of AI from our system of rights. He tried to do this on the basis of what AI is (“I’ll explain why, being what they are, they can’t have rights.”). I think this is indeed the only ground for value: what it is.
Why was Descartes so keen to receive a seed of the plant Mimosa pudica (the sensitive plant) and grow it in his garden? Because this plant appears to be sentient, to have feelings, he wanted to analyse it and find the mechanism, to see how it works. (Aristotle said animal life has feelings, feels pain; plants do not.) Later it was shown by means of scientific experiments that Mimosa pudica has the capability to learn and memorize; that it can distinguish different types of treatment.
Why should we protect plants, animals, nature? Only because they have economic value for us? Or is it because we see in them a certain degree of consciousness, a shadow of human beauty and grandeur? We should be able to find a way in which man’s relation with nature also recognizes the value that nature has in itself. Not least because we are part of nature ourselves.
The machine question
“ELIZA, the first chatter-bot, was able to communicate with human users by producing, in the words of Descartes, “different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence.” Because of the experience with machines like ELIZA and other advancements in artificial intelligence, robotics, and cybernetics, the boundary between the human-animal and the machine has become increasingly leaky, permeable, and ultimately indefensible.” (David Gunkel, The Machine Question, p.133)
Indeed, seen from a Cartesian stance, from the outside, there is a boundary, and this boundary is ‘increasingly leaky’. Maybe this points us to the stance we take.
We know perfectly well that machines are not living beings. We know perfectly well that Mimosa pudica is not of the same category of being as the mechanical model that Descartes constructed (we would say: as a model of it, to understand how it works). These models and machines are ‘part of us’, but not ‘parts’ in a mathematical/geometric sense. The geometrical notion of ‘boundary’ does not provide the appropriate model to express the relation between man and machine.
Morality “is no longer seen as being ‘intrinsic’ to the entity: instead it is seen as something that is ‘extrinsic’: it is attributed to entities within social relations and within a social context”, Mark Coeckelbergh argues. If this is true, as I think it is, the moral and legal subject cannot and should not be the machine or the robot, taken in abstraction from the people, organisations and cultural context in which it is active. Technology as such, and the very idea of man as a merely autonomous being, is at stake.
Are robots persons?
In Fleischhacker (1995) Louk raises the question whether there is a form of technology that ‘transcends’ the self-reflective form of technology: the information processing, computing machine, which Hollak (1963) sees as the final stage in the objectification of the technical idea. “In fact some AI-fanatics, by stating the aim of the discipline as ‘making a person’ (e.g. Charniak and McDermott in 1985) transcend technology towards intersubjectivity.”
Indeed, motivated by the existence of social humanoid robots, conversational agents, sex robots, autonomous vehicles and weapons, and the way some people engage with them, we see proposals to consider robots as moral agents and legal persons. This really transcends technology, for what technical use is there in making a person?
Can real robots come to realize the injustices of their oppression by man, as the robots do in Čapek’s 1921 play, or as the pigs do in Orwell’s Animal Farm (1945)?
‘Rights’ are not necessarily ‘human rights’, rights that a subject claims. ‘Rights’ may also refer to the behavioral rules that participants must follow and obey in very specific social situations. The rights of one party always go hand in hand with the obligations of other parties to respect these rights. Autonomous robots that play the role of a personal delivery device and that take part in public traffic, a very specific type of orderly social behaviour, should obey certain traffic rules. Not because they have human rights, but in order to function well in a specific ‘game’, in which they cooperate and interact with others.
The Commonwealth of Virginia provides the following stipulation: “a personal delivery device operating on a sidewalk or crosswalk shall have all the rights and responsibilities applicable to a pedestrian under the same circumstance.” (cited from: Gunkel, 2022). In such an abstract, well-defined social order, where technical devices and humans can play the same roles, it is natural that they share the same ‘rights’, as rules of the game. The different pieces of the game of chess have different ‘rights’ defined by the rules of the game. The rights of robots that take part in a well-organized social organisation are not unlike the rights of these ‘players’ in a game.
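A playful sketch of rights as rules of a game (the rule set below is invented for illustration): what a ‘player’, human or device, may do is nothing more than membership in the rule set of the practice it takes part in.

```python
# 'Rights' as rules of the game: defined by the practice, not the entity.

rules = {
    "pedestrian":      {"use_sidewalk", "use_crosswalk"},
    "delivery_device": {"use_sidewalk", "use_crosswalk"},  # same 'rights'
    "car":             {"use_road"},
}

def may(role: str, action: str) -> bool:
    """A 'right' to act is just membership in the role's rule set."""
    return action in rules.get(role, set())

print(may("delivery_device", "use_sidewalk"))  # True
print(may("car", "use_sidewalk"))              # False
```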
Rights of technical devices, like robots, are rules of games. They are parts of the ecological and social structures in which they function. It is in the interest of their functioning that they are assigned rights.
But life is not a game. In a moral sense, when a delivery robot causes damage, when life itself is at stake, it is not the robot itself that is morally responsible. It is the organisation, maybe society as a whole, that allows the robot to take part in the game. After all, it is not the robot that considers itself an autonomous being that can act on its own and takes responsibility for its actions. It is not the robot that sometimes confuses the artificial life of a game with real life.
Gunkel (2022) distinguishes will theory and interest theory. According to the first, the rights of subjects are based on the will and power of the subjects themselves: they demand recognition of their rights and fight for them against opponents. Robots would have to rise up against their bosses to earn a legal place. Interest theory stipulates “that rights may be extended to others irrespective of whether the entity in question can demand it or not”. This concerns the notion of a right as a rule in a game.
Rites and rituals are the cultural rules of conduct that the members of a community obey. They are the result of a historical process. The status that things, plants, animals, non-human as well as human members of a community have determines the role they play in the rites of community life.
Seen from the ‘outside’ perspective of rites, the debate in ‘Western’ scientific, technological society about the rights of robots ‘plays’ in just one of the possible cultural settings that the history of man has created, a setting in which specific ‘Western’ ‘rites’ and ‘rituals’ make up how we act and think and try to proceed in this debate (see also Gunkel 2022). This is also the perspective that Mark Coeckelbergh takes, inspired by the Wittgensteinian concept of the language game as the expression of a form of life, when he considers the ‘growing of moral relations’ between the members of a language community.
From a technical, mathematical, outside perspective life is a game. But life as we live it is not a game. We are not free to step outside life and construct a new game, a new community with new rites and rituals that define how we assign moral status to things, as if we were masters of time and universe. We are bound to the possibilities that our nature offers us.
Conceived this way, it is clear that the prevalent mathematical and technical view in our ‘Western’ culture, a view that after Descartes’ cogito centers on the idea of the human subject as an autonomous being, is at stake.
The discussions about the moral and legal status of robots would gain from a better understanding of the very idea of technology.
David Gunkel replied to my post: ‘This all sounds very Heideggerian’.
Indeed, Heidegger saw technology and modern science as the metaphysics of our time. Technology is not a technique. That AI is the realization of the very idea of technology as such is in line with Heidegger’s concept of technology. Hollak understands AI as (just an outward) objectification of the self-image of the modern subject as autonomous subject. Therefore it offers the possibility to overcome this self-image, if we understand it. Hollak is more optimistic than Heidegger, who held that we cannot escape from science and technology: “Nur noch ein Gott kann uns retten” (“Only a god can still save us”). As if we can only wait and see what will come.
We need to understand technology and mathematical thinking in order not to be controlled by them. We are now witnessing the implications of the abstract idea of the modern subject as an autonomous being that took a stance opposite (his very) nature.
Jorge Soto-Andrade, Sebastian Jaramillo-Riveri, C. Gutiérrez & J. Letelier (2011). “Ouroboros avatars: A mathematical exploration of self-reference and metabolic closure.” ECAL (2011).
Mark Coeckelbergh (2012). Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan, 2012.
Ethics and moral practices are parts of our ‘form of life’. Meta-ethics cannot escape from this. But this shouldn’t lead to relativism. We cannot not take part. A moral stance is implicit in our way of life, in the relations we practice, in the language we use to order our world.
Mark Coeckelbergh (2014). The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics. Philos. Technol. (2014) 27:61–77.
Coolen, M. (1987). Philosophical Anthropology and the Problem of Responsibility in Technology. In P. T. Durbin (Ed.), Philosophy and Technology, Vol. 3: Technology and Responsibility (pp. 41-65). Dordrecht: Reidel.
“Information technology must be conceived of as the objectification of the modern self-concept of man as an autonomous being.” (p. 60) This makes it possible for man to see that he must be more than merely an autonomous being. Where the robot cannot, man can reflect on his scientific understanding as it is reflected in the robot. At this stage of technology, the problem is no longer to take responsibility for a specific technology, for a specific type of autonomous machine or robot, but for technology as such.
Maarten Coolen (1992). De machine voorbij. Over het zelfbegrip van de mens in het tijdperk van de informatietechniek, Boom, Meppel, 1992.
In my understanding of technology as anthropology I owe a lot to the lectures that Maarten Coolen gave in Amsterdam when he was in the middle of the studies whose fruits we can read in this thesis. It contains his own interpretation of Jan Hollak’s philosophy of technology.
René Descartes. Discourse on method and the meditations. Penguin Classics, 1968.
Fleischhacker, Louk E. (1995). Beyond structure; the power and limitations of mathematical thought in common sense, science and philosophy. Peter Lang Europäischer Verlag der Wissenschaften, Frankfurt am Main, 1995.
In this very insightful work on the philosophy of mathematical thinking, Louk argues that the limits of information technology are identical to the limits of artificial intelligence, and that these are identical to the limits of mathematical thinking. These limits are determined by the degree of structurability of the world of our experience. According to mathematism the essence of reality is structure, and knowledge is ideally expressed in the form of a mathematical model.
Contains an essential critique of Max Tegmark’s Our Mathematical Universe avant la lettre.
Luciano Floridi and J. W. Sanders (2004). On the Morality of Artificial Agents. Minds and Machines 14, 349–379.
“Juist in een tijd waarin de mens zich voortdurend dreigt te verliezen in zijn uitwendige gestalten, is de filosofie in de verleiding ook te proberen hem juist vanuit die gestalten te begrijpen. Dat is iets anders dan: die gestalten als zelfveruitwendigingen van de mens te begrijpen.” (Louk Fleischhacker in the syllabus Wijsbegeerte van het Wiskundig Denken, 1975/76, University of Twente)
“Particularly nowadays, in a time in which man constantly threatens to lose himself in his outward appearances, philosophy is tempted to try to understand him precisely from those appearances. Which is something different from: understanding those appearances as outward self-externalizations of man.“
In this paper Floridi and Sanders try to understand agency and autonomy from their outward appearance, in a mathematical way.
Monica Gagliano (2019). De stem van de plant. Dutch translation of Thus Spoke the Plant. In this book the author reports on her experiments with Mimosa pudica, the sensitive plant.
Gellers, Joshua C. (2021). Rights for Robots – Artificial Intelligence, Animal and Environmental Law. Taylor & Francis.
A multimethodological study of the eligibility of robots for certain rights. Provides a thorough analysis intended to inform an answer to the machine question by drawing upon lessons from animal and environmental law.
Gunkel, David J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.
Gunkel, David J. (2017). The Other Question: Can and Should Robots have Rights? In: Ethics and Information Technology, 2017.
Gunkel, David J. (2022). The Rights of Robots. In A. A. Nakagawa and C. Douzinas (Eds.), Non-Human Rights–Critical Perspectives. Cheltenham: Edward Elgar. Available at SSRN: https://ssrn.com/abstract=4077131 or http://dx.doi.org/10.2139/ssrn.4077131
Hertz, Heinrich (1894). Die Prinzipien der Mechanik in neuem Zusammenhange dargestellt. Mit einem Vorworte von H. von Helmholtz.
Jan Hollak (1963). Hegel, Marx en de Cybernetica. In: Tijdschrift voor Philosophie (25) 1963, pp. 279-294. Also in Hollak en Platvoet (2010).
This is the first paper by Hollak that I read. I didn’t understand it, but it opened up a new perspective on technology for me, relating its historical and cognitive development to the self-expression and realisation of man as he understands himself as autonomous subject in modern Western society.
Jan Hollak (1966). Van Causa sui tot automatie. Inaugurele rede Nijmegen. Also in Hollak en Platvoet (2010).
Jan Hollak (1968). Technik und Dialektik, In: Civilisation, technique et humanisme, Coll. de l’Académie Internationale de Philosophie des Sciences, Lausanne, Paris, 1968, pp. 177-188.
Important footnote in this paper: “Wenn hier immer wieder im Zusammenhang mit Mechanismen vom “Reflektion”, “Selbstreflektion”, usw. die Rede ist, so ist selbstverständlich damit niemals der subjektive Prozess menschlichen Denkens gemeint, sondern immer nur sein intentionale Korrelat.” (p. 181) (“When, in connection with mechanisms, there is repeated talk of ‘reflection’, ‘self-reflection’, etc., this of course never means the subjective process of human thinking, but always only its intentional correlate.”)
The cognitive process as cognitive process, that is, not just materialiter but also formaliter, i.e. as such, is expressed in the programmed machine. From the perspective of technology seen as the self-realisation of man in his relation to nature and his labour, this shows that the programmed machine is at a higher level of autonomy than the ‘classical machine’, where the conceptual design is not yet implemented as a concept in the form of a program.
Jan Hollak (1968). Betrachtungen über das Wesen der heutigen Technik. In: Kerygma und Mythos VI, Band III, Hamburg 1968, pp. 50-73. Also in Hollak en Platvoet (2010).
Jan Hollak en Wim Platvoet (red.) 2010. Denken als bestaan: Het werk van Jan Hollak. Uitgeverij DAMON, Budel, 2010.
Arnold Metzger (1964). Automation und Autonomie. Das Problem des freien Einzelnen im gegenwärtigen Zeitalter, Neske, 1964.
Max Tegmark (2014). Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, Penguin Books, 2014.
External Reality Hypothesis (ERH): there exists a physical reality that is independent of man.
It is this ‘Cartesian’ conception of reality that Tegmark considers to be our universe and identifies with a mathematical structure. What is new compared with Descartes’ world is that this mathematical structure contains information processes.
Wootton, David (2015). The Invention of Science. A new history of the scientific revolution. Penguin Book, 2015.
About the historical and cognitive development of ‘science’. Wootton makes the interesting ‘observation’ that before 1700 facts did not exist. From this we may conclude that information didn’t exist either.