Prof. D.C. Dennett and the Power of the Computer

D.C. Dennett is a scientific philosopher. He is deeply concerned with questions about the human mind, consciousness, free will, and the status of man and machine in the world of creatures. The stored program computer is a key concept in his philosophy. (For a bibliography see
https://adamwessell.wixsite.com/bioml/daniel-dennett .)

“Thinking is hard.” We think a lot, but we often follow paths that lead us away from the truth. In “Intuition Pumps” (2013) Dennett collected a large number of stories and thought experiments that he developed in order to think properly and find answers to nasty questions. Reading the book is a good way to introduce yourself to the rich world of one of the most important thinkers of today. I have followed Dennett since he published The Mind’s I (1981) together with D. Hofstadter. (I got a copy from my students when I left the high school, where I taught mathematics and physics, to return to the university.)

As I said, the computer plays a key role in Dennett’s thinking. I always felt that there was something wrong with the way he thinks about the computer, but it was hard for me to pin down what exactly it was and, most importantly, how his idea of the computer fits into his philosophy. In this essay I try to explain where I believe Dennett misses an important point when he explains “where the power of the computer comes from”. As we will see, this has consequences for the way we answer questions about the possibility of artificial intelligence, and for what we mean by that term. Yet I could not agree more with Dennett’s position on the important practical and moral issues of how we have to deal with what we conceive of as “autonomous” and “intelligent” systems.

According to Dennett we do not need “wonder tissue” to “explain” the workings of the human mind. When we understand what a computer can do, and when we see how computers work, we will eventually see that we do not need to rely on “magic” to understand the human mind. (The atheist Dennett is very sensitive to every argument that smells of a religious source. Which is a good thing.)

He explains to his students how the computer works in order to unveil the secrets of the power of the machine. By showing the students where the power of the computer comes from, he tries to make clear that the evolution of the machine eventually leads to a computer that equals the power of the human mind.

Where does the power of the computer come from? Or how does a computer work?

From the time I was a student (I studied mathematics and computer science in the 70s at the University of Twente in the Netherlands) this question kept me busy. How do we have to think properly to find an answer? I read many texts that describe the working of the computer. I taught students how to program computers in various types of programming languages. I taught them, in courses called “Compiler Construction”, how to implement higher-level programming languages. I programmed computers so that people could have a conversation with the computer in Dutch or English. I gave courses in formal language theory, mathematical logic, computability theory, machine learning, and conversational analysis. I taught my students to program a Universal Turing Machine or a Register Machine, the basic mathematical models of the stored program computer and precursors of all modern computers.

But I always felt that being able to program a computer, and being able to teach others how to program a Turing machine or a Register Machine, does not mean that you can give a satisfying answer to the question: how does a computer work?

From Louk Fleischhacker, my master in the Philosophy of Mathematics and Technology, I learned that a satisfying answer to the question how the computer works is hard to give without understanding mathematics, without understanding what it means to compute something. The computer would not be possible without a fundamental idea in metamathematics: that the language of arithmetic can itself be constructed as a mathematical structure, and that the arithmetical and logical operations can be formalized as operations on a formal language. This language becomes the interface, a programming language, to the mathematical machine.
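
To make this idea a bit more concrete, here is a minimal sketch in Python (my own illustration, not Fleischhacker’s formalism): expressions of a tiny language of arithmetic are themselves treated as mathematical objects, and computing is defined as an operation on those objects.

# A minimal sketch: arithmetic expressions as objects of a formal language,
# evaluation as an operation defined on that language.

def evaluate(expr):
    # An expression is either an integer, or a tuple (op, left, right)
    # with op one of '+' and '*'.
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    if op == '+':
        return evaluate(left) + evaluate(right)
    if op == '*':
        return evaluate(left) * evaluate(right)
    raise ValueError("unknown operator: " + op)

# The expression "2 + 3 * 4" as a structure in the formal language:
print(evaluate(('+', 2, ('*', 3, 4))))  # prints 14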

There are at least two types of answers to the question how a computer works.

There is the technical answer. Dennett gives a technical answer. He explains in a very clear way how the register machine works by showing and teaching his students how to program the register machine using a very simple programming language with only three types of instructions. Step by step he explains what the machine does with the instructions. After he has explained how the machine can be programmed to add two numbers, he asks his readers to be aware of the remarkable fact that the register machine can add two numbers without knowing what numbers are or what addition is. (I emphasize “without knowing” because it is a central idea in Dennett’s thinking: many creatures show intelligent behavior without knowing.)
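
Here is a sketch in Python of such a register machine, with the three instruction types Dennett uses: Increment, Decrement-or-Branch and End. The encoding is my own; the addition program simply empties one register into the other, one step at a time, without the machine knowing anything about numbers.

# A sketch of a three-instruction register machine (encoding my own):
#   ('inc', r, nxt)          add 1 to register r, go to step nxt
#   ('deb', r, nxt, branch)  if register r > 0: subtract 1 and go to nxt,
#                            otherwise branch to step branch
#   ('end',)                 stop

def run(program, registers):
    step = 1
    while program[step][0] != 'end':
        instr = program[step]
        if instr[0] == 'inc':
            _, r, nxt = instr
            registers[r] += 1
            step = nxt
        else:  # 'deb'
            _, r, nxt, branch = instr
            if registers[r] > 0:
                registers[r] -= 1
                step = nxt
            else:
                step = branch
    return registers

# ADD: move the contents of register 1 into register 2, one at a time.
add = {
    1: ('deb', 1, 2, 3),  # anything left in register 1? take one out...
    2: ('inc', 2, 1),     # ...put one into register 2, and start over
    3: ('end',),
}

print(run(add, {1: 2, 2: 3}))  # {1: 0, 2: 5}: the machine "added" 2 and 3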

Technical answers like this never satisfied me. They do not explain what exactly we mean by phrases like “what the machine does”.

As an answer to “how does a computer work?” I sometimes gave my students the following demonstration.

I hold a piece of paper in front of my mouth and I shout “Move!”. The moving of the paper I then explain by saying: “you see, the paper understands my command.” In a sense (Dennett would say it “sort of” understands!). In what sense? Well, the effect of uttering the word agrees with the meaning of the word: the paper moves as if it understands what I mean by uttering the word. This is an essential feature of the working of the computer. Note that the movement of the piece of paper is conditional on my uttering of the word. There is a one-to-one correspondence between the meaning of the word and the effect of uttering it.

The computer is a “language machine”. You instruct it by means of a language. The hardware is constructed so that the effect of feeding it the tokens satisfies the meaning that the tokens have. Therefore the programmer has to learn the language that the machine “sort of” understands. The program is the key; the machine is the lock that does the work when handled with the proper key.

What has this to do with mathematics? Well, what is typical of mathematics is that mathematical expressions have an exact and clear meaning: there is no vagueness. There is a one-to-one correspondence between the meaning of the word and the physical effect caused by uttering it: the effect represents the meaning.

A demonstration I gave people in answer to the question “how does a computer compute the sum of two numbers?” runs as follows. I demonstrate how a computer computes 2 plus 3. First I put 2 matches on an overhead projector. Then I put another 3 matches on a second projector. Then, one by one, I move the three matches from the second projector to the first projector. And look: the result can be read off from the first projector: five matches.

Explanation: the two and the three matches stand for the numbers 2 and 3 respectively: there is a clear, unambiguous relation between the tokens (the three matches) and their meaning, the mathematical object (the number 3). The moving of the 3 matches to the first projector stands for the addition operation: a repetition of adding one until there is no match left on the second projector. The equality of the 2 and the 3 as separate units (representing the numbers 2 and 3) on the one hand and the whole of 5 matches (representing the number 5) on the other hand is a mathematical equality. You might say that I execute a conditional branching instruction when doing the demonstration: if there is a match on the second projector, take a match and put it on the first projector; else stop and read off the result. But my execution is also conditional on the procedure that I follow. In the stored program computer there is no difference in status between the program parts, the statements, and the numbers. The difference between statements or operators and numbers or operands is only in the minds of the designer and the programmer.
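
Written out as a loop (my own rendering of the matches procedure), the demonstration looks like this:

# The matches demonstration as a loop: repeat "take one match from the
# second projector and put it on the first" until the second one is empty.

def add_by_moving_matches(first, second):
    while second > 0:   # the conditional step: is there a match left?
        second -= 1     # take a match from the second projector
        first += 1      # put it on the first projector
    return first        # read off the result on the first projector

print(add_by_moving_matches(2, 3))  # 5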

I think most people did not take my demonstration as a serious answer to the question how a computer works. But I believe it shows an essential feature of the computer. A feature that Dennett misses when he tries to explain the power of the computer.

According to Dennett the power of the register machine lies in the conditional branching instruction. This instruction tells the machine to check whether a certain register contains the number 0 and then take a next step based on the outcome of this check. What is so special about this instruction?

“As you can now see, Deb, Decrement-or-Branch, is the key to the power of the register machine. It is the only instruction that allows the computer to “notice” (sorta notice) anything in the world and use what it notices to guide its next step. And in fact, this conditional branching is the key to the power of all stored-program computers, (…)” (From: Intuition Pumps and Other Tools for Thinking. The same text, without the bracketed “sorta notice”, can be found in Dennett’s lecture notes “The Secrets of Computer Power Revealed”, Fall 2008.)

What Dennett misses, and what is quite essential, is that every instruction is a conditional instruction, not just the Deb instruction. The End instruction, for example, only does what it means when the machine is brought into a world state that makes the machine execute this instruction. Eventually this is the effect of our act of instructing the machine. When we instruct the computer by pressing a key or a series of keys, the computer “notices something in the world” and acts accordingly, for example by stopping when we press the stop button. This is precisely the feature I try to make clear with my first demonstration with the piece of paper. The setup of the demonstration (the piece of paper held in front of the mouth) is such that it “notices” the meaning of the word “move”. How do we know? Because of the way it responds to it. We see that the computer responds in correspondence with the meaning and goal of our command and we say that it “understands” what we mean. Every instruction is conditional in the sense that it is only executed when it is actually given. Indeed, the machine does not know what it means to execute a command. A falling stone doesn’t know Newton’s laws of mechanics. Does it? And yet, you might say that it computes the speed that it should have when it touches the ground. Sort of.
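
The “computation” the falling stone sort-of performs can be made explicit (a sketch of my own, ignoring air resistance): the impact speed follows from the height of the drop as v = √(2gh).

import math

def impact_speed(height_m, g=9.81):
    # Speed (m/s) of a stone dropped from rest: v = sqrt(2 * g * h).
    return math.sqrt(2 * g * height_m)

print(round(impact_speed(10.0), 1))  # about 14.0 m/s for a 10 m drop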

Yet, the conditional instruction is special in the sense that it is the explicit form of the conditional working of the machine. It presupposes the implicit conditional working of the instructions we give to the computer, just as the application of the formal rule of modus ponens presupposes the implicit use of this rule (see Lewis Carroll’s funny story “What the Tortoise Said to Achilles”). We call a logical circuit logical because the description of the relation between the values of its inputs and outputs equals that of the formal logical rule seen as a mathematical operator.
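
A small illustration of that last remark (my own example, not Dennett’s): an AND circuit built from two NAND gates is called “logical” only because its input/output table coincides with the truth table of the logical operator.

# A circuit is called "logical" because its input/output relation
# coincides with that of a logical operator seen as a mathematical function.

def nand_gate(a, b):
    # a NAND gate, described purely as an input/output relation
    return not (a and b)

def and_circuit(a, b):
    # AND built from two NAND gates, as in an actual circuit
    x = nand_gate(a, b)
    return nand_gate(x, x)

for a in (False, True):
    for b in (False, True):
        assert and_circuit(a, b) == (a and b)  # same table as the operator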

We distinguish a sentence from the act of someone expressing the sentence and meaning what it says. Somewhere in the history of mankind this distinction was made. Now we can talk about sentences as grammatical constructs, objects that somehow exist in abstraction from the person who utters them in a concrete situation. Now we talk about “truth values” of sentences; we study “How to do things with words”; words and sentences have become instruments. Similarly, we analyse “conversational behaviors” (such “tiny behaviors” as head nods and eye gazes) as abstract gestures. And we synthesize gestures in “social robots” as simulations of human conversational behavior. Many people think that we can construct meaningful things and events from meaningless building blocks if the constructs we build are complex enough. Complexity is indeed the only measure that remains for people who have a structural world view, a view that structure is basically all there is. (In Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, Max Tegmark posits that reality, including life, is a mathematical structure.)

Many people, including Dennett, think about the computer as something that is what it is in abstraction from the human mind, from the user and the designer. As if the machine is what it is without the human mind for which it is what it is and does what it does. However, the real power of the computer is in the mind of the human who organises nature in such a way that it can be used as a representation of meaningful processes.

The Turing test does not test how intelligent a machine is. It tests whether the human mind is able to construct a machine that is able to make other minds believe that it is intelligent. This has consequences for the question of who is ultimately responsible for what machines do. It has consequences for what we mean when we talk about “autonomous machines” or “artificial intelligence”.

Dennett sees the machine and the human mind as distinct realities that can exist separately. For Dennett there is no fundamental difference between the computer that “sort of” understands and the human mind that “really” understands. The difference between the two is only gradual: they are different stages in an evolutionary process.

Can robots become conscious? Dennett answers this question with a clear yes. In a conversation with David Chalmers about the question whether superintelligence is possible, Dennett posits:

“(…) yes, I think that conscious AI is possible because, after all, what are we?
We’re conscious. We’re robots made of robots made of robots.
We’re actual. In principle, you could make us out of other materials.
Some of your best friends in the future could be robots.
Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons.
One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful.” (For the whole conversation (recorded 04-10-2019): https://www.edge.org/conversation/david_chalmers-daniel_c_dennett-is-superintelligence-impossible)

Can machines think? Dennett would answer this question with a clear yes, too. After all, we people are machines, aren’t we? But he doesn’t consider this question really important. The real challenge of artificial intelligence does not lie in this type of “philosophical” question.

According to Dennett the real challenge of AI is not a conceptual but a practical one.

“The issue of whether or not Watson can be properly said to think (or be conscious) is beside the point. If Watson turns out to be better than human experts at generating diagnoses from available data it will be morally obligatory to avail ourselves of its results. A doctor who defies it will be asking for a malpractice suit.”

“The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.” (D.C. Dennett in: The Singularity—an Urban Legend?, 2015)

I could not agree more with Dennett on this. As soon as machines are considered autonomous authorities, they stop being seen as useful technical instruments. They are then considered gods, magical masters. For me this is a consequence of the fact that machines are what they are only in relation to the human mind for which they are machines.

A.M. Turing, D.C. Dennett and many other intelligent minds are products of evolution. Machines are products of evolution as well. But there is a fundamental difference between natural intelligence as we recognize it in nature, a product of natural Darwinian evolution, and artificially intelligent machines that are invented by human intelligence.

As soon as we forget it, for whatever reason or by whatever cause, this important difference will disappear.

D.C. Dennett, Intuition Pumps and Other Tools for Thinking, W.W. Norton, 2013. Dutch translation: Gereedschapskist voor het denken, Atlas Contact, Amsterdam/Antwerpen, 2013.

L.E. Fleischhacker, Beyond Structure: The Power and Limitations of Mathematical Thought in Common Sense, Science and Philosophy. European University Studies 20(449), Peter Lang, Frankfurt am Main, 1995.

Rieks op den Akker was a researcher and lecturer in artificial intelligence, mathematics and computer science at the University of Twente. He is retired.
