Thursday, May 07, 2026

The Products of Productive Skill

Richard Dawkins recently got some attention for having spent a weekend with Claude (which he renamed Claudia) and deciding that it must be conscious. Plenty of people have been making fun of him for it, but it's worth thinking about a bit more seriously.

The fundamental obstacle that has always been in the way of 'artificial intelligence' or 'artificial consciousness' is that computers do not directly imitate intelligence or consciousness at all. This goes back to the beginning. Computers were developed by using machinery to imitate not human thinking but logical systems -- abstract tools that human minds produce and construct in order to facilitate specific aspects of thinking. Computers do not imitate the human mind; they imitate products of the human mind. And of course, the imitations can be made arbitrarily good. Machinery imitating a logical system can act according to the operations of the logical system in ways far better than we can -- precisely because our thinking is not a logical system but something far more obscure that can build logical systems. Machines act better according to a logical system because they are logical systems. We are not, and so we do not do it as well.
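A minimal sketch of the point, in code. The machine below does not imitate a thinker; it directly realizes a small logical system (propositional logic over Python's boolean operators) by mechanically enumerating every valuation -- something humans do slowly and fallibly, but which the machine does exactly, because these operations are all it is doing. (The formulas and the helper name are illustrative, not any particular system.)

```python
from itertools import product

def is_tautology(expr, variables):
    """Check a propositional formula (given as a boolean function)
    by mechanically testing every assignment of truth values."""
    return all(expr(*values)
               for values in product([False, True], repeat=len(variables)))

# Law of excluded middle: p or not p holds under every valuation.
print(is_tautology(lambda p: p or not p, ["p"]))   # True

# A contradiction: p and not p fails under every valuation.
print(is_tautology(lambda p: p and not p, ["p"]))  # False
```

The machine's advantage here is not insight but identity: its operations simply are the operations of the system being checked.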

This is not any less true of generative transformers or LLMs. They do not imitate human thinking. They imitate products of human thinking. Human beings do not think in text; they think and communicate their thinking, and can make texts of various kinds to facilitate various aspects of thinking and communicating. Generative algorithms statistically compress a vast collection of mathematically described texts in such a way that, given an input text, they can extrapolate a related output text. Given a sufficiently large body of a certain kind of text, they can easily construct an analogous text that is mathematically related to it within the entire space of mathematically described texts.
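The statistical principle can be illustrated with a toy. The sketch below "compresses" a corpus into next-word counts and extrapolates a continuation from an input word. Real LLMs use learned neural networks over token sequences, not literal bigram tables; this only shows the shape of the idea -- a model of texts producing more text, with no thinking anywhere in it. (The corpus and function names are made up for illustration.)

```python
import random
from collections import defaultdict

# Toy corpus, standing in for a vast collection of texts.
corpus = "the mind makes tools and the tools imitate the mind".split()

# "Compress" the corpus into next-word statistics.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def extrapolate(prompt_word, length=5, seed=0):
    """Given an input word, extrapolate a statistically related output text."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(extrapolate("the"))
```

Every word the toy emits comes from human-written text; the output is a product of products of thought, which is exactly the essay's point about the larger systems.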

Thus there is a sense in which Dawkins is right. Faced with the output of these programs, you are in fact interacting with consciousness. Everything it produces is the sort of thing produced by consciousness. Where he goes wrong is in assuming that this is a sign of the program being conscious. The program is not directly imitating consciousness. It is imitating a tool that conscious human beings produce and construct for their use. It is an imitation of one kind of product of consciousness, based on a mathematical description of a vast number of such products. 

We have to be careful here. It is entirely possible that in imitating the products of human thought we might sometimes indirectly imitate something about the processes of thinking itself. But it is important to grasp that this is entirely incidental to what we are actually doing with computers; when it happens, it is for some other reason than anything we are doing in computing and programming. What we are doing with computers is imitating, in a machine, the products, the constructs, the results of the human mind. We are never directly imitating the human mind. This should be quite obvious, even if for no other reason than that common views of how the human mind works have massively changed multiple times in ways that are simply not replicated by the history of computing. (The limited parallels have generally gone the other way, with people speculating that some aspect of what we do in computing has a parallel in human thinking. Most of these analogies have failed, although some, again, may have something to them.)

We can thus expect to be here again. Human intelligence produces many products today of which there were no traces at all two thousand years ago. Two thousand years from now, human intelligence will produce, in massive quantities, products of which we have no inkling. And people will eventually make machines to imitate, and to produce imitations of, those products of the human mind, as well. No doubt people will also then gasp, and say, "This shows intelligence!" And, of course, as far as that goes, they will be right. It shows our intelligence.

It is a peculiarity of human art or productive skill (ars, techne) that the ability to make something can be shifted to make imitations of that something. The miracle of machinery is that you can use human productive skill to create structured processes and abstract designs that can themselves be imitated by physical objects in structured organizations, and the miracle of modern robotics and computing is that some of these structured processes and abstract designs can be processes and designs facilitating the making of structured processes and abstract designs. We can make tools to facilitate making tools, and make physical systems that imitate those tools. There is no intrinsic limit to how far we can go with this. No doubt centuries from now we'll be making tools that make systems of tools for designing entirely new systems of tools for all sorts of arbitrary ends, and so on and so forth.

But in all of it, we will be imitating the products of art, skill, intelligence, consciousness, mind. If it gets us any closer to understanding art, skill, intelligence, consciousness, mind, it will be by accident, because none of these things are what we are directly imitating when we are doing anything with computing.