It’s Impossible for Machines To Think Like Humans

Image by Gerd Altmann from Pixabay

There’s a lot of hysteria around Generative AI (GAI) tools like ChatGPT, beyond the usual hype cycle that accompanies most new technologies. There was even the case last year of the now-former Google engineer who was convinced that an AI was, well, sentient. It isn’t. In human terms, this is absolutely impossible.

This doesn’t mean AI is terrible or that it can’t do amazing things to help us. It can. In fact, AI may be just the right technology humanity needs to survive our next phase of evolution.

But there is no way whatsoever that AI can be, in any way, shape or form, human. ChatGPT and similar tools are, in fact, the worst possible examples. It’s important that we place AI in the right context; that way we can govern it better and make more informed regulatory and legal decisions.

There are two fundamental words to consider: consciousness and intelligence. Currently, there is simply no agreed definition of what consciousness is. We literally have no idea. None at all. Just some good theories. As for intelligence, its definition too is hotly debated and entirely inconclusive.

There’s a third word too, one that gets very little attention: culture. In fact, we might begin there. Culture is the very code that humans have used for hundreds of thousands of years to survive.

Culture encompasses our social structures, political systems, arts, literature, economics, traditions, norms, behaviours and more. Culture is incredibly complex and varies around the world; each society has its own variations. It is what makes us unique as a species.

While culture is a code, it’s built on critical thinking, not problem solving. All AI tools are focused on problem solving, not critical thinking. It’s arguable that AI may not, in fact, be capable of critical thinking at all.

Just as we can’t teach an AI system what consciousness or intelligence is, nor can we teach it culture. There are rules within various cultures that could be learned, yes. But not all aspects of culture are rules-based. This includes play, which is a very important part of culture.

Games themselves have rules; chess and Go are examples. AI has beaten the best human players at both. That’s not as special as we may think. Those games are, at bottom, nothing more than sets of rules, and the more compute power you can apply, the greater the chances of winning. It was only logical that AI tools such as machine learning systems would eventually win against a human. But that isn’t play.
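
To make that concrete, here is a minimal sketch of the idea, purely illustrative and not how any particular chess or Go engine actually works: a rules-based game reduces to a handful of functions (legal moves, results, winner), and “beating” it is just exhaustive search over those rules. Noughts and crosses is used below because its full game tree is tiny; chess and Go differ in scale, not in kind.

```python
# A minimal sketch (illustrative only): a rules-based game reduced to
# pure search. Noughts and crosses is used because its full game tree
# is tiny; chess and Go differ in scale, not in kind.

def legal_moves(board):
    """The rules: which of the nine squares may still be played."""
    return [i for i, square in enumerate(board) if square == " "]

def winner(board):
    """The rules: has either player completed a line of three?"""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustive search over the rules: score every reachable
    position. No understanding of 'play' is involved, only compute."""
    win = winner(board)
    if win == "X":
        return 1, None
    if win == "O":
        return -1, None
    moves = legal_moves(board)
    if not moves:
        return 0, None  # board full: a draw
    best_score, best_move = None, None
    for move in moves:
        child = board[:move] + player + board[move + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        better = (best_score is None
                  or (player == "X" and score > best_score)
                  or (player == "O" and score < best_score))
        if better:
            best_score, best_move = score, move
    return best_score, best_move

# Searching the entire tree guarantees the machine never loses,
# yet at no point does it know it is playing a game.
score, move = minimax(" " * 9, "X")
print(f"Best opening square: {move}, guaranteed outcome: {score}")  # 0 means a draw
```

Run as-is, the search confirms that perfect play ends in a draw. Nothing in it knows it is playing; more compute, or cleverer pruning, is the entire recipe by which machines came to dominate chess and Go.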

We use play in many different ways. Sometimes we use it to avoid greater degrees of violence. Take, for example, the Surma/Suri tribes in Ethiopia and the game of Donga. They use very long sticks (sagenai) to thump each other in a rather nasty, violent way. But it is also a form of play that serves several purposes: it can help settle disputes and thus avoid inter-tribal warfare, it displays masculine strength and pain tolerance as cultural markers, and it can help men find wives.

Similar types of games are played in societies around the world. An AI system might learn the rules of these games and could, in effect, simulate them, but that is all. It cannot “play” these games. Nor can AI create new games that address a complex set of societal needs. That would require a great deal of implied and tacit knowledge: why those games came about, the environments and survival systems tied to the culture, and much more. So many elements of culture simply cannot be broken down into rules and formulas.

This doesn’t mean AI isn’t dangerous. It is. ChatGPT in Bing, and even Google’s Bard, have already made up facts that are patently false. That proves the danger.

All that Generative AI tools do is summarise what we already know. It’s of little consequence that ChatGPT can write and pass a university exam or a bar exam. Of course it can; the answers were in its training data. ChatGPT didn’t do anything new. Now, if a GAI tool produced a whole new theory of relativity, or a thesis that stumped an academic board, that might be a bit more interesting. It has not. Will it? Perhaps.

But at the end of the day, machines can never think like humans. They never were, and never will be, human. Like genetics, culture is handed down over generations. Machines don’t have that evolutionary history; there’s nothing organic to them. This isn’t to say that machines could never evolve some form of sentience. They may. It would likely be very alien to us, perhaps even illogical. But we know all these arguments. Will we pay attention to them? The current evidence is too light to say that we will. For that, we should indeed be very concerned.
