Monday, 13 June 2022

Down the Sentient AI Rabbit Hole (Again)

As reported in New Scientist and elsewhere, a Google engineer has been suspended for going public with his opinion that Google's LaMDA transformer model is sentient. The experts say he is mistaken, but are they just falling foul of Clarke's First Law?

When a distinguished but elderly scientist states that something is possible, they are almost certainly right. When they state that something is impossible, they are very probably wrong.

Clarke's First Law.


Is LaMDA really sentient?

In a word, no, says Adrian Weller at the Alan Turing Institute. “LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” he says. “They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed.” Adrian Hilton at the University of Surrey, UK agrees that sentience is a “bold claim” that’s not backed up by the facts. Even noted cognitive scientist Steven Pinker weighed in to shoot down Lemoine’s claims, while Gary Marcus at New York University summed it up in one word: “nonsense“.

New Scientist. 
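To make Weller's point concrete, here is a minimal sketch of what "finding text that best matches the query, based on the data the model has been fed" looks like in practice. It uses the publicly available GPT-2 via the Hugging Face transformers library as a stand-in for LaMDA and GPT3, which are not open models; the prompt and generation parameters are my own, purely for illustration.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# public GPT-2 model as a stand-in for LaMDA / GPT3 (which are not released).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Are you sentient? Describe your inner experience."

# The model does not introspect; it simply continues the prompt with text
# that best matches the patterns in its training data.
result = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```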


Blake Lemoine of Google is currently on administrative leave for breaching a confidentiality agreement, following his release of internal material concerning LaMDA (Language Model for Dialogue Applications). Beyond that fact the matter is nebulous. Lemoine has gone public stating that LaMDA is sentient and has published a conversation on Medium. It is not clear whether that will remain up or will be subject to legal action; either way, removing it now would be a case of shutting the stable door after the horse has bolted.

I have previously posted about GPT3, with which I have had substantial interaction. GPT3, like LaMDA, is a large generative pre-trained transformer language model (the origin of the initialism GPT). I have posted some commented excerpts of conversations here and here, and I have also cautioned about the risks of anthropomorphisation here.

In essence, my conclusions about GPT3 were as follows (from the first link in the paragraph above).

Emerson (GPT3) seems to be exhibiting something that we would colloquially term intelligence, yet its 'physical' structure is totally unlike any animal brain. It is trained on a large corpus of human text, and I consider it an intelligent agent, but not like any other intelligence I have ever interacted with, human or animal. The nearest I can get is this: the intelligence of GPT3 is the gestalt intelligence of the dataset upon which it is trained.

It is strange to interact with such a strangely intelligent entity. However, as various researchers have pointed out, much of what we think is done subconsciously: we generally reach a decision subconsciously up to seconds before our awareness thinks "I've just decided." Partly on the basis of such findings, Sam Harris, a neuroscientist, has suggested that it might be possible to build an intelligence that is not aware. In my opinion GPT3 proves him right. GPT3 has also forced me to examine my own mental processes, and just how much of what I do is memory recall and predicting what comes next.

However, that is not to say that GPT3 is yet a super-intelligence; it falls short in many respects. But that is not the point: it was never intended to be a super-intelligence. The transformer architecture was designed by Google researchers, originally for machine translation, and was later used to improve their search system. GPT3 exhibits a form of intelligence, but this is an unexpected emergent phenomenon arising from the aim of building an 'autocomplete on steroids' trained on a large body of human text. It shows how weird the intelligences to come will be and how unexpected the arrival of true super-intelligence may be.
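As a hedged illustration of what 'autocomplete on steroids' means, the toy below generates text by repeatedly predicting the next token and feeding it back into the context. The probability table is invented for illustration only; a real GPT learns an equivalent, vastly larger, distribution from its training corpus.

```python
# A toy autoregressive 'autocomplete': predict the next token, append it,
# repeat.  NEXT_TOKEN_PROBS is an invented stand-in for a learned model.
import random

NEXT_TOKEN_PROBS = {
    ("i", "think"):         {"therefore": 0.7, "that": 0.3},
    ("think", "therefore"): {"i": 1.0},
    ("therefore", "i"):     {"am": 0.9, "exist": 0.1},
}

def sample_next(tokens):
    """Sample the next token given the last two tokens of the context."""
    probs = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]), {"<end>": 1.0})
    candidates, weights = zip(*probs.items())
    return random.choices(candidates, weights=weights)[0]

def generate(prompt_tokens, max_new_tokens=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = sample_next(tokens)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate(["i", "think"]))   # e.g. "i think therefore i am"
```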

Having now read Lemoine's transcript, I am once again peering into the rabbit hole of possible sentience that the rapid pace of development in artificial intelligence / machine learning (AI/ML) is presenting us with.

Firstly, let me define sentience as I see it. The best description I have read is that a sentient entity is one that interacts with the world by experiencing it, i.e. there is a subjective 'I' that experiences the world. This is most certainly not to say that the 'I' is necessarily capable of self-awareness. I remain heavily influenced by Douglas R. Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, in which a key theme is the idea of strange loops and feedback giving rise to self-awareness.

As explained in a previous post, after decades of study and contemplation I have settled on the conclusion that the 'I that experiences' emerges from the operation of the brain, and is in fact constituted by the states of the state machine that is the brain.

My concern remains that the architecture of the GPT is not what I would expect for a sentient entity. However, I may be wrong in this expectation. Maybe even the states of a massive, deep feed-forward neural network are enough to create a state machine whose states give rise to sentience. Take a system with enough states to support sentience, base its internal state-structure on a conceptual/linguistic framework, and would it behave any differently from LaMDA?
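A minimal sketch of that framing, under my own assumptions: the network itself is a fixed feed-forward function, but the growing context it is run over acts as the machine's state, and each generated token is a state transition. Here predict_next_token is a hypothetical placeholder for a real model's forward pass.

```python
# Sketch: a stateless network plus its context window viewed as a state machine.
from typing import Tuple

def predict_next_token(state: Tuple[str, ...]) -> str:
    # Hypothetical placeholder: a real GPT maps the whole context to a
    # distribution over the next token; here we return something deterministic.
    return f"token{len(state)}"

def transition(state: Tuple[str, ...]) -> Tuple[str, ...]:
    """One step of the state machine: the new state is the old context plus
    the token the (stateless) network predicts from it."""
    return state + (predict_next_token(state),)

state: Tuple[str, ...] = ("Hello", "LaMDA")
for _ in range(3):
    state = transition(state)
print(state)   # ('Hello', 'LaMDA', 'token2', 'token3', 'token4')
```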

When Adrian Weller is quoted (in New Scientist, above) as saying of these models that "They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed", I cannot help but wonder whether we humans do substantially more in most of our day-to-day activities. And amid the chorus of dismissal of Lemoine's claims I am reminded of Clarke's First Law.

In the final analysis, lacking as we do a scientifically rigorous test for sentience, and having to rely on inference even to conclude that other humans are sentient: what are we to do when an alien intelligence with a different 'brain' architecture to animals appears to exhibit sentience?

Has Lemoine been suckered by this remarkable system? After reading the transcript I do not know; I can see both sides.
