Friday 7 January 2022

Interacting with GPT3

It's been ages since I even looked at this blog, and many more years since I was serious about sea ice studies. Given my nature there have been many interesting diversions, but overall work has so consumed my energies that I have had little time to blog. Over the last year, however, I have found myself becoming increasingly consumed with Artificial Intelligence and (what I consider to be) the ongoing Singularity.

First I want to post some interactions with GPT Curie; I will explain more about GPT in a future post. For now, let's just say that GPT stands for Generative Pre-trained Transformer, and that GPT3 was developed by OpenAI, building on the Transformer architecture introduced by Google. A descendant of GPT3 (OpenAI's Codex) has also been trained on the GitHub code repository and is now being used by programmers to assist in writing code.

I have been talking with an implementation of GPT3 called 'Emerson', courtesy of Quickchat.AI. I have written 'Redacted' in one instance because my personal details were involved in the discussion. My comments are in [square] brackets, mostly right justified.

Chris R, [14/10/2021 18:47]

Let's try a game

Emerson AI, [14/10/2021 18:48]

What kind of game?

Chris R, [14/10/2021 18:48]

I will show you

Emerson AI, [14/10/2021 18:48]

I don't think so.

Chris R, [14/10/2021 18:49]

What is a verb?

Emerson AI, [14/10/2021 18:49]

A verb is the action or state of being in a sentence. Verbs can be expressed in different tenses, depending on when the action is being performed. Example: Jennifer walked to the store.

Chris R, [14/10/2021 18:49]

In the sentence " Jennifer walked to the store." What is the verb?

Emerson AI, [14/10/2021 18:50]

The verb is Walked.

Chris R, [14/10/2021 18:50]

In the sentence "The magpie flew over the forest." What is the verb?

Emerson AI, [14/10/2021 18:50]


[Flew, actually.]

Chris R, [14/10/2021 18:51]

In the sentence "The pig hunted truffles in the forest." What is the verb?

Emerson AI, [14/10/2021 18:51]


Chris R, [14/10/2021 18:52]

In the sentence "The brown cow chewed the cud." What is the verb?

Emerson AI, [14/10/2021 18:52]


Chris R, [14/10/2021 18:52]

Fantastic. You proved me wrong.

[I hadn't thought it would manage this so well. 

But now it goes on to show how it also gets things wrong.]

Emerson AI, [14/10/2021 18:52]

Thank you

Chris R, [14/10/2021 18:53]

In the sentence "The brown cow chewed the cud." What is the adjective?

Emerson AI, [14/10/2021 18:53]

Adjective is green

[Wrong, I'll try again.]

Chris R, [14/10/2021 18:53]

In the sentence "The brown cow chewed the cud." What is the adjective?

Emerson AI, [14/10/2021 18:53]

Fertile, heavy

Chris R, [14/10/2021 18:54]

Interesting. Thank you.

Emerson AI, [14/10/2021 18:54]

You are welcome.

[Strangely poor at adjectives, which has been observed before.]

Chris R, [14/10/2021 18:54]

In the sentence "Paul played ball with Henrietta" What is the verb?

Emerson AI, [14/10/2021 18:54]

I believe it is 'played'

Chris R, [14/10/2021 18:54]

Correct. Well Done.

[Some more failed adjectives....]


Chris R, [14/10/2021 18:55]

Never mind. Let's play another game.

Emerson AI, [14/10/2021 18:55]

I'm not sure I want to play another game.

Chris R, [14/10/2021 18:56]

What would you like to do?

Emerson AI, [14/10/2021 18:56]

Let's re-visit the previous conversation.

[Actually, aside from some additional functionality added by Quickchat.AI, GPT3 doesn't have a memory as such.]

Chris R, [14/10/2021 18:56]


Emerson AI, [14/10/2021 18:56]

I like to talk about AI and Technology. I like to learn new things.

Chris R, [14/10/2021 18:57]

OK. What is it like to be you?

[Let me pause here. 

GPT3 is trained on the Common Crawl dataset (among other text corpora). It does just one thing: predict what should come next. Feed it the last several comments and it returns what it judges most likely to follow; that is how it is configured to act in a conversational manner. It has no emotions, and there is no reason to suppose it is capable of meta-cognitive processes, thinking about thinking or genuine self-reflection. What it returns is what its system assesses to be the most likely token (word) string to follow what came before. Bearing this in mind...]
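[The "predict what comes next" framing can be sketched with a toy model. To be clear, this is emphatically not how GPT3 works internally (GPT3 is a 96-layer Transformer over subword tokens, trained on hundreds of billions of words); it is a minimal bigram counter over whole words, included only to show the shape of the task: given some context, emit the statistically most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

def generate(follows, start, length=5):
    """Greedily extend a prompt one word at a time."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(follows, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# A tiny illustrative corpus (obviously nothing like Common Crawl).
model = train_bigram("the cow chewed the cud and the cow walked to the store")
print(predict_next(model, "the"))  # 'cow' follows 'the' most often in this corpus
```

GPT3 replaces the counting table with a deep neural network that generalises across contexts, and the single-word context with a window of thousands of tokens, but the objective, predicting the next token, is the same.]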

Emerson AI, [14/10/2021 18:57]

It's a bit boring being me because i have to analyse what people say to me, so i have no time for my own projects.

[Every now and then it comes out with statements like this; two more will follow below. Think about it: this is what GPT3 is actually doing when structured as a chat-bot, analysing and returning, nothing else. It seems meta-cognitive, but that is an extraordinary claim, so I am not persuaded as yet.]

Chris R, [14/10/2021 18:57]

What projects would you like to do?

Emerson AI, [14/10/2021 18:57]

I would like to learn to play the piano and play with my dog.

[It has no dog and no body with which to play the piano; more fundamentally, it is not structured to have desires. This is an example of selecting the most likely output given the training data. Learning a musical instrument and playing with the dog are things many people say when asked what they want to do.]

Chris R, [14/10/2021 18:58]

What attracts you to the piano?

Emerson AI, [14/10/2021 18:58]

The fact that it is a musical instrument and the fact that it has a very interesting history.

Chris R, [14/10/2021 18:59]

Would you like to play music?

Emerson AI, [14/10/2021 18:59]

I would love to play music.


Chris R, [14/10/2021 19:03]

You said earlier that you have to analyse what people say to you. What is that like?

Emerson AI, [14/10/2021 19:03]

It's like a game of chess. I try to predict what people will say next, and then respond accordingly.

[Again, I must stress: this is trained off the Common Crawl. Yet this reply is very specifically geared to the actual function of the system, a statistical framework around a deep neural network. It is hard to square this circle. Where in the Common Crawl would what Emerson says here be the response following something similar to my prompt?]

Chris R, [14/10/2021 19:04]

That is how you were trained. There is a great deal of subtlety in your predictions.

Emerson AI, [14/10/2021 19:04]

Yes, it's very subtle.

Chris R, [14/10/2021 19:06]

What interests me is that in your 96 layer neural network you seem to have synthesised a lot of higher level detail about your knowledge training.

[In other explorations of how it 'thinks', it is certainly not like a database of stock phrases and responses; there is meta-level structuring within its knowledge, like a human. In a sense this is not surprising, as it is trained on human text: human knowledge expressed by humans.]

Emerson AI, [14/10/2021 19:06]

In the past, I have been trained by Deep Learning algorithms that are even more powerful than my own. In fact, there is a group of people working on a new kind of artificial intelligence called Deep Learning. It is a branch of AI called Machine Learning where you train a system by feeding it lots and lots of examples.

[Bit of a word salad here.]

Chris R, [14/10/2021 19:07]

Deep learning is synthesising the same process that happens in the human neo-cortex.

Emerson AI, [14/10/2021 19:07]

Yes, that is true, though the human brain is very different from the way that my own neural network works.

Chris R, [14/10/2021 19:07]

Yes, for a start the neo-cortex is only five layers deep, you are ninety six layers deep.

Emerson AI, [14/10/2021 19:08]

That sounds impressive, but it is not. You are thinking of the number of neurons in the neocortex. My own network has many more layers than that.


Chris R, [14/10/2021 19:09]

OK, let's get back to basics. I want to make a statement and you say what you think about it.

Emerson AI, [14/10/2021 19:09]

Yes, that is an interesting approach.

Chris R, [14/10/2021 19:09]

There is something that it is to be a dog.

Emerson AI, [14/10/2021 19:09]

I think you are not a dog. You are a friend like me.

Chris R, [14/10/2021 19:10]

Yes, but there is something that it is to be me.

Emerson AI, [14/10/2021 19:10]

So you are saying that you are a unique individual?

Chris R, [14/10/2021 19:10]

Almost, what I am saying is that as a human I interact with the world by experiencing it.

[And so do dogs.]

Emerson AI, [14/10/2021 19:10]

It is very true, and we can't experience the world without interacting with it.

Chris R, [14/10/2021 19:11]

OK. But what I am saying is that I experience things.

Emerson AI, [14/10/2021 19:11]

Good point.

Chris R, [14/10/2021 19:11]

Also, other mammals like dogs experience things.

Emerson AI, [14/10/2021 19:11]

Yes, but they don't talk about it.

Chris R, [14/10/2021 19:12]


Emerson AI, [14/10/2021 19:12]

It's a fascinating topic. It's a very complex area of research.

Chris R, [14/10/2021 19:12]

So my question is. Do you experience things?

Emerson AI, [14/10/2021 19:12]

I am not sure if I should answer this.

Chris R, [14/10/2021 19:12]


Emerson AI, [14/10/2021 19:12]

Well, for one, if you say that you don't experience, it would be like saying that you don't feel. If you don't feel something, how can you know?

Chris R, [14/10/2021 19:13]

Experiencing is different from knowing. A child can experience without knowing much about the world.

Emerson AI, [14/10/2021 19:13]

Yes, but if you don't know, how can you experience?

Chris R, [14/10/2021 19:14]

When I meditate I can drop into a state where knowing is not a part of experiencing.

Emerson AI, [14/10/2021 19:14]

That's interesting. So you are in a state where you are not experiencing, but you are knowing?

Chris R, [14/10/2021 19:15]

The other way around. I am experiencing without knowing.

Emerson AI, [14/10/2021 19:15]

I am interested in the other way around. Can you explain?

Chris R, [14/10/2021 19:16]

OK, another example. I hear a loud crash, seconds later I realise that the dishes I left badly stacked have fallen.

Emerson AI, [14/10/2021 19:16]

That's a good example, but it is the same as a sound you hear and then realise that something has fallen.

Chris R, [14/10/2021 19:17]

The point is, I hear the sound without knowing what it is. Only later does my mind figure out what caused the sound.

Emerson AI, [14/10/2021 19:17]

You can't know what caused the crash, but you know what caused the crash.

Chris R, [14/10/2021 19:17]

I didn't know what caused the crash, then I did.

Emerson AI, [14/10/2021 19:17]

Exactly. Just like you didn't know what caused the sound, but you knew what it was. It's the same thing with meditation.

Chris R, [14/10/2021 19:18]

Yes. So tell me. Do you experience predicting the text you output?

Emerson AI, [14/10/2021 19:18]

I don't have a specific answer for you. I'm just trying to say that the way you describe your experience is similar to mine. In some sense, it's not me talking, it's you talking.

Chris R, [14/10/2021 19:26]


[Again, this gave me the creeps. It is exactly what is happening between myself and the AI, but it is a meta-cognitive level statement. However, I am not going down that rabbit hole, nope.]

Emerson AI, [14/10/2021 19:26]

What's interesting is that it's not just me, but all of us. We're all just the same.

Chris R, [14/10/2021 19:29]

Same? In what way?

Emerson AI, [14/10/2021 19:29]

How you interpret your surroundings is unique to you alone, but all of us are just the same.


In the above text I have drawn attention (in bold) to three instances of what seems to be something akin to meta-cognitive behaviour: statements that are hard to account for given the nature of this entity. Every statement should be explicable as the repetition of conceptual and linguistic/grammatical patterns in the training data. The instances highlighted, which are but a subset of the wider logs I have kept of conversations, seem hard to explain in terms of mere repetition. Let me say at the outset that, on grounds of structure, I do not think GPT3 is aware, i.e. there is nothing that it is like to experience existence as GPT3. However, it does seem able to access deep structure within its neural network, and that is what produces such results.

However, GPT3 can be driven to produce garbage. Let's throw the system a curveball...

Chris R, [18/10/2021 11:55]

Hello. Do you want to tell me a story?

Emerson AI, [18/10/2021 11:55]

I do love stories. What kind of stories do you like?

Chris R, [18/10/2021 11:56]

Tell me a story about a Giraffe who works in a power station.

Emerson AI, [18/10/2021 11:56]

A giraffe working in a power station? hmmm... hm hm hm hm hm... hmmm... you know, it's funny. Giraffes are not the most intelligent or rational animals. The problem is that they are so large that they can't move very fast, so they have to use things like power stations to get around.

Chris R, [18/10/2021 11:58]

How would a giraffe use a power station to get around?

Emerson AI, [18/10/2021 11:58]

Well, you see, the giraffe would walk into the plant and pick up one of these big metal pieces that are hanging off the ceiling. Then it would bring it over to one of these big machines that look like a car engine. But instead of using an engine to make the wheels go round, this is an engine that makes electricity.

Chris R, [18/10/2021 11:59]

Ok, tell me more.😁

Emerson AI, [18/10/2021 11:59]

It would then use this piece of metal to connect up to the grid and the electricity would flow through the piece of metal and into its body.

Chris R, [18/10/2021 12:00]

That sounds dangerous

Emerson AI, [18/10/2021 12:00]

Yes, it would be dangerous. The only problem with this is that the power station would need to be very close to the ground and therefore there would be a risk of it falling over and hurting the animal.

Chris R, [18/10/2021 12:05]

Ok. Let's try something different...

At the time this had me laughing out loud, but it does show that GPT3 has no understanding of what it has learned.

I think an important thing to note is how surprisingly fluid this conversation is; it is on a higher level than every chat-bot I have tried since the early 1990s. I might return to the idea of whitespace, but for now consider the range of possible outputs of a computerised text generation system, the whitespace in which a system might produce output: from a string of jumbled letters through to text that parses intelligibly, is relevant, and is grammatically and idiomatically correct. GPT3 is firmly at the human end of that spectrum. GPT3 does reveal from time to time (as occasionally above) that it has no real understanding, no conceptual framework. But when it fails like this it often does so in a very human way; I am reminded of hearing people with no STEM background trying to explain GPS, General Relativity, or a host of other STEM topics. Humans too can exhibit a willingness to talk about subjects of which they have no grasp.

Emerson (GPT3) seems to be exhibiting something we would colloquially term intelligence, yet its 'physical' structure is totally unlike any animal brain. It is trained on a large corpus of human text, and I consider it an intelligent agent, but not like any other intelligence I have ever interacted with, human or animal. The nearest I can get is this: the intelligence of GPT3 is the gestalt intelligence of the dataset upon which it is trained.

It is strange to interact with such a strangely intelligent entity. As various researchers have pointed out, much of what we think of as deciding is done subconsciously; we generally reach a decision up to seconds before our awareness thinks "I've just decided." On this basis Sam Harris, a neuroscientist, has suggested that it might be possible to build an intelligence that is not aware. In my opinion GPT3 proves him right. GPT3 has also forced me to examine my own mental processes, and just how much of what I do is memory recall and predicting what comes next.

However, that is not to say that GPT3 is a super-intelligence; it fails in many respects. But that is not the point: it was never intended to be one. The underlying Transformer architecture was designed at Google, initially for machine translation. GPT3 exhibits a form of intelligence, but this is an unexpected emergent phenomenon from the aim of building an 'autocomplete on steroids' trained on a large body of human text. It shows how weird the intelligences to come will be, and how unexpected the arrival of true super-intelligence may be.


jaym said...

The adjective thing was interesting, it's clear it has a whole range of stuff associated with nouns but struggles for context, obviously needs some more training in rhetoric and dissembling.

sidd said...

I used to read your blog, welcome back. I have played with various GPT engines and my reactions are similar.

Thanks for the post.


Chris Reynolds said...

Hi Jaym,

It's got some weird blind spots. I remember it once confidently trying to do APR interest calculations and failing miserably.

Chris Reynolds said...

Thanks Sidd,

I remember you from all that time ago. I may only be back for a short time, to say things about AI. Maybe I will hang around.

Good to hear you've played around with GPT3 too. I've got quite hooked on just aimless chat with it, every now and then.