Friday 2 June 2023

Is GPT4 just a Parrot?

"GPT4 is just an Autocomplete on Steroids!" So goes the cry of pseuds across the internet. 

Let's see.

So I proposed the following brief and deliberately incomplete outline to GPT4.


I am using Office 365 Excel and am interested in having a spreadsheet set up as a series of squares to act as an environment; this is the range down to row 37 and along to column BP. There are three buttons which call the following procedures held under worksheet ENV.

Sub GoButtonClick()

End Sub

Sub PauseButtonClick()

End Sub

Sub StopButtonClick()

End Sub

In the environment: 

Time flows in increments of one second, although calculations of the states of entities can occur in real time. Think of this as a turn-based system where everyone gets to move at the same time, in one-second increments.

There are plants, which grow at a defined rate up to a set limit. The plants are fixed but can spill over into adjacent squares and can seed to start new plants. The plants are indicated on the spreadsheet by a green square or squares (worksheet cells).

There are herbivores which eat the plants.

There are predators which eat the herbivores.

For the moment let's start with the plants and the environment.

Do you need any further detail from me to write the code?

And GPT4 produced the following. Upon obtaining my answers it proceeded to produce the whole programme almost perfectly, with less testing and fewer errors than I'd have managed, in seconds versus the hour or so it would likely have taken me. I am no programmer; I'm an Engineer and Manager, so my coding is always rather rusty.


To write the code, I'll need some additional information regarding the plants' behavior and how they interact with the environment. Here are some questions to help you provide more details:

  • How do you represent plants in the cells? Is it a specific text, color, or both?
  • What is the defined growth rate of plants? Is it the same for all plants, or do different plants have different growth rates?
  • What is the set limit for the growth of each plant? Is it based on the number of cells occupied or some other criteria?
  • How do plants spill over into adjacent squares? Is there a specific rule or probability for this?
  • How do plants seed to start new plants? Is there a specific rule or probability for this as well?
  • Should the plants be placed randomly at the start of the simulation, or do you have a specific starting configuration?

Once you provide this information, I can help you write the code for the plant behavior and environment in Excel VBA.

Let's bound this on a spectrum...

  • The chances of this being produced by random chance are astronomical, like monkeys randomly typing the first ten lines of A Midsummer Night's Dream.
  • The likelihood of producing this from memory is very low: who has ever tried this in Excel VBA? Who has ever produced an outline as I did, leaving these specific details unspecified? Furthermore, there are further questions it did not ask, such as how to deal with plants growing onto already occupied cells. Note, this is not the only example of this sort of behaviour that I and others regularly see.
  • This is clearly not an instance of Searle's Chinese Room. This is because mere symbolic transformation cannot produce such an apt set of questions.
  • This may be an instance of the following: the system has been exposed to many training examples in which unknown factors had to be elicited from a known framework of concepts, and in adapting to this during training it has learnt to build a structure of the known incoming relationships and then establish the missing elements. However, this is just one guess at a more general class: an analogue of human understanding.
  • The likelihood that this list of questions is produced by an exact analogue of human understanding is low because the information flow and physical architectures involved are so radically different.

Yet somehow GPT4 has managed to ask exactly the right questions to be clarified before it is in a position to begin coding. So let's rate the spectrum on a traffic-light scale: red, amber, green.

  • Random chance: unlikely.

  • Trick using memory: unlikely.

  • Chinese Room: unlikely.

  • Some form of machine process analogous to human understanding: maybe.

  • Human understanding: unlikely.

None of the above become green because we don't really know what is going on here. But it seems to me that in needing to achieve the goal of next-word-prediction the system has been forced to develop some form of process broadly analogous to human understanding, at least in terms of the relationship between inputs and outputs.

GPT4 is clearly not a parrot or an Auto-Complete-on-Steroids. But equally clearly it is not human (from wider experience of it). As with statements such as "Increasing CO2 doesn't cause planetary warming", I treat wrong-headed notions like the autocomplete dismissal as a time-saving device: those who voice them can safely be ignored, as they have nothing of use to add to the conversation. Put simply, they're "so wrong, they're not even wrong."

The interaction above clearly shows that GPT4 is doing something that can be called thinking. As with human thinking, the details of this have yet to be worked out.

BTW... GPT4 wrote the code, which required only minor changes from me to work, the largest of which was based on spatial awareness of what makes an adjacent cell, something at which a language model with no vision would be expected to do badly. However, I got busy with other things and never got it to write the animals' code. Maybe some day....
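For the curious, the adjacency fix amounts to nothing more exotic than this. Here is a sketch in Python rather than VBA (the original code isn't reproduced in this post, so the function name and the exact grid bounds are my own illustration, taken from the prompt's environment of rows 1-37 and columns A to BP, i.e. 1-68):

```python
# Illustrative grid bounds from the prompt: rows 1-37, columns A-BP (1-68).
MAX_ROW, MAX_COL = 37, 68

def neighbours(row, col):
    """Return the in-bounds cells adjacent to (row, col).

    "Adjacent" here means the up-to-eight surrounding cells (the Moore
    neighbourhood), clipped at the edges of the environment - the spatial
    detail the model initially got wrong.
    """
    cells = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the cell itself
            r, c = row + dr, col + dc
            if 1 <= r <= MAX_ROW and 1 <= c <= MAX_COL:
                cells.append((r, c))
    return cells
```

An interior cell has eight neighbours, an edge cell five, and a corner cell only three; getting that boundary clipping right is the "spatial awareness" in question.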
