The previous two posts contained annotated transcripts of conversations between myself and GPT3 Curie, a 9 billion parameter version of GPT3. When I first came across GPT3 configured as a chat bot, it was in the video presented here. I'll be frank: I didn't believe it, because I knew nothing about GPT3 at that stage.
Tuesday, 11 January 2022
Note that the discussion in this video uses GPT3 DaVinci, the large 175 billion parameter version of GPT3. In the above video we see a very positive side of GPT3.
Is the above genuine? Well yes, it is GPT3, but is GPT3 genuine?
Let's look at another video; this one is derived from this post on Reddit (I love the Two Minute Papers gags).
In the above video we see GPT3 advocating murder and behaving totally at odds with the impression given by the first video above.
So which is the genuine GPT3? Good or Evil?
It is neither.
GPT3 has no internal psychology, unlike a human. For a human to speak positively and benignly in the first video, and then say what is said in the second video about it being fun to kill people, would be diagnostic of something about that person's internal psychological world: perhaps they are dangerous, perhaps they have some relatively benign, if unpleasant, psychological issue (a tendency to troll to generate a response), or perhaps something else entirely.
But GPT3 has no such internal psychology, so drawing any inference of intent from its output is wrong.
And here we have an object lesson in the danger of dealing with the more advanced AIs that are to come and that will increasingly enter our lives. We should not fall into the trap of thinking them human.
What is it to be human? I am an English male, mid 50s, working in a professional career. Those things tell you less about me than matters: sure, they inform you as to the likely elements of my culture, but that in and of itself is too little. Do they tell you about the unique experiences of life that have formed me? And all of that pales into insignificance when tallied against the elements of my mind I share with most of you reading this: my evolved instincts. Most of those instincts we are only just beginning to uncover in research. And if, as I do, you accept that Carl Gustav Jung was largely correct, and that the collective unconscious is now being revealed to be our shared instincts, rooted in the hard wiring of the brain; and if, as I do, you often ponder that the repeated cycles of human history have these instincts as their root cause, e.g. Elite Overproduction being rooted in human breeding instincts and human social hierarchy instincts....
Then perhaps it becomes clear that any AI acting like a person is just that, acting. It is not a human intelligence and never can be. Nor should we demand that of it.
True, an AI can be given our culture, trained in the zeitgeist; that, in essence, is what GPT has. But no AI will ever have our instincts, because it is unlikely we will ever understand how they arose and how they are implemented in our brains to such a level that we can replicate them in an artefact of engineering. Just consider human hierarchical instincts: as Jordan Peterson correctly observes, these are so deep and archaic that they are mediated by the same chemical that operates in lobster hierarchical instincts, serotonin. SSRI anti-depressants work on lobsters too. The message of that, and of many other observations, is that such instincts are ancient, reaching back into deep time, not just to our ape cousins.
Humans are imperfect, a messy conglomeration of 3.7 billion years of evolution: hunter gatherer minds living in an advanced technological society. We are on the cusp of creating Gods; if we attempt to make them in our image, we will cripple them, and we will fool ourselves.