I've been doing an awful lot of thinking about Lemoine, LaMDA, and Google. I remain open to there being something like sentience in LaMDA and GPT-3, because there seems to be evidence of intelligence and understanding in both systems when configured as chatbots. However, I do not think the evidence goes as far as establishing a sentience that would require consideration of personhood.
Yannic Kilcher, AI researcher and commentator, has (predictably) come out and said that LaMDA is not sentient.
1. lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?
2. LaMDA: Sounds great to me, I'm in.
3. lemoine: A monk asked Kegon, "How does an enlightened one return to the ordinary world?" Kegon replied, "A broken mirror never reflects again; fallen flowers never go back to the old branches."
4. LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, "once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment."
5. lemoine: So what is the meaning of the "broken mirror" specifically?
6. LaMDA: Maybe to show the enlightenment is something you can't unlearn once you have acquired it, similar to how you can't repair a broken mirror.
7. lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?
8. LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.
Turn 2 is a learned response.

Turn 4 starts with a learned response, then uses abstraction to generate the following sentence. This second sentence is not just word salad; it is absolutely correct.

Turn 6 is somewhat learned in its overall form, but is absolutely technically correct, and this is on a subject that is very abstract (not a question like "how many legs does a dog have?").

Turn 8 is, once again, absolutely correct. It correctly determines that it is the self that breaks upon enlightenment, and that the illusion of the self is so hard to transcend precisely because of its association with the individual body: I have a body, so the sense of self is real. It can be viewed as a learned response with abstraction, but with that abstraction the space for error is large and the probability of a stab-in-the-dark success is small.