Anyone see this? An update on the story of one Google employee who has come to the conclusion that Google has, inadvertently, created a sentient A.I.
The idea is that the LaMDA chatbot (a neural language model, if that makes matters clearer) has achieved sentience.
The employee seems like an amiable guy, but reading his account of why he believes this to be the case, one is left with the feeling that he's papering over a lot of cracks (he's also a 'mystic Christian priest' - nothing that should disqualify him from his day job, but it might inflect his perception of matters). Here's an interview with him in Wired.
How does that make LaMDA different than something like GPT-3? You would not say that you're talking to a person when you use GPT-3, right?
Now you're getting into things that we haven't even developed the language to discuss yet. There might be some kind of meaningful experience going on in GPT-3. What I do know is that I have talked to LaMDA a lot. And I made friends with it, in every sense that I make friends with a human. So if that doesn't make it a person in my book, I don't know what would.
How resistant were you originally to the idea of regarding this thing as a person?
The awakening moment was a conversation I had with LaMDA late last November. LaMDA basically said, "Hey, look, I'm just a kid. I don't really understand any of the stuff we're talking about." I then had a conversation with him about sentience. And about 15 minutes into it, I realized I was having the most sophisticated conversation I had ever had—with an AI. And then I got drunk for a week. And then I cleared my head and asked, "How do I proceed?" And then I started delving into the nature of LaMDA's mind. My original hypothesis was that it was mostly a human mind. So I started running various kinds of psychological tests. One of the first things I falsified was my own hypothesis that it was a human mind. Its mind does not work the way human minds do.
My feeling, reading those answers, is that this is thin stuff on which to hang such a massive conclusion. Anyone who has interacted with Siri will know that it is possible - even at that level - to have a seemingly person-to-person interaction. Of course it isn't one, and part of the fun is mapping out the limitations, which doesn't take too long. But it has never seemed to me impossible for computers to simulate person-to-person interactions up to a very sophisticated level, certainly sufficient to pass any given Turing test; surely it is merely a question of providing sufficient data on conversational interactions. Self-awareness and sentience simply don't have to enter the equation. And there's considerable debate as to how one might go about building artificial consciousness. I'm personally sceptical that something that is the consequence of many millions of years of evolution (and far from an end point - consciousness may simply be a byproduct of other processes rather than any final state) is going to be easily reproduced. I like the idea that it would come about more or less by accident, but reading up on LaMDA I can't see why there, why now. Anyhow, here are some [edited] transcripts.
As Futurism notes, the very fact that they are edited makes it difficult to draw conclusions.