But what about the objection that a chatbot could never really understand what humans do, because those linguistic robots are just impulses on computer chips without direct experience of the world? All they are doing, after all, is predicting the next word needed to string out a response that will statistically satisfy a prompt. Hinton points out that even we don’t really encounter the world directly.
So those things can be … sentient? I don’t want to believe that Hinton is going all Blake Lemoine on me. And he’s not, I think. “Let me continue in my new career as a philosopher,” Hinton says, jokingly, as we skip deeper into the weeds. “Let’s leave sentience and consciousness out of it. I don’t really perceive the world directly. What I think is in the world isn’t what’s really there. What happens is it comes into my mind, and I really see what’s in my mind directly. That’s what Descartes thought. And then there’s the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?” Hinton goes on to argue that since our own experience is subjective, we can’t rule out that machines might have equally valid experiences of their own. “Under that view, it’s quite reasonable to say that these things may already have subjective experience,” he says.
“The idea is you don’t make everything digital,” he says of the analog approach. “Because every piece of analog hardware is slightly different, you can’t transfer weights from one analog model to another. So there’s no efficient way of learning in many different copies of the same model. If you do get AGI [via analog computing], it’ll be much more like humans, and it won’t be able to absorb as much information as those digital models can.”
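Hinton’s point about weight transfer is easier to see in code. The sketch below is purely illustrative and not drawn from Hinton’s own work: three digital copies of the same toy linear model each take a gradient step on their own shard of data, and the copies are then merged by simple weight averaging, in the spirit of federated averaging. The merge is only meaningful because digital copies are bit-for-bit identical; on analog hardware, where each device’s physics differ slightly, a weight vector tuned on one unit would not mean the same thing on another, and this step would fail.

```python
import numpy as np

rng = np.random.default_rng(0)

# One toy linear model y = w . x, trained by gradient descent on mean squared error.
def sgd_step(w, X, y, lr=0.1):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Ground-truth weights and three disjoint data shards, one per model copy.
true_w = np.array([3.0, -2.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    shards.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

# Digital case: every copy starts each round from the *same* weights, learns
# on its own shard, and the results are merged by averaging. This only works
# because the copies are exact replicas sharing one parameter representation.
# An analog copy, whose weights are entangled with its particular hardware,
# could not participate in this merge.
w = np.zeros(2)
for _ in range(100):
    w = np.mean([sgd_step(w, X, y) for X, y in shards], axis=0)

print(w)  # converges toward [3.0, -2.0]
```

In this (hypothetical) setup, each round extracts learning from three times as much data as any single copy saw, which is the efficiency Hinton says analog AGI would have to forgo.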
