Pet peeve of the day: all the people talking about how ChatGPT is not “conscious” and how it does not “understand” what it is saying, but just putting likely-sounding words together into likely-sounding sentences.
Extra bonus points for using an example of a math problem as a way to show how these AI chat-bots talk about things they don’t really understand.
The irony. The lack of self-awareness. It burns.
Please also tell me all about “the quantum mind” or how the machine does not have a “soul”.
@Inginsub that’s not what I claim. I claim that the people talking about “understanding” do not themselves have the very understanding that they are talking about.
They are literally just doing the same thing they talk about ChatGPT doing: putting likely words into sentences that sound good.
Smarter people than you and me have gone down crazy rabbit holes in this area for centuries.
@missingno you probably say that jokingly, but I think that’s actually the real underlying truth.
It’s obviously true of ChatGPT.
But I really do think the much more interesting truth is that it’s probably true of us too.
@Inginsub Define "conscious". Really, give me the hard, scientific definition of "conscious".
You know you're "conscious" because you're *experiencing* it. But for all you know, I'm not conscious. Or Linus. We just *look* conscious to you. And nobody knows. Because nobody knows how to reliably define and reproducibly describe an experience, and therefore nobody knows how to reliably look for signs of it.
And if we don't know what experience actually is, we don't know what consciousness is, like, for shit.
This is the hard problem of consciousness. "Hard" as in, it's basically impossible to solve. Because it's all black boxes all the way down: all you have is input and output signals, you can't meaningfully "look inside".
So no, I don't think it's too crazy to think that, because in the world of black boxes, anything goes. Turing didn't think it would be crazy either.
@torvalds
I've heard several times that artists don't like neural networks because they are trained on other artists' pictures without their consent.
Still can't stop smiling when remembering that.
@rook it’s easy and probably largely pointless to criticize the consciousness of AI models. You’re inevitably just making stuff up, since you control the very definition of what “consciousness” is to you.
That’s kind of my point. I suspect that ChatGPT could write a decent paper on this very thing.
The much more interesting thing is to see what those models tell us about ourselves, using hard data from the models themselves. But I suspect there are a lot of people heavily invested in discussions about qualia and experiences who really don't want to go there.