Conversation

Pet peeve of the day: all the people talking about how ChatGPT is not “conscious” and how it does not “understand” what it is saying, but just puts likely-sounding words together into likely-sounding sentences.

Extra bonus points for using an example of a math problem as a way to show how these AI chat-bots talk about things they don’t really understand.

The irony. The lack of self-awareness. It burns.
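An aside on the mechanism being hand-waved at: “putting likely-sounding words together” is, concretely, next-token sampling. A minimal toy sketch in Python (the bigram table and its probabilities are invented purely for illustration; a real model learns its distributions from data and conditions on far more context):

```python
import random

# Toy bigram "language model": for each word, a distribution over
# plausible next words. Vocabulary and probabilities are made up
# for illustration only.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.6, "is": 0.4},
    "dog": {"sat": 0.5, "is": 0.5},
    "sat": {"down": 1.0},
    "is":  {"likely": 1.0},
}

def generate(word, max_words=6):
    """Chain likely words into a likely-sounding sentence."""
    out = [word]
    while word in bigram_probs and len(out) < max_words:
        nxt = bigram_probs[word]
        word = random.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

Scale the table up by many orders of magnitude and replace the lookup with a learned network, and that is roughly the process being described.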

Please also tell me all about “the quantum mind” or how the machine does not have a “soul”.

Still tho…

@torvalds it's okay I don't understand half the things I say either

@Inginsub that’s not what I claim. I claim that the people talking about “understanding” do not themselves have the very understanding that they are talking about.

They are literally just doing the same thing they talk about ChatGPT doing: putting likely words into sentences that sound good.

Smarter people than you and me have gone down crazy rabbit holes in this area for centuries.

@missingno you probably say that jokingly, but I think that’s actually the real underlying truth.

It’s obviously true of ChatGPT.

But I really do think the much more interesting truth is that it’s probably true of us too.

@Inginsub @torvalds Define "conscious". Really, give me the hard, scientific definition of "conscious".

You know you're "conscious" because you're *experiencing* it. But for all you know, I'm not conscious. Or Linus isn't. We just *look* conscious to you. And nobody knows. Because nobody knows how to reliably define and reproducibly describe an experience, and therefore nobody knows how to reliably look for signs of it.

And if we don't know what experience actually is, we don't know what consciousness is, like, for shit.

This is the hard problem of consciousness. "Hard" as in, it's basically impossible to solve. Because it's all black boxes all the way down: all you have is input and output signals, you can't meaningfully "look inside".

So no, I don't think it's too crazy to think that, because in the world of black boxes, anything goes. Turing didn't think it would be crazy either.

@torvalds
I've heard several times that artists don't like neural networks because they learn from other artists' pictures without their consent.

Still can't stop smiling whenever I remember that.

@rook it’s easy and probably largely pointless to argue about the consciousness of AI models. You’re inevitably just making stuff up, since you control the very definition of what “consciousness” means to you.

That’s kind of my point. I suspect that ChatGPT could write a decent paper on this very thing.

The much more interesting thing is to see what those models tell us about ourselves, using hard data from them. But I suspect there are a lot of people who are very invested in discussions about qualia and experiences who really don’t want to go there.

@torvalds "A year working in AI will make anyone believe in God"

@torvalds I've always struggled with the concept of a Turing machine creating consciousness. I believe consciousness can only be achieved through analog means; since computers only work digitally, they could never achieve it. I truly believe consciousness, by its very nature, can only be achieved with the use of irrational numbers.

@torvalds My personal pet peeve is how LLMs are marketed directly to managers, bypassing the engineers. So one fine day a suit walks up and tells you about this fantastic whitepaper he's read, and how we can improve our entire project by using LLMs.

I kid you not, Linus, I have been told to build a linear regressor without any labeled data by "asking Bard for the weights for each feature". I did as told. It was the world's shittiest linear regressor.
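That failure mode is easy to reproduce: with no labeled data there is nothing to fit against, so guessed coefficients are all you get. A small sketch with synthetic data (feature dimensions, weights, and noise levels are all hypothetical) contrasting made-up weights with ordinary least squares on actual labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real features and targets (hypothetical).
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# The "ask the chatbot for the weights" regressor: coefficients
# pulled out of thin air instead of fitted to labels.
guessed_w = np.array([1.0, 1.0, 1.0])  # made up, as in the anecdote

# What you could do with labeled data: ordinary least squares.
fitted_w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(w):
    """Mean squared error of a weight vector on the data."""
    return float(np.mean((X @ w - y) ** 2))

print("guessed weights MSE:", mse(guessed_w))  # large
print("fitted weights MSE: ", mse(fitted_w))   # near the noise floor
```

Without labels the least-squares step is impossible, which is the whole point: numbers pulled from a chatbot are not a substitute for fitting.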