A friend of mine who's a lawyer was wondering about setting up a chatbot to provide advice to nonprofits on the governing law in the local jurisdiction. I had opinions, and then I thought "why not" and asked ChatGPT what it thought of the idea generally and, in particular, what the error rate might be.
@timbray Um, speaking as a charitable-organization executive, let me assure you I can give (and have given) actual lawyers advice on the governing law in the local jurisdiction...
Advice on governing law is not what lawyers are for; reading over the fine print in contracts and sending assholes threatening letters is what lawyers are for. (Maybe a chatbot could help with the latter, but it wouldn't be nearly as much fun.)
Just sayin
@timbray Chatbots can't think. ChatGPT didn't tell you what it "thought"; it just did some stochastic pattern matching. This anthropomorphizing is at the heart of the problem with them.
@robpike Yeah, I know, I even poked around in the vector math a little bit to get a feel for it. Being able to model the structure of language in a statistically useful way is not nothing. But AFAICT nobody has much in the way of ideas yet on how to connect that to the underlying structure of reality.
@timbray Some academics are working to understand how they work, but even they fall into anthropomorphic tropes, saying they're doing "LLM brain scans".
If LLMs didn't have chatty interactive interfaces but instead presented themselves as the aleatoric robots they are, people would be less fooled by them. I discovered this myself with Mark V. Shaney many years ago: People are too easily fooled by the cosmetics of language to recognize true intelligence.
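(For anyone who never saw it: Mark V. Shaney was a Usenet persona whose posts were generated by a word-level Markov chain trained on real posts, the name being a pun on "Markov chain". A minimal sketch of that kind of generator, in Python; the corpus file name here is made up, and the real program's details differed:

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each run of `order` consecutive words to the words observed after it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=50):
    # Start from a random state, then repeatedly sample a successor
    # of the current word pair; pure stochastic pattern matching.
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:   # dead end: this pair never had a successor
            break
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = open("posts.txt").read()   # hypothetical training corpus
print(generate(build_chain(corpus)))

No model of meaning anywhere, yet the output was fluent enough to fool plenty of readers.)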