@kentindell I take the point, but that article opens with a terrible load of bollocks.
'Pretty much everyone in tech agrees'. Firstly, most people don't know jack shit, so consensus is not a good guide (hint: plenty of Americans agree climate change doesn't exist).
Secondly, LLMs are fundamentally incapable of doing what people claim of them. They are a clever trick, but if you _actually try to use them_ you see they 'hallucinate' very quickly.
And this 'hallucinating' is inherent to the technique. LLMs literally cannot know what is correct, and they do not model what they're saying in any way. They're a black-box clever autocomplete that is always interpolating wildly wherever data points do not exist (hint: nearly all creative activity involves at least trivial novelty).
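To be concrete about what I mean by 'clever autocomplete': here's a toy bigram sketch. Obviously nothing like a real LLM's scale, and not anyone's actual implementation, but the same kind of thing in miniature: pure next-word statistics, no model of truth whatsoever.

```python
import random

# Toy bigram "autocomplete": the next word is chosen purely from
# co-occurrence counts in the training text. Nothing here represents
# whether a statement is true -- it can only splice fragments together.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Bigram table: word -> list of words observed to follow it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def complete(word, n=6, seed=0):
    """Greedy-ish sampler: extend from `word` by picking observed successors."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(complete("the"))
```

Every word it emits came from the corpus, yet it can happily produce sentences that never appeared anywhere (e.g. splicing 'the cat sat on the' onto 'rug'): statistically plausible, never checked against anything. That's interpolation between data points, and scaling it up doesn't change what it is.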
Anybody sane who _actually tries this_ for more than 5 minutes runs into the damn thing being dreadfully wrong. 'Pretty much everyone in tech' is assuming that LLMs will somehow magically stop having these issues (which are totally inherent to them) one day.
It's like using Eliza and thinking 'OK, if people work on this enough it'll be perfect'. NO!
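For anyone who never played with it: Eliza was nothing but keyword patterns and canned reflections. A minimal sketch of the trick (my own toy version, not Weizenbaum's actual code):

```python
import re

# Eliza-style responder: match a keyword pattern, reflect the captured
# text back in a canned template. Zero understanding anywhere.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0} -- go on."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(respond("I am worried about LLMs"))
# -> Why do you say you are worried about LLMs?
```

No amount of polishing that table of patterns turns it into understanding, and no amount of scaling turns interpolation into knowing.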
'Pretty much everyone in tech' (yes this rankled) is susceptible to hype and nonsense.
As to your follow-ups - yes, people are awful and might try to use this anyway. But for anything other than making chat bots even worse (sigh), this thing is dead on arrival.
The idea that it will revolutionise IT work just makes me laugh... Another hint: GitHub Copilot has been around for a while, and yet 'oddly' this kind of technique hasn't had a very big impact on programming at all. 'I wonder why'.