@CedricLevasseur @danslerush @AuthorJMac I have one useful application for AI that is also my main use case:
1. I have a GitHub issue with 100-ish comments: some relevant, some random nagging or generally irrelevant shit, plus links.
2. I ask the AI to aggregate the data: not to add or remove anything, just to give me a report of what is there so I don't have to bother scrolling through all the comments (rough script sketch below).
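For what it's worth, here is a minimal sketch of how that workflow could be scripted, assuming the GitHub REST API and an OpenAI-compatible chat endpoint. The owner/repo/issue number, model name, and env var names are placeholders I made up, not anything from this thread:

```python
# Sketch: pull every comment from a GitHub issue, then ask an LLM to
# report what is in the thread without adding or removing information.
import os
import requests

OWNER, REPO, ISSUE_NO = "someuser", "someproject", 1234  # hypothetical

def fetch_comments(owner, repo, issue_no, token):
    """Page through the GitHub REST API and collect all comment bodies."""
    comments, page = [], 1
    while True:
        r = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_no}/comments",
            params={"per_page": 100, "page": page},
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/vnd.github+json"},
        )
        r.raise_for_status()
        batch = r.json()
        if not batch:  # empty page means we've seen everything
            return comments
        comments.extend(f'{c["user"]["login"]}: {c["body"]}' for c in batch)
        page += 1

def summarize(comments, api_key):
    """Ask the model to aggregate the thread, nothing more."""
    prompt = (
        "Summarize the following GitHub issue comments. Do not add or "
        "remove information; just report what is there, grouped by topic, "
        "and skip off-topic nagging.\n\n" + "\n---\n".join(comments)
    )
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "gpt-4o-mini",  # placeholder; any capable model works
              "messages": [{"role": "user", "content": prompt}]},
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    comments = fetch_comments(OWNER, REPO, ISSUE_NO, os.environ["GITHUB_TOKEN"])
    print(summarize(comments, os.environ["OPENAI_API_KEY"]))
```

The key part is the prompt: it constrains the model to aggregation only, which is exactly why this use case works where generation doesn't.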
I don't get much out of AI creating anything "new" for me, because I do genuinely new stuff. It actually costs me time: even when the AI generates something, I still have to go through it, and I end up with a disappointing result that I need to rewrite myself.
I've also tested these theories empirically, because I don't believe in belief systems.
E.g. with a platform bug, it proposes solutions that don't work because there are too many parameters beyond the source code per se, like the environment the thing runs in and how people actually want it to behave. However much feedback I give the AI, it eventually ends up proposing the same dysfunctional solution I started with.
But my first example is actually useful. I also know a startup that built a system for customer service: it doesn't replace the agents, it quickly surfaces context for the person taking the call, which makes the work less stressful and gets customers the best possible outcome. Nobody likes talking to an AI on the phone.
Whenever a new tech shows up, it's supposedly going to take all our jobs, but eventually it settles down and integrates. E.g. if AI really did take everything over, it would start feeding on its own output, because everything would be AI-generated. Some studies have already started to show evidence that this feedback loop can cause an Alzheimer's-like condition in LLMs (researchers call it "model collapse").
I did my homework on this. Thus, not worried.
:-)