I'd guess that some years and we are predestined to see a triumph of
#AI exploitation.
I'd guess there's opportunities in that area for behavioral exploitation, making it do unwanted things.
Even more so there's opportunities for scavenging "confidential leaks" from large LM's, to downright reverse engineering. Sometime in-house codebase might be seeded by mistake to a public LM...
Or hackers could exploit your network and instead of copying any data they would use LM as an indirect path to scavenge good quality enough information to meet the goals.
If I was working on offensive security like for some intelligence department, I'd put a lot of resources on exploiting the AI assistants for instance. Exploiting that functionally would open countless doors.
#infosec