One idea for fully legal
#ransomware-like software that could exploit
#AI code generation:
1. Do the initial research into how code is scavenged for ML training consumption.
2. Do the initial research on how to generate meaningless code with the property that it carries a signature that can later be detected.
3. Automatically create, in volume, Git repositories or fake profiles containing seemingly legitimate projects that are actually signature-carrying decoys.
4. License the projects under GPLv3.
5. Create a framework for scanning binaries so you can detect your signature in them.
6. Sue all parties whose use conflicts with the licensing.
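As a rough illustration of steps 2 and 5, here is a minimal sketch (all names and the signature bytes are hypothetical, invented for this example): it emits a plausible-looking C snippet that embeds a distinctive constant, and scans a compiled artifact for that same byte sequence.

```python
# Hypothetical signature: a distinctive constant chosen so that it is
# unlikely to occur by chance and survives compilation verbatim.
SIGNATURE = b"\xde\xad\xc0\xde\x13\x37\xfe\xed"

def generate_poison_source(name="checksum_seed"):
    """Emit seemingly meaningful C code that embeds the signature bytes
    as an initialized array, so they land verbatim in the binary."""
    hex_bytes = ", ".join(f"0x{b:02x}" for b in SIGNATURE)
    return (
        f"static const unsigned char {name}[] = {{ {hex_bytes} }};\n"
        f"unsigned char get_{name}(int i) "
        f"{{ return {name}[i % {len(SIGNATURE)}]; }}\n"
    )

def binary_contains_signature(blob: bytes) -> bool:
    """Scan a compiled artifact's raw bytes for the signature."""
    return SIGNATURE in blob

# A stand-in for a compiled binary that ingested the generated code:
fake_binary = b"\x7fELF" + b"\x00" * 64 + SIGNATURE + b"\x00" * 64
print(binary_contains_signature(fake_binary))  # True
```

A real version would need a signature robust against optimizing compilers (dead-code elimination would strip an unused constant), which is one of the open holes mentioned below.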
Some steps have open holes, but I think this pattern could potentially be made to work in some form.
The future of
#malware lies strongly in conning the AI. Why bother with social engineering (e.g. calling the company) and risking exposure when you can con the AI remotely over the Internet? AI doesn't just make producing bad-quality code easier - it also makes hacking systems many times easier.
Another angle would be to con the AI into picking a pattern that leaves a backdoor in the implementation. People who rely on Copilot aren't that likely to review the generated code, I'd guess.
#infosec