I found this funny:
https://www.anthropic.com/news/disrupting-AI-espionage
Quote:
if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense.
Let's use AI to protect us from AI; it is imperative that we manage the threat we are creating with the same tool that is generating it! :/ I feel like we are getting close to mob-protection-money levels of silliness.
The whole thing is both true and ***.
What is true is that this probably happened, but not for the reasons they are acting like it did. The reason it happened is that my industry, which has spent two decades trying to automate people out of existence, largely does ***-tier work. The result is that *** code (of which there is a LOT) is out there, and it's often easily exploited. This was happening long before AI, back when you had web scanners running around blindly trying to SQL inject every parameter they could find and, shocker, it worked. A lot. It worked because poor coding practices are prevalent and, to this day, there is still a lot of shitty code. My industry has tried to automate this testing, and the automated results have always been ***, but there are very few people capable of actually reading code and finding bugs.

I've reviewed several applications where we found the password authentication flat out didn't work, in the sense that you could just enter a password that was too long and it would let you in, or, in one egregious case, the authentication mechanism involved returning the entire table of usernames/passwords to the client for verification. These had been tested by other companies for years before us and no one picked up on it, and those aren't even the worst things we've found in applications our peers had already reviewed. This type of *** is ripe for the picking by automated attack tools; AI just makes it easier to draft proofs of concept, scripts, and tooling. It might slightly raise the ability to identify logic-based issues like the above (as opposed to what tools were doing before, which amounts to grepping for string patterns or blindly throwing inputs at things and hoping for a result), but not as much as they are acting like it does.
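To give a flavor of the "too-long password logs you in" class of bug, here's a minimal, made-up sketch. It is not the code we reviewed, just an illustration of the kind of inverted logic check that can sit undiscovered through years of testing:

```python
# Hypothetical sketch -- not the actual client code, just the bug class.

MAX_PASSWORD_LENGTH = 64

def check_password(supplied: str, stored: str) -> bool:
    # The intent was presumably to reject oversized input, but the early
    # return hands back True instead of False, so any password longer than
    # the limit authenticates successfully.
    if len(supplied) > MAX_PASSWORD_LENGTH:
        return True  # BUG: should be False (or raise), not "let them in"
    return supplied == stored

# A scanner (or an LLM) blindly stuffing long strings into the login form
# trips over this immediately:
assert check_password("A" * 500, "correct horse battery staple") is True
```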
Where it's *** is that these tools hit a hard stop with any target that has actually been tested by someone competent (which, admittedly, is a lot to ask these days), and the idea that they are going to replace teams of experienced people is just fantasy. It won't get there, ever. If your code is actually decent, and by that I mean you aren't exposing vulnerabilities a 12 year old can find, then there is no real risk here beyond what existed before AI. Once you get into vulnerability categories like memory corruption, it becomes even more useless.
It's funny timing, too, because we recently tested an AI-driven solution for a task it should, by all definitions, be good at. Yet it missed over 90% of the issues our people found, miscategorized several of them, and ultimately created more work.