Prior to AI, and outside the context of crypto, it was often not “worth it” to fix security holes; the cheaper play was to bite the bullet, claim victimhood, sue if possible, and hide behind compliance.
If automated exploitation changes that equation, so that even a low probability of success is worth attempting because pentesting is no longer bottlenecked by meatspace, it may incentivise writing secure code in some cases.
Perversely enough, AIs may crank out orders of magnitude more insecure code at the same time.
I hope this means fuzzing as a service becomes absolutely necessary. I think automated exploitation is a good thing for improved security overall, cracked eggs and all.
> Perversely enough, AIs may crank out orders of magnitude more insecure code at the same time
No perversity there, in fact.
If I'm understanding the paper correctly, they're assuming that defenders are also scanning deployed contracts with the intention of ultimately reporting bugs for bounties. And they get the $6,000/$60,000 numbers by assuming that the bug bounty in their model is 1/10th of the exploit value.
This kind of misses the point, though. In the real world, engineers would use AI to audit/test the hell out of their contracts before they're even deployed. They could also probably deploy the contracts to a testnet and try to actually exploit them as they would run in the wild.
So, while this is all obviously a danger for existing contracts, it seems like it would still be a powerful tool for testing new contracts.
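To make the "test it on a testnet" idea concrete, here's a minimal sketch of the kind of property-based fuzzing I mean, in Python with web3.py. Everything contract-specific is a placeholder assumption: the vault contract, its deposit()/withdraw() interface, the ABI path, and the invariant. The point is just that once a contract is running on a local testnet (anvil, hardhat node, etc.), you can hammer it with random transactions and check invariants before anything touches mainnet.

```python
"""Toy fuzz harness against a contract deployed on a local testnet.

Assumes a node (e.g. anvil or `hardhat node`) at http://127.0.0.1:8545
with unlocked, pre-funded accounts, and a hypothetical "vault" contract
exposing payable deposit() and withdraw(uint256). Addresses and paths
are placeholders, not anything from the paper.
"""
import json
import random

from web3 import Web3

VAULT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
VAULT_ABI = json.load(open("Vault.abi.json"))                 # placeholder path to compiled ABI

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
vault = w3.eth.contract(address=VAULT_ADDRESS, abi=VAULT_ABI)
accounts = w3.eth.accounts

# Ledger of what the vault *should* hold, updated only on successful txs.
expected_balance = w3.eth.get_balance(VAULT_ADDRESS)

for step in range(1_000):
    acct = random.choice(accounts)
    amount = random.randint(0, 10**18)  # 0 to 1 ether, in wei
    try:
        if random.random() < 0.5:
            vault.functions.deposit().transact({"from": acct, "value": amount})
            expected_balance += amount
        else:
            vault.functions.withdraw(amount).transact({"from": acct})
            expected_balance -= amount
    except Exception:
        # Reverted transactions are fine; that input was simply rejected.
        continue

    actual = w3.eth.get_balance(VAULT_ADDRESS)
    # Invariant: the vault never holds less ether than the ledger implies.
    assert actual >= expected_balance, (
        f"step {step}: vault holds {actual} wei, expected at least {expected_balance}"
    )

print("No invariant violations found (which proves nothing, but it's a start).")
```

Tools like Echidna and Foundry's built-in fuzzer already do the non-AI version of this; the interesting part is letting an AI auditor propose the input generators and the invariants themselves.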
> whether AI agents inevitably favor exploitation over defense.
/Technology/ inevitably favors exploitation over defense.