One thing I see mentioned on Reddit is that there's been a lot of AI junk in bug bounty reports lately, and that AIs currently seem to have trouble distinguishing between "this is a bug" and "this is an actual vulnerability."
I recently started bug hunting again, and asking ChatGPT questions is really frustrating (e.g. "how do I make an nmap port scan less aggressive?" -> "Sorry, I can't help you with that."). Right back to Google.
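For what it's worth, the answer it refused to give is mostly a matter of timing flags; something like the following keeps the scan polite (the target host here is just a placeholder):

    # -T2 selects nmap's "polite" timing template; --max-rate caps packets per second
    nmap -T2 --max-rate 10 -p 1-1000 target.example.com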
It also feeds into confirmation bias. I'll feel like I'm so close to a discovery, ask ChatGPT whether I've found sensitive data or a vulnerability, and it always says "yes", but 90% of the time it's not. I end up Googling anyway to find out what I really have.
I would never use ChatGPT for a report, or trust it with this sort of thing. You could ask it whether editing HTML with dev tools is a security vulnerability, and it would probably say "Yes, you should immediately report that. Would you like me to draft the report for you?"
It's good for writing some short scripts, though. Just don't let it know it's for a "bug bounty". I can't believe people just blindly trust it.
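The kind of script it handles fine is a quick throwaway helper like this (illustrative sketch only; the target URL and header list are placeholders):

    # Quick check for common security headers in a response.
    # Stdlib only, so it runs anywhere Python does.
    import urllib.request

    WANTED = [
        "Content-Security-Policy",
        "Strict-Transport-Security",
        "X-Content-Type-Options",
        "X-Frame-Options",
    ]

    def check_headers(url):
        with urllib.request.urlopen(url) as resp:
            for name in WANTED:
                print(f"{name}: {resp.headers.get(name, 'MISSING')}")

    check_headers("https://example.com")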
You need an LLM trained specifically for pentesting support. TFA links to a site advertising Burp AI [0]. It looks useful for bug bounties, but data policies can prevent pentesters from using it in their engagements.
[0]: https://portswigger.net/burp/ai
Honestly, I didn't even think to look. I'm so far behind in the LLM space. I also tend to ignore any AI a company is offering, but perhaps Burp AI is good.
> data policies can prevent pentesters from using it in their engagements.
I recently watched a Jason Haddix talk [0] where he mentioned that companies like Cloudflare are watching what pentesters do, so that they can better train their AI against such attacks.
[0]: https://www.youtube.com/watch?v=6SNy0u6pYOc
> there's a lot of AI junk recently in bug bounty reports
See: https://news.ycombinator.com/item?id=45330378
Another example: https://github.com/obsidianmd/obsidian-importer/issues/421#i...
Step 1: Hire a bug bounty service
Step 2: Mark all bug reports as "Works as intended"