

No, I disagree.
It is not one person’s doing. That is the deflection.
I will not downplay the effect of this by pretending they are the only one involved. Every maintainer who has locked or approved any of their changes so far is equally at fault here. In fact, one of the linked articles even stated that the primary reason they locked it was that they didn’t like the amount of coverage it got. This is a failure of the community as a whole, not of one individual.
Edit for clarification: By failure, I’m talking more about the projects that are humoring it and actually going through with it without considering the potential side effects of blanket-applying that.
Considering that a handful of these are currently locked, or posted with a “we don’t know if we’re going to do this yet” stance, I haven’t quite put them in that same category, but they’re rapidly approaching it.


If the bug was actually legitimate and was verified, I don’t think it’s a good idea to just wait until someone actually experiences it.
Of course, this depends on the severity of the bug as well. In the case of this article, he refused to submit anything until he had actually verified it, but he was definitely using the AI as the origin of discovery.
I would prefer those types of reports over blanket AI vulnerability reports that aren’t proven. Discrediting a valid bug just because it wasn’t human-generated may lighten your workload, but it comes at the cost of your software’s security and reliability.
I agree I would throw out reports that are AI-driven and unproven, but if someone did an actual PoC and demonstrated real risk, I wouldn’t care whether it originally came from AI or not. I would just assign it based on severity like normal.