Scroll through crypto Twitter in any given bear market week and you will find at least one thread declaring audit contests dead, broken, or captured. The argument usually involves some combination of falling prize pools, AI-generated spam, platform consolidation, and researcher burnout. I have read most of these takes. I find them mostly unconvincing.
Audit contests remain one of the most powerful tools we have for securing protocols. Low contest counts during market downturns are not evidence that the model is failing. They are evidence that the market is down. Those are different claims, and conflating them is what produces all the panic.
Why contests still crush it
The researcher density argument
A single contest right now - with so few contests running - can still pull in thousands of researchers. The pool hasn't shrunk; it has concentrated. Out of that field, you can realistically get 5–10 genuinely elite auditors: the kind of people who find six-figure bugs on Immunefi, who have read every EIP that touches your integration, who have seen the exact pattern you're using fail in a different protocol six months ago.
You also get hundreds of mid-level auditors who are hungry, fast, and trying to make a name for themselves. Some of the sharpest findings in contest history came from researchers on their third-ever contest, competing harder than anyone because they had something to prove. Private audits rarely replicate that energy. You get two or three experienced researchers who are excellent - but you don't get the statistical coverage, and you don't get the adversarial mindset of someone competing for a leaderboard.
More eyes under real competitive pressure is still the most reliable way to find vulnerabilities that a small, comfortable team will miss.
AI agents are changing the game - not ending it
Every serious builder is now training their own AI agents for security research. Bug bounties and contests have become the perfect public arena to benchmark them against real codebases and real adversaries. I fully expect a high volume of AI-generated reports in upcoming contests, and most of that volume will be noise: duplicate findings, false positives, shallow pattern-matching that any experienced judge will discard in seconds.
But here is the thing: the same competitive pressure that produces AI spam also incentivises teams to build better agents. And better agents occasionally find genuinely novel issues. The signal-to-noise ratio will get worse before it gets better, but dismissing the entire format because of AI spam is like dismissing bug bounties because researchers submit low-effort reports. The platform moderation problem is solvable. The underlying value of the format is not in question.
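To make "shallow pattern-matching" concrete, here is a deliberately naive toy scanner in Python - my own illustration, not any platform's or team's tooling. It flags every value-bearing low-level call as reentrancy, so a function that already zeroes state before the external call gets reported right alongside the real bug. That second report is exactly the kind of submission a judge closes in seconds.

```python
import re

# Two hand-written Solidity fragments for illustration: one genuinely
# vulnerable, one already safe (state is zeroed before the external call).
VULNERABLE = """
function withdraw() external {
    (bool ok, ) = msg.sender.call{value: balances[msg.sender]}("");
    require(ok);
    balances[msg.sender] = 0; // state updated AFTER the call
}
"""

SAFE = """
function withdraw() external nonReentrant {
    uint256 amount = balances[msg.sender];
    balances[msg.sender] = 0; // state updated BEFORE the call
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
}
"""

# The shallow heuristic: any low-level value-bearing call == "reentrancy".
CALL = re.compile(r"\.call\{value:")

def naive_scan(source: str) -> list[str]:
    # No dataflow, no guard detection, no ordering check - just grep.
    return [
        f"line {i}: potential reentrancy"
        for i, line in enumerate(source.splitlines(), start=1)
        if CALL.search(line)
    ]

for name, src in (("vulnerable", VULNERABLE), ("safe", SAFE)):
    print(f"{name}: {naive_scan(src) or 'no findings'}")
# Both fragments get flagged. The false positive on the safe function
# is the noise judges have to triage away, multiplied by thousands.
```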
The talent pipeline runs through contests
There is no faster path from "interested in smart contract security" to "working in smart contract security" than competing in public audit contests. New researchers level up by competing against real auditors on real codebases, under real time pressure, with real money on the line. You cannot replicate that in a course or a CTF.
Top performers get scouted. Firms, protocols, and security teams monitor leaderboards constantly. The contest format is, functionally, the most efficient hiring pipeline the industry has. Killing it - or allowing it to atrophy - removes the most meritocratic entry point the field has ever had.
The honest drawbacks
Bear market economics are brutal for new protocols
New protocols and early-stage startups face difficult funding realities during bad market conditions. Token treasuries shrink in dollar terms, VC appetite dries up, and teams that raised at peak valuations suddenly can't justify a $200k prize pool when their runway is eighteen months. So they don't run the contest. Or they run it with a pool too small to attract the researchers who would actually find the critical bugs, which is arguably worse.
Existing projects respond similarly. Launches get delayed. Non-essential work pauses. Security budgets, which are always the first line item challenged in a cost review, get deferred until conditions improve. This is economically rational and genuinely problematic. A protocol that delays its audit to save money is a protocol that ships with unknown vulnerabilities. The bear market doesn't make the code safer.
Volume creates its own problems
Thousands of submissions mean thousands of duplicates, false positives, low-effort reports, and - increasingly - AI-generated spam. The triage burden on judging teams is severe and often underestimated. Experienced judges burn out. High-quality findings get buried under a pile of mediocre ones. Final report deadlines stretch unpredictably, sometimes by months, which erodes trust from both researchers and sponsors.
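For a sense of what even the simplest automated pre-screen looks like, here is a minimal dedup sketch using Python's standard-library difflib - a toy of my own, assuming plain-text report bodies, not any platform's actual pipeline.

```python
from difflib import SequenceMatcher

def cluster_duplicates(reports: list[str], threshold: float = 0.8) -> list[list[int]]:
    """Greedily group report indices whose text similarity exceeds threshold.

    O(n^2) pairwise comparisons, so this is a judging-queue aid, not
    something you'd run on raw contest volume without blocking/hashing first.
    """
    clusters: list[list[int]] = []
    for i, text in enumerate(reports):
        for cluster in clusters:
            representative = reports[cluster[0]]
            if SequenceMatcher(None, representative, text).ratio() >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])  # no close match: start a new cluster
    return clusters

reports = [
    "Reentrancy in withdraw(): balance is zeroed after the external call.",
    "Reentrancy in withdraw: the balance is zeroed after the external call",
    "Oracle price can be stale; no heartbeat check on the feed.",
]
print(cluster_duplicates(reports))  # [[0, 1], [2]]
```

Even a crude pass like this collapses the most obvious copy-paste duplicates before a human ever opens them; the hard cases - same root cause, different wording - still need a judge.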
This is a real structural problem, and it's one that platforms have been slow to solve. Better submission tooling, stricter pre-screening, tiered reward structures that penalise duplicate hunters - all of these help at the margin. None of them have fully resolved the signal-to-noise problem at scale. It remains the format's most significant weakness.
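To make the duplicate-penalty idea concrete, here is a toy payout model. The 0.9 ** (n - 1) decay is loosely in the spirit of formulas contest platforms have published; the specific weights and numbers are my own illustration, not any platform's actual rules.

```python
def finding_weight(severity_slices: float, n_dupes: int) -> float:
    """Total weight a finding contributes to the pot split.

    The 0.9 ** (n - 1) decay shrinks the whole finding's share as more
    people report it, so piling onto an obvious issue pays less per head
    than a plain 1/n split would: duplicate hunting is actively penalised.
    """
    return severity_slices * 0.9 ** (n_dupes - 1)

def payout_per_reporter(pot: float, findings: list[tuple[float, int]]) -> list[float]:
    """findings: (severity_slices, n_dupes) per finding.
    Returns each finding's per-reporter payout."""
    weights = [finding_weight(s, n) for s, n in findings]
    total = sum(weights)
    return [pot * w / total / n for w, (_, n) in zip(weights, findings)]

# Illustrative only: $100k pot, one unique high (10 slices), one high
# reported by 8 people, one unique medium (3 slices).
pot = 100_000
findings = [(10, 1), (10, 8), (3, 1)]
for (s, n), per in zip(findings, payout_per_reporter(pot, findings)):
    print(f"slices={s:>2} dupes={n} -> ${per:,.0f} per reporter")
```

On these made-up numbers, the unique high pays out roughly $56k while each of the eight duplicate reporters takes home about $3.4k - the incentive to find what nobody else will find is built into the arithmetic.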
Bottom line
I still cannot imagine why any serious protocol wouldn't run their project through a contest right before deployment - especially now, when contest counts are low. Low supply means researcher attention is less fragmented. The elite auditors who would normally be spread across four simultaneous contests are concentrated on yours. If your prize pool is competitive, you are getting more focused attention per dollar than you would at the peak of a cycle.
Bad market conditions put pressure on new and existing projects alike. That pressure is real and the consequences for security budgets are real. But audit contests aren't going anywhere. They're hibernating, and they're evolving. The AI disruption will shake out into new norms. The bear market will end. The researchers aren't going anywhere either - if anything, the lean periods are when the serious ones separate themselves from the cycle-traders.
The panic misreads a cyclical slowdown as a structural collapse. It isn't. Run the contest.