From Scan to Fix: Closing the Remediation Loop

Written by the Rafter Team

Finding vulnerabilities is the easy part. The hard part—the part where most security programs quietly fail—is getting those findings fixed. Industry data shows that 45% of enterprise vulnerabilities remain unpatched after 12 months, and fixing third-party flaws takes a median of 11 months. The gap between "scan" and "fix" isn't a technology problem. It's a workflow problem. Findings land in dashboards nobody checks, tickets nobody prioritizes, and backlogs nobody triages.
Closing this loop means connecting scanner output directly to where developers already work—in pull requests, in their IDE, in the tools they use to ship code. According to Edgescan's 2025 Vulnerability Statistics Report and Veracode's State of Software Security 2024, the remediation gap is widening—not because scanners are failing, but because the findings-to-fix pipeline is broken.
The Remediation Gap
Security scanners have never been more capable. Organizations run SAST, DAST, SCA, and AI-powered code review across their repositories, and the result is a firehose of findings. The problem isn't detection—it's what happens next.
The numbers paint a stark picture:
| Metric | Value | Source |
|---|---|---|
| Enterprise vulnerabilities unpatched after 12 months | 45.4% | Edgescan 2025 |
| Median time to fix third-party flaws | 11 months | Veracode SoSS 2024 |
| Applications with security debt older than 1 year | 42% | Veracode SoSS 2024 |
| Median days to close half of internet-facing vulns | 361 days | Edgescan 2025 |
| New CVEs published in 2024 | 40,009 | NIST NVD |
Meanwhile, exploitation moves fast. Known Exploited Vulnerabilities (KEVs) listed in CISA's catalog get weaponized within days, but the median remediation time for a KEV is 174 days—and more than 60% are resolved after CISA's deadline.
The result is structural: vulnerability debt accumulates faster than teams can pay it down. Every quarter, new findings pile onto an already unworkable backlog. This isn't a staffing problem you can hire your way out of—it's a workflow design failure.
Why Developers Don't Fix Findings
Developers aren't ignoring security findings out of negligence. They're responding rationally to broken incentives and bad tooling.
Alert fatigue
A mid-size application might generate hundreds of scanner findings per week. When everything is flagged, nothing is prioritized. Developers learn to tune out the noise—and critical findings get buried alongside informational warnings.
Findings lack context
Most scanner output tells you what is wrong but not how to fix it. A SAST tool might flag "potential SQL injection on line 47" without explaining the data flow, the risk in context, or what a correct fix looks like. Developers stare at a finding, don't know what to do with it, and move on.
No clear ownership
Whose job is it to fix a vulnerability in a shared library? The team that imported it? The platform team? The security team that found it? When ownership is ambiguous, findings sit in limbo. Snyk's 2024 Open Source Security Report found that vulnerability remediation rates have plateaued at roughly 5% per month—a pace that can't keep up with the rate of new CVE disclosures.
Competing priorities
Feature deadlines are concrete and immediate. Security findings feel abstract and deferrable. Without a forcing function—a PR check that blocks merge, a ticket that shows up in sprint planning—remediation always loses to "ship the feature."
The compound effect of deferred remediation is severe. Veracode found that teams who fix flaws fastest reduce critical security debt by 75%—from 22.4% of applications down to just over 5%. Speed of remediation is the single strongest predictor of security posture.
Connecting Findings to Workflows
The fix for the remediation gap isn't better scanners—it's better integration. Findings need to arrive where developers already work, in formats they can act on immediately.
PR comments: where developers already review code
The highest-leverage integration point is the pull request. When a scanner posts findings as inline PR comments—on the exact lines that introduced the vulnerability—developers see them in the same review flow they already use for code quality. The finding has context (the diff), timing (before merge), and a natural owner (the PR author).
```yaml
# Example: GitHub Actions workflow with inline PR comments
name: Security Scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run SAST scan
        run: |
          # Scanner posts findings as PR review comments
          # on the exact lines that introduced the issue
          rafter scan --format=github-pr --fail-on=high
```
This pattern works because it matches existing developer habits. You don't need to train anyone to check a new dashboard—the finding shows up in a workflow they already complete multiple times per day.
IDE integration: fix before you commit
Catching vulnerabilities in the IDE—before the code even reaches a pull request—is the fastest feedback loop. Language server integrations and editor plugins can highlight vulnerable patterns as you type, the same way a linter catches syntax errors.
The trade-off is precision. IDE-time scanning has less context than a full repository scan (no cross-file data flow, no dependency resolution), so it catches a narrower class of issues. But for common patterns—hardcoded secrets, obvious injection sinks, known-vulnerable API calls—the near-instant feedback is worth it.
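To make the trade-off concrete, here is a minimal sketch of the kind of pattern matching an IDE plugin or pre-commit hook can do without cross-file context. The regexes and function names are illustrative assumptions, not part of any particular tool:

```python
import re

# Illustrative patterns for the narrow class of issues IDE-time scanning
# catches well: hardcoded secrets and credential-shaped literals.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
]

def scan_line(line: str) -> bool:
    """Return True if a single line matches a known-bad pattern."""
    return any(p.search(line) for p in SECRET_PATTERNS)

def scan_staged(lines):
    """Flag offending (line_number, text) pairs; usable as a pre-commit check."""
    return [(i, l) for i, l in enumerate(lines, 1) if scan_line(l)]
```

Because each line is checked in isolation, there is no data-flow or dependency resolution here, which is exactly why this feedback can be near-instant.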
Ticket auto-creation with severity-based routing
For findings that don't block a specific PR—like vulnerabilities in existing code discovered during a periodic scan—automated ticket creation with severity-based routing ensures nothing falls through the cracks:
- Critical/High: Create a ticket in the current sprint, assigned to the team that owns the affected code
- Medium: Add to the next sprint's backlog with the relevant team tagged
- Low/Informational: Aggregate into a weekly security digest—no individual tickets
The key is routing, not just creation. A ticket dumped into a generic backlog is barely better than a dashboard entry. A ticket assigned to the right team, in their sprint board, with a severity label that matches their SLA—that gets fixed.
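The routing policy above can be expressed directly in code. This is a sketch, assuming ownership has already been resolved (e.g. from a CODEOWNERS file); the returned dicts stand in for real ticketing API calls:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    severity: str    # "critical" | "high" | "medium" | "low" | "info"
    owner_team: str  # resolved from code ownership, e.g. CODEOWNERS

def route(finding: Finding) -> dict:
    """Severity-based routing: critical/high to the current sprint,
    medium to the next sprint's backlog, everything else to a digest."""
    if finding.severity in ("critical", "high"):
        return {"action": "ticket", "sprint": "current", "assignee": finding.owner_team}
    if finding.severity == "medium":
        return {"action": "ticket", "sprint": "next", "assignee": finding.owner_team}
    return {"action": "digest", "assignee": finding.owner_team}
```

The important property is that every branch carries an assignee: no finding is created without an owner.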
AI-Powered Fix Suggestions
The most promising development in vulnerability remediation is AI-generated fix suggestions. Instead of just telling developers what's wrong, modern tools can show them what "fixed" looks like.
How it works
When a scanner identifies a vulnerability, an LLM can analyze the vulnerable code in context—understanding the surrounding logic, the framework conventions, and the specific vulnerability class—and generate a concrete code fix. This collapses the most time-consuming part of remediation: figuring out how to fix the issue.
The full loop: vulnerable code to verified fix
Here's a concrete example of a SQL injection vulnerability flowing through a scan-to-fix loop:
```python
# ✗ Vulnerable: string concatenation in SQL query
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = '{user_id}'"
    return db.execute(query)
```
A scanner flags this as a SQL injection risk. An AI-powered tool generates the corresponding fix:
```python
# ✓ Secure: parameterized query prevents SQL injection
def get_user(user_id):
    query = "SELECT * FROM users WHERE id = %s"
    return db.execute(query, (user_id,))
```
The fix is specific to the codebase (using the same db.execute interface), addresses the exact vulnerability class (parameterized queries for injection), and is immediately applicable—a developer can review and merge it in seconds rather than researching the correct remediation pattern.
Quality and limitations
AI-generated fixes are most reliable for well-understood vulnerability classes: SQL injection, XSS, path traversal, hardcoded secrets, insecure deserialization. For these patterns, the fix is often mechanical—replace string concatenation with parameterized queries, replace innerHTML with textContent, replace hardcoded credentials with environment variable lookups.
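One of those mechanical patterns, replacing a hardcoded credential with an environment variable lookup, looks like this in Python. `DB_PASSWORD` is an illustrative variable name, not from any specific codebase:

```python
import os

# ✗ Vulnerable pattern: credential embedded in source
# DB_PASSWORD = "s3cr3t-value"

# ✓ Mechanical fix: read from the environment, fail loudly if missing
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

Failing loudly matters: a silent empty-string fallback would trade a secrets finding for a harder-to-diagnose runtime bug.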
Where AI fixes get less reliable:
- Business logic flaws: An LLM can't understand your authorization model from code alone
- Complex data flow: Multi-file taint chains where the fix requires architectural changes
- Framework-specific edge cases: When the "right" fix depends on undocumented framework behavior
The human review requirement remains. AI-generated fixes should be treated like AI-generated code—plausible and usually correct, but never deployed without review. The value isn't eliminating human judgment; it's eliminating the research time that makes remediation slow.
Prioritization Frameworks
Not all vulnerabilities are equal. A critical-severity CVE in a dependency that's never loaded at runtime is less urgent than a medium-severity injection in your authentication endpoint. Effective remediation requires a prioritization framework that goes beyond CVSS scores.
Why CVSS alone fails
CVSS measures the theoretical severity of a vulnerability—how bad it could be in the worst case. It doesn't account for:
- Whether the vulnerable code is actually reachable in your application
- Whether a public exploit exists
- What data or systems are exposed if the vulnerability is exploited
- Whether compensating controls (WAF, network segmentation) reduce the practical risk
A CVSS 9.8 in a test utility that never runs in production is less urgent than a CVSS 6.5 in your payment processing pipeline. Teams that prioritize by CVSS alone waste remediation effort on low-actual-risk findings while high-actual-risk findings wait their turn in the queue.
A practical prioritization matrix
| Factor | Weight | How to assess |
|---|---|---|
| Reachability | High | Is the vulnerable code path actually executed in production? Static analysis can determine this through call graph analysis. |
| Exploitability | High | Does a public exploit or proof-of-concept exist? Check CISA KEV catalog and exploit databases. |
| Blast radius | High | What data or systems are exposed? A vuln in an auth service is worse than one in a logging utility. |
| CVSS score | Medium | Use as a tiebreaker, not the primary signal. |
| Fix complexity | Low | How hard is the fix? Easy fixes with high impact should jump the queue. |
This matrix surfaces findings that are both exploitable and impactful—the ones that actually threaten your application in production. Everything else can be scheduled, batched, or deprioritized without increasing real risk.
Reachability analysis is the single highest-leverage addition to a prioritization framework. If a vulnerability exists in code that's imported but never called, it's not exploitable—regardless of its CVSS score. Tools that perform static call graph analysis can automatically filter out unreachable vulnerabilities, often reducing actionable findings by 30-60%.
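The matrix above can be sketched as a scoring function. The weights here are illustrative, not calibrated, and the finding fields (`reachable`, `known_exploit`, `blast_radius`, `easy_fix`) are assumptions about what your scanner and call-graph analysis can supply:

```python
def priority_score(finding: dict) -> float:
    """Weighted score mirroring the matrix: reachability gates everything,
    exploitability and blast radius weigh high, CVSS is a tiebreaker,
    and easy fixes get a small boost so they can jump the queue."""
    if not finding["reachable"]:
        return 0.0  # unreachable code path: schedule or batch, don't escalate
    score = 3.0 if finding["known_exploit"] else 0.0  # e.g. a CISA KEV hit
    score += 3.0 * finding["blast_radius"]            # 0.0 (logging) to 1.0 (auth)
    score += 0.1 * finding["cvss"]                    # tiebreaker, not the signal
    score += 1.0 if finding["easy_fix"] else 0.0
    return score

def triage(findings):
    """Actionable queue, highest priority first; unreachable findings drop out."""
    actionable = [f for f in findings if priority_score(f) > 0]
    return sorted(actionable, key=priority_score, reverse=True)
```

Under this scoring, the CVSS 9.8 in an unreachable test utility scores zero, while the CVSS 6.5 in a reachable payment path with a public exploit goes to the front of the queue, exactly the inversion CVSS-only ranking misses.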
Measuring Remediation Performance
You can't improve what you don't measure. Four metrics tell you whether your remediation process is actually working:
Mean time to remediate (MTTR) by severity
Track the average time from finding discovery to verified fix, broken out by severity level. Industry benchmarks from the DORA research program show that elite-performing teams treat security fixes with the same urgency as production incidents—measuring recovery in hours, not months.
Target MTTR by severity:
- Critical: < 48 hours
- High: < 7 days
- Medium: < 30 days
- Low: Next quarterly review
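Computing MTTR by severity and checking it against the targets above is straightforward once findings carry discovery and fix timestamps. A minimal sketch, assuming findings arrive as (severity, discovered_at, fixed_at) tuples with `None` for still-open findings:

```python
from datetime import datetime, timedelta

# Targets from the list above; "low" is handled by review, not an SLA clock
TARGETS = {
    "critical": timedelta(hours=48),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
}

def mttr_by_severity(findings):
    """Mean time from discovery to verified fix, per severity.
    Open findings (fixed_at is None) are excluded from the mean."""
    buckets = {}
    for severity, discovered_at, fixed_at in findings:
        if fixed_at is None:
            continue
        buckets.setdefault(severity, []).append(fixed_at - discovered_at)
    return {sev: sum(ds, timedelta()) / len(ds) for sev, ds in buckets.items()}

def meets_targets(mttr: dict) -> dict:
    """True/False per severity that has both a target and data."""
    return {sev: mttr[sev] <= target for sev, target in TARGETS.items() if sev in mttr}
```

Excluding open findings keeps the metric honest in one direction only, which is why it should be read alongside fix rate rather than on its own.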
Fix rate
What percentage of discovered findings actually get resolved? A fix rate below 50% means your backlog is growing—you're accumulating vulnerability debt faster than you're paying it down. Track this monthly and set a floor: no team should drop below 70% fix rate for high-severity findings.
Reintroduction rate
Do fixed vulnerabilities come back? If a developer fixes a SQL injection in one endpoint but introduces the same pattern in a new endpoint next sprint, you have a training problem, not a tooling problem. Track the percentage of findings that recur within 90 days of being closed.
Trend analysis
Individual metrics fluctuate. The trend tells the real story. Plot MTTR, fix rate, and reintroduction rate monthly:
- MTTR trending down + fix rate trending up = your process is working
- MTTR flat + fix rate flat = you've plateaued—time to change something
- MTTR trending up + fix rate trending down = backlog is winning—escalate
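The three rules above reduce to a small classifier over monthly series. This sketch uses the crudest possible notion of "trending" (last value versus first); a real dashboard would smooth over month-to-month noise first:

```python
def classify_trend(mttr_series, fix_rate_series):
    """Apply the three rules: series are monthly values, oldest first.
    MTTR in days, fix rate as a fraction of findings resolved."""
    mttr_delta = mttr_series[-1] - mttr_series[0]
    fix_delta = fix_rate_series[-1] - fix_rate_series[0]
    if mttr_delta < 0 and fix_delta > 0:
        return "working"    # process is working
    if mttr_delta > 0 and fix_delta < 0:
        return "escalate"   # backlog is winning
    return "plateaued"      # time to change something
```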
Building a Fix Culture
Tooling and metrics create the infrastructure for remediation. Culture determines whether people actually use it.
Security champions
Embed one developer per team who owns the security relationship. This person isn't a full-time security engineer—they're a developer who spends 10-20% of their time triaging findings, mentoring teammates on secure patterns, and serving as the liaison between the security team and the development team. Google's vulnerability management approach relies heavily on distributed ownership—pushing remediation responsibility to the teams closest to the code.
Fix sprints
Dedicate one sprint per quarter (or two days per month) exclusively to vulnerability remediation. No features, no tech debt—just clearing the security backlog. This works because it removes the competing-priorities problem: during a fix sprint, security is the priority.
Teams that run regular fix sprints consistently achieve 80%+ fix rates on high-severity findings. Teams that "fit security into regular sprints" average 40-50%.
Celebrate fixes, not just findings
Most security programs celebrate discovery: "We found 200 vulnerabilities this quarter!" That's the wrong metric to celebrate. Celebrate the team that went from 30-day MTTR to 7-day MTTR. Celebrate the team with the lowest reintroduction rate. Celebrate the developer who wrote a shared utility that eliminated an entire class of vulnerabilities.
The behavior you reward is the behavior you get. If you only reward finding bugs, you'll get great at finding bugs. If you reward fixing them, you'll get great at fixing them.
The Rafter Fix Loop
Rafter's workflow is built around closing the remediation loop—not just finding vulnerabilities, but making them fixable in the tools developers already use.
The closed-loop workflow:
- Scan: Rafter analyzes your repository for vulnerabilities—SAST-style pattern matching, dependency analysis, and AI-powered code review working together
- Find: Vulnerabilities are identified with full context—the vulnerable code, the data flow, and the specific risk
- Explain: Each finding includes a plain-language explanation of what's wrong and why it matters, written for developers, not auditors
- Generate fix prompt: Rafter generates a context-aware fix suggestion that developers can apply directly in their AI coding tool—Cursor, Lovable, or any LLM-powered editor
- Apply: The developer reviews and applies the fix in their normal workflow
- Rescan: Rafter re-scans the repository to verify the fix resolved the vulnerability without introducing new issues
- Verify: Clean scan confirms the loop is closed
This workflow is designed for the way developers actually work in 2026—with AI coding assistants handling much of the code generation. Instead of asking developers to context-switch into a security dashboard, Rafter meets them in their editor with a fix they can apply, verify, and ship.
The scan-to-fix loop collapses remediation from days (research the vulnerability, understand the codebase context, figure out the right fix, test it, verify it) to minutes (review the suggested fix, apply it, confirm the rescan is clean). That's the difference between a vulnerability that sits in a backlog for 11 months and one that gets fixed in the same PR where it was introduced.
Scan your repos at rafter.so and see the fix loop in action.
Conclusion
The remediation gap isn't inevitable. It's the predictable result of treating security findings as someone else's problem—shipping them to dashboards instead of workflows, measuring discovery instead of resolution, and hoping developers will find time between feature sprints to clear the backlog.
Closing the loop requires three things:
- Integration: Findings arrive where developers work—PR comments, IDE warnings, sprint tickets—not in a separate security portal
- Actionability: Every finding includes enough context to fix it, ideally with an AI-generated fix suggestion that can be applied in seconds
- Measurement: Track MTTR, fix rate, and reintroduction rate. Set targets. Hold teams accountable.
Fixing your remediation gap—an 8-step checklist:
- Route scanner findings to PR comments, not dashboards
- Implement severity-based ticket routing with clear team ownership
- Add reachability analysis to your prioritization framework
- Enable AI-powered fix suggestions for common vulnerability classes
- Establish MTTR targets by severity (critical < 48h, high < 7d)
- Run quarterly fix sprints dedicated to clearing the security backlog
- Track reintroduction rate to catch recurring patterns
- Measure and celebrate fix rate, not just finding count
The teams that close this loop—connecting scan to fix to verified resolution—don't just have fewer vulnerabilities. They ship faster, with more confidence, because security becomes part of the development workflow instead of an obstacle to it.