The Vibe Code Report, April 2026

We scanned 100,000 AI-generated codebases. Only 1 percent are production ready.

The most comprehensive analysis of what AI coding tools actually ship: static analysis across every public repo we could identify as vibe-coded.

103,653

repos scanned

53%

average production readiness

1%

production ready

83%

with critical or significant gaps

Data pulled from public GitHub repos identified as AI-generated via Tier 1 markers (Bolt, Lovable, Cursor, Windsurf, Replit Agent, Aider, Continue, Claude Code). Scanned with 22 production readiness checks. Static analysis only.

The score distribution

How the repos landed across the four tiers.

Critical (0-35)

6%

Pre-launch risk

Significant Gaps (36-65)

77%

Will break under real users

Getting Close (66-85)

17%

Needs hardening before production

Production Ready (86-100)

1%

Actually shippable

These numbers reflect the 100K (Apr 2026) corpus and refresh automatically at build time when the next dataset lands.

The top failure modes

When AI generates code fast and nobody checks the boring stuff, these are the gaps.

93%

No logging / observability

When something breaks, nobody knows why. Debugging is guesswork.

91%

No timeouts on external HTTP calls

API calls hang forever when the vendor fails. Every call is a potential outage.
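A minimal sketch of the missing pattern, assuming a Node/TypeScript codebase. `withTimeout` is an illustrative name, not something from the report's scanner: it rejects any promise that does not settle within a deadline, so a dead vendor cannot hang the caller.

```typescript
// Illustrative timeout guard: reject if the wrapped promise does not
// settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${ms} ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Usage sketch (URL is a placeholder):
// const res = await withTimeout(fetch("https://vendor.example/api"), 5_000);
```

On Node 18+ the same effect can be had natively with `fetch(url, { signal: AbortSignal.timeout(5_000) })`.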

86%

Missing API auth guards

Endpoints are unprotected. Anyone on the internet can call them.
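What a missing auth guard looks like when added back, as a framework-agnostic sketch shaped like Express middleware. The types and the `requireAuth` name are our illustration, not the report's code; real apps would verify a session or signed token rather than compare a static string.

```typescript
// Minimal request/response shapes so the sketch is self-contained.
type Req = { headers: Record<string, string | undefined> };
type Res = { status: (code: number) => { json: (body: unknown) => void } };

// Illustrative guard: reject any request without a valid bearer token.
function requireAuth(validToken: string) {
  return (req: Req, res: Res, next: () => void): void => {
    const header = req.headers["authorization"] ?? "";
    if (header === `Bearer ${validToken}`) {
      next(); // token checks out, continue to the handler
    } else {
      res.status(401).json({ error: "unauthorized" }); // reject everyone else
    }
  };
}
```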

85%

No error boundaries

A single component error crashes the whole page.

76%

Missing DB indexes on hot filters

Filtered queries scan the whole table once data grows.

75%

No environment separation / hardcoded secrets

Development credentials leak into production builds.

66%

No rate limiting

One hostile user can drain a free tier or knock the API offline.
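The fix can be as small as a fixed-window counter per client key. This in-memory sketch is illustrative only (the class name and shape are ours, not the scanner's); production apps would more likely reach for a library or a shared store so limits survive restarts and multiple instances.

```typescript
// Illustrative fixed-window rate limiter: allow `limit` requests per
// `windowMs` milliseconds for each key (e.g. a client IP).
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now }); // start a new window
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit; // block once the window is exhausted
  }
}
```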

64%

No CI/CD config

Every deploy is manual and ad hoc. No gate between local and production.

Across 22 checks total. See the methodology for the full list.

This is not a tool-specific problem

Code from every major AI coding tool averages between 51 and 60 percent production readiness. The pattern is consistent across the category.

Tool            Average score
Lovable         57%
Claude Code     58%
Cursor          59%
Replit Agent    52%
Bolt.new        51%
Windsurf        58%
Trae            57%
Roo             60%
Continue        56%
Aider           56%

Full methodology and raw data below.

How we did this

How we found the repos

We searched public GitHub for repositories containing AI tool configuration markers. Any repo with one or more of these was included:

  • .bolt/ directory (Bolt.new)
  • .lovable/ or lovable.toml (Lovable)
  • .cursorrules, .cursor/, .cursorignore (Cursor)
  • .windsurf/, .windsurfrules, windsurf.json (Windsurf)
  • .replit manifest (Replit Agent)
  • .aider.* files (Aider)
  • .continue/ (Continue.dev)
  • .cline/, .clinerules (Cline)
  • .roo/, .roomodes (Roo)
  • .trae/ (Trae)
  • .codex/ (Codex)
  • CLAUDE.md, GEMINI.md, AGENTS.md, copilot-instructions.md (agent config files)
  • Repository topic or README mentions of vibe coding, vibe-coded, or specific tools
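The file-marker part of that search reduces to a simple presence check per repo. This sketch uses an illustrative subset of the markers listed above; the report's actual detection code is not published here, so names and structure are our assumption.

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Illustrative subset of the Tier 1 markers listed in the methodology.
const AI_MARKERS = [".bolt", ".cursorrules", ".replit", "CLAUDE.md", "AGENTS.md"];

// True if a checked-out repo contains any marker file or directory.
function hasAiMarker(repoRoot: string): boolean {
  return AI_MARKERS.some((marker) => existsSync(join(repoRoot, marker)));
}
```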

What we measured

22 production readiness checks covering:

  • Error boundaries, logging, observability
  • API auth, input validation, rate limiting
  • Timeout protection, retry logic, circuit breakers
  • Database transactions, migrations, query safety
  • Environment separation, secret handling
  • CI/CD, tests, reproducible builds
  • Security headers, CORS configuration

Static analysis only. No code execution. Public repos only.
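To make "static analysis only" concrete: a check like the timeout one can be approximated as a heuristic over source text, flagging files that call fetch without any abort mechanism in sight. This is our illustrative sketch, not the scanner's implementation, and a real check would parse the AST rather than pattern-match.

```typescript
// Illustrative static check: flag a source file that calls fetch()
// but never references a timeout/abort mechanism.
function missingFetchTimeout(source: string): boolean {
  const usesFetch = /\bfetch\s*\(/.test(source);
  const hasAbort = /AbortSignal|AbortController|signal\s*:/.test(source);
  return usesFetch && !hasAbort;
}
```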

What is in scope

  • Only JS/TS repos (Next.js, React, Node, Express, etc.) were scored. Python, Go, Rust, and other language repos were skipped.
  • Private repos excluded.
  • Empty repos excluded.
  • Repos over 300 MB excluded (infra limit).

Open methodology. We will publish the scanner source and the full dataset for researchers who want to verify or extend the analysis.

Find your repo

Paste any GitHub URL and we will show you what we found.

Not in the dataset? We will run a fresh scan at useastro.com/score.

Install Score as a GitHub Action

Run the full 22-check Score on every pull request. Free. Takes 60 seconds to install.

# .github/workflows/score.yml
name: Astro Score
on: [pull_request]
jobs:
  score:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: useastro/score-action@v1

We are building what comes after Score

Score diagnoses the gaps. Astro is the platform that prevents them. An AI app builder that works like a real engineering team: requirements, planning, architecture, design, validation, execution. Built on a production-grade foundation.

Join the waitlist