AI Is Already in Your Codebase. The Question Is Whether You Chose It.

Tools & Workflows · 7 min read · by Girish Koliki

95% of developers use AI tools weekly. 4.3 million AI repositories exist on GitHub. The data from open source proves that AI-assisted development is not a trend. It is the baseline. The only question left is whether you are being intentional about it.

GitHub now hosts over 4.3 million AI-related repositories. In the twelve months to August 2025, developers created 693,000 new projects using LLM SDKs. That is a 178% year-over-year increase.[1]

This is not a forecast. This is what already happened.

More than 1.1 million public repositories actively use large language model SDKs.[1] The Pragmatic Engineer's 2026 survey found that 95% of developers use AI tools at least weekly, with 75% using AI for half or more of their work.[2] GitHub Copilot alone has over 20 million active developers.[3]

If you write software for a living, AI is already in your workflow. Or it is in the workflow of the person competing against you for your next role, your next contract, or your next promotion.

  • 4.3M AI-related repositories on GitHub
  • 178% year-over-year growth in LLM-focused projects
  • 95% of developers using AI tools at least weekly

§ The open source proof

Open source is the largest public experiment in AI-assisted development. The code is visible. The contributions are traceable. The quality is debatable. And that is exactly what makes it useful as evidence.

The best AI-assisted contributions are real. Ollama, the tool that lets developers run LLMs locally, was the fastest-growing open source project by contributor count in 2024.[1] Projects like n8n surpassed 150,000 GitHub stars in 2025 by building AI-native workflow automation.[4] These are not toy projects. They are production infrastructure used by thousands of companies.

But open source also shows what happens when AI is used badly.

The term "AI Slopageddon," coined by RedMonk analyst Kate Holterhoff in early 2025, describes the flood of low-quality, AI-generated contributions that are overwhelming open source maintainers.[5] Daniel Stenberg, who maintains cURL (a tool used by virtually every internet-connected device on earth), shut down his bug bounty program because only 5% of submissions were genuine vulnerabilities. The rest were AI-generated hallucinations.[6]

The Jazzband Python project collective shut down entirely in 2026, citing unsustainable AI-generated spam as a primary driver.[7] WordPress rolled out AI contribution guidelines. LLVM introduced a human-in-the-loop policy requiring developers to stand behind the quality of their code and disclose AI usage.[6]

The pattern from open source is hard to ignore. AI-assisted code is everywhere. Some of it is excellent. Some of it is slop. The difference is not the tool. It is the person using it.

§ You will not be replaced by AI. You might be replaced by someone who uses it.

Jensen Huang, Nvidia's CEO, said it plainly at the Milken Institute Global Conference in 2025: "You're not going to lose your job to an AI, but you're going to lose your job to someone who uses AI."[8]

He was not being poetic. He was describing what the data already shows.

Claude Code went from zero to the most-used AI coding tool in eight months.[2] 75% of developers at small companies now use it. Staff-plus engineers are the heaviest adopters of AI agents, with 63.5% using them regularly.[2]

This is not early adopter territory. This is mainstream professional practice. And the gap between developers who use AI well and those who do not is widening fast.

The developers getting the most from these tools are not the ones generating the most code. They are the ones who understand what the code does, who can review it critically, and who know when to reject what the AI suggests. That is the same judgment gap that separates the useful open source contributions from the slop.

What separates useful AI-assisted work from slop

  • The developer understands the codebase, not just the prompt
  • Every AI-generated change is reviewed as critically as hand-written code
  • The developer can explain and defend their contribution when questioned
  • Testing is treated as a requirement, not an afterthought
  • The AI output is a starting point, not a finished product
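As a concrete illustration of the last two points, here is a minimal sketch of what that review discipline looks like in practice. The `paginate` helper is hypothetical, not taken from any project mentioned above; it stands in for the kind of plausible-looking function an AI assistant might suggest, where the input validation and edge-case tests are exactly what a critical reviewer adds before accepting it.

```python
# Hypothetical example: an AI assistant suggests a helper to paginate
# a list. It looks correct at a glance. Treating it as a starting
# point means adding validation and edge-case tests during review,
# rather than merging the first suggestion as-is.

def paginate(items, page, page_size):
    """Return the items on a 1-indexed page of the given size."""
    if page < 1 or page_size < 1:
        # Added in review: the original suggestion silently returned
        # surprising slices for zero or negative arguments.
        raise ValueError("page and page_size must be >= 1")
    start = (page - 1) * page_size
    return items[start:start + page_size]

# Tests written before accepting the code, covering the boundaries
# a quick demo would never exercise.
assert paginate([1, 2, 3, 4, 5], 1, 2) == [1, 2]
assert paginate([1, 2, 3, 4, 5], 3, 2) == [5]   # partial final page
assert paginate([], 1, 10) == []                 # empty input
assert paginate([1, 2], 5, 2) == []              # page past the end
```

The point is not the function itself. It is that the tests and the guard clause are the developer's contribution, and they are what make the AI's output defensible when questioned.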

§ What this means if you lead an engineering team

If you are an engineering leader, the open source data is a preview of what is coming to your internal repositories. The same forces that are overwhelming open source maintainers with low-quality PRs will show up in your team's code review queue. The same tools driving 178% growth in AI projects will be in the hands of every developer you hire next.

The question is not whether your team should use AI. They already do. 85% of developers regularly use AI tools for coding, debugging, and code review.[9] The question is whether you have done anything to shape how they use it.

That means having a clear position on which tools are approved, how AI-generated code should be reviewed, and what quality bar applies. It means recognising that the bottleneck has shifted from writing code to reviewing and shipping it. We covered the practical side of this in detail in our playbook for AI-assisted development, including the three layers that separate teams capturing real productivity gains from those drowning in unreviewed pull requests.

The teams that get this right treat AI as an accelerator for experienced engineers, not a shortcut around engineering judgment. Everyone else ends up with more code, more noise, and the same shipped output they had before.

§ Start here

Three things you can do this week.

If you are an individual developer and you are not using AI coding tools daily, start. Pick one. Claude Code, Cursor, GitHub Copilot. It does not matter which. What matters is that you build the habit of working with AI as a thinking partner, not a code printer. Use it to reason through problems, draft tests, and explore unfamiliar codebases. Then review everything it produces with the same scrutiny you would apply to a junior developer's pull request.

If you lead a team, have the conversation about AI-generated code out loud. Set expectations for how it should be reviewed. Create space for engineers to experiment without pressure to ship every AI-generated line. And read our engineering leader's playbook for AI-assisted development if you have not already. It covers the three layers that determine whether AI tools actually improve your team's output or just increase the pile of unreviewed code.

If you are still on the fence, consider this. The developer next to you is not. And the one applying for your next open role is not either.

A note from fusecup

At fusecup, we work with engineering teams navigating exactly this shift. Whether you are trying to figure out the right AI tooling strategy or working out how to restructure workflows around AI-generated code, we are happy to talk it through. No pitch. Just a practical conversation about what might work for your team.

§ References

  1. GitHub, Octoverse 2025 Report (October 2025). github.blog
  2. The Pragmatic Engineer, AI Tooling for Software Engineers in 2026 (March 2026). newsletter.pragmaticengineer.com
  3. CoinLaw, GitHub Statistics 2026. coinlaw.io
  4. Open Data Science, Top Ten GitHub Agentic AI Repositories in 2025 (December 2025). opendatascience.com
  5. Kunal Ganglani, AI Slopageddon: How AI-Generated Code Is Destroying Open Source (March 2026). kunalganglani.com
  6. LeadDev, Open Source Has a Big AI Slop Problem (February 2026). leaddev.com
  7. The New Stack, Open Source Maintainers Are Drowning in AI-Generated Pull Requests (April 2026). thenewstack.io
  8. CNBC, Nvidia CEO Jensen Huang: You'll Lose Your Job to Somebody Who Uses AI (May 2025). cnbc.com
  9. Modall, AI in Software Development: 25+ Trends & Statistics (April 2026). modall.ca