Open source is the largest public experiment in AI-assisted development. The code is visible. The contributions are traceable. The quality is debatable. And that is exactly what makes it useful as evidence.
The success stories are real. Ollama, the tool that lets developers run LLMs locally, was the fastest-growing open source project by contributor count in 2024.[1] n8n surpassed 150,000 GitHub stars in 2025 by building AI-native workflow automation.[4] These are not toy projects. They are production infrastructure used by thousands of companies.
But open source also shows what happens when AI is used badly.
The term "AI Slopageddon," coined by RedMonk analyst Kate Holterhoff in early 2025, describes the flood of low-quality, AI-generated contributions that are overwhelming open source maintainers.[5] Daniel Stenberg, who maintains cURL (a tool used by virtually every internet-connected device on earth), shut down his bug bounty program because only 5% of submissions were genuine vulnerabilities. The rest were AI-generated hallucinations.[6]
Jazzband, the collective that maintained dozens of Python projects, shut down entirely in 2026, citing unsustainable AI-generated spam as a primary driver.[7] WordPress rolled out AI contribution guidelines. LLVM introduced a human-in-the-loop policy requiring contributors to stand behind the quality of their code and to disclose AI usage.[6]
The pattern from open source is hard to ignore. AI-assisted code is everywhere. Some of it is excellent. Some of it is slop. The difference is not the tool. It is the person using it.