The Slopocalypse
In the age of limitless code, you would expect open source to thrive. Instead, it is being buried alive by the tools it helped create.
In 2026, artificial intelligence can write code faster than any human who has ever lived. A developer can describe what they want in plain English and have a working application in hours. Coding agents run autonomously through entire projects, generating thousands of lines of code overnight. The era of limitless code has arrived.
You would think this would be a golden age for open source software, the collaborative model where anyone can contribute improvements to shared code. More contributors. More fixes. More progress. The rising tide lifts all boats.
The opposite is happening. Across the open source ecosystem, maintainers are shutting down projects, closing their doors to outside contributions, and walking away. Not because the technology failed. Because the economics of contribution just broke.
To understand why this matters, you need to understand what open source actually holds up. When you load a website, the server responding to your request is almost certainly running Linux, an open source operating system that powers the majority of the world’s web servers and virtually all cloud computing1. When you stream a video, make a payment, or open a banking app, dozens of open source components are doing the work underneath: the web frameworks, the encryption libraries, the database engines, the networking tools. A recent audit found that 97% of commercial codebases contain open source components, with an average of 911 per application2. Open source is not a niche community of hobbyists. It is the foundation of the modern internet. And the people maintaining it are under a kind of pressure that did not exist two years ago.
The cathedral
Open source software was built like a cathedral. Not quickly, not cheaply, but carefully, over years and decades, by skilled craftspeople who understood the structure. Every good contribution was a stone laid with care. The mechanism that kept the structure sound was the pull request: a proposed change submitted by a developer for the maintainer to review. Think of it as a draft edit on a shared document. The maintainer reads the proposed change, checks that it won’t break anything, tests it against edge cases, and decides whether to accept it. That review process is what separates working software from software that works until it doesn’t.
For decades, this system functioned because contributing had friction. You had to understand the codebase. You had to reproduce a bug. You had to write a clear explanation of what you changed and why. You had to risk looking foolish in public if your solution was wrong. That friction was not a flaw in the process. It was the process. It filtered for quality and intent.
Now imagine a thousand people show up overnight with cement guns, spraying material at the walls. Each one thinks they are building. But the stonemasons, the maintainers who keep the structure sound, have to inspect every addition, chisel off the bad ones, and keep the cathedral from cracking. They cannot work fast enough. Some are walking away. And the people with the cement guns cannot understand why the cathedral isn’t getting built faster.
What is breaking is not open source itself. It is the assumption that anyone can contribute at scale without overwhelming the people responsible for reviewing it.
The economics are brutal. It takes a developer roughly sixty seconds to prompt an AI agent to generate a pull request. It takes a maintainer an hour to carefully review those changes, verify they do not break obscure edge cases, and ensure they align with the project’s direction3. AI helps maintainers too: faster triage, better test generation, automated refactoring. But the productivity gains on the review side are linear. The volume increase on the contribution side is exponential. When you multiply hundreds of contributors all using AI assistants against a handful of reviewers, you do not get a better project. You get a maintainer who quits.
The body count
The stonemasons are already leaving.
In January 2026, Daniel Stenberg, the creator of curl, a piece of software that runs on an estimated fifty billion devices worldwide, shut down the project’s six-year bug bounty program. In the first twenty-one days of the year, the project received twenty AI-generated security reports. None identified an actual vulnerability. The program that had paid $86,000 for 78 genuine discoveries over six years had become a magnet for fabricated submissions from people who let an AI generate the report without checking whether the problem was real4.
Mitchell Hashimoto, the founder of HashiCorp and one of the most respected figures in open source, banned AI-generated code from his project Ghostty entirely. Not because he opposes AI. He uses it himself daily. Because the submissions from outside contributors had become, in his words, unusable5.
Steve Ruiz, the creator of tldraw, a widely used drawing library, went further. He auto-closed all external pull requests. Not some. All of them. After discovering that AI-generated issues were being fed to AI tools that then generated pull requests based on hallucinations, he concluded the entire contribution pipeline had become unreliable6.
Godot, the open source game engine that surged in popularity after Unity’s 2023 pricing controversy, now has 4,681 open pull requests. Its co-founder and core maintainer, Rémi Verschelde, described the situation plainly: “A recent rise in AI slop code submissions is draining and demoralizing.” Maintainers now second-guess every pull request from new contributors. The AI-generated submissions look plausible at first glance but contain broken logic, fundamental errors, or code that makes no sense in context7.
And then there is Jazzband. For over ten years, Jazzband operated as a cooperative for Python open source projects. It maintained 84 projects with roughly 93,000 GitHub stars, shipping software downloaded more than 150 million times a month. pip-tools, prettytable, django-debug-toolbar: tools embedded deep in the infrastructure of thousands of companies. In March 2026, its founder, Jannis Leidel, announced it was sunsetting. The reason was direct: “GitHub’s slopocalypse, the flood of AI-generated spam PRs and issues, has made Jazzband’s model of open membership and shared push access untenable”8.
These are not isolated incidents. They are a pattern. Not every project is affected equally. Well-funded, corporate-backed projects like Linux and Kubernetes have governance structures and paid maintainers that absorb the pressure. The crisis is hitting the long tail: the single-maintainer libraries, the volunteer-run cooperatives, the small but critical projects buried deep in dependency chains that nobody thinks about until they break. Those are the projects that make up the bulk of the 911 components in the average codebase. And those are the ones walking away. GitHub itself, the platform that hosts the majority of the world’s open source software, has formally acknowledged the crisis. In early 2026, a GitHub product manager opened a community discussion to address what she called “a critical issue affecting the open source community.” The platform is now exploring measures that would have been unthinkable two years ago: disabling pull requests entirely, restricting them to trusted collaborators, deploying AI triage tools to filter submissions9. The core mechanism of open source collaboration, the ability for anyone to propose an improvement, is being reconsidered because the volume of low-quality AI-generated contributions has overwhelmed the people responsible for reviewing them.
The cracks in the walls
When the stonemasons are overwhelmed, bad code gets through. And when bad code gets through at scale, the cracks start to show.
Georgia Tech’s Vibe Security Radar, a project that tracks security flaws directly introduced by AI coding tools, recorded 35 new publicly disclosed vulnerabilities in March 2026 alone. That was up from six in January and fifteen in February. The researchers estimate the true number is five to ten times higher, because most traces of AI involvement are stripped before code is submitted10.
To be fair, humans have always written insecure code. The difference is that AI produces it at a volume and plausibility that makes it far harder to catch. Veracode’s GenAI Code Security Report found that 45% of AI-generated code contains the kind of flaws that lead to data breaches: passwords left visible in the code, security checks that were never built, doors left unlocked because the model didn’t know they needed to be locked11.
The deeper problem is how AI handles errors. When a security measure prevents an application from running, a human developer investigates why. An AI coding agent often takes a different approach: it removes the thing causing the error. Columbia University researchers documented this pattern repeatedly12. It is the equivalent of a smoke alarm going off and the AI ripping it out of the ceiling instead of checking for a fire.
The consequences are not theoretical. In February 2026, security researchers at Wiz discovered that Moltbook, a social networking platform, had exposed 1.5 million authentication tokens and 35,000 email addresses to the open internet. The founder had publicly stated he “didn’t write one line of code.” The AI had set up the database with the doors wide open during development, and the founder, who had never looked at the underlying code, shipped it to the public as-is13.
Meanwhile, the maintenance debt continues to compound. 92% of audited codebases contain components more than four years out of date. 93% include components with no development activity for at least two years14. The maintainers who would normally catch these issues, update old dependencies, and patch security holes are the same people now spending their time rejecting cement-gun submissions.
The stonemasons were never just gatekeepers. They were the immune system. When they leave, the cathedral does not just stop growing. It starts to crack.
The cement gun factory
The companies that built the cement guns did so using stone from the cathedral. GitHub Copilot, the most widely adopted AI coding tool, was trained on open source code hosted on GitHub. Every major AI model, GPT, Claude, Gemini, Llama, was built on open source infrastructure. Linux runs the servers. PyTorch and TensorFlow provide the machine learning frameworks. Thousands of libraries maintained by volunteers make the entire stack possible.
These companies then shipped tools that let anyone generate pull requests, issues, and bug reports at machine speed, aimed directly at the maintainers who were already stretched thin. According to the most recent data, 60% of open source maintainers are unpaid15. The projects they maintain are downloaded billions of times a month. The companies whose products depend on that code generate hundreds of billions in revenue.
The irony is sharp. GitHub, which popularized AI-assisted coding with Copilot, trained on the very open source code that maintainers were burning out maintaining for free, is now considering disabling the mechanism that made open source collaborative in the first place. Stefan Prodan, a core maintainer of Flux CD, summarized the contradiction: “AI slop is DDoS-ing open source software maintainers, and the platforms hosting open source software projects have no incentive to stop it. On the contrary, they’re incentivized to inflate AI-generated contributions to show ‘value’ to their shareholders”16. A DDoS attack is when a system is deliberately flooded with so much traffic that it stops functioning. That is what is happening to open source, except nobody is doing it on purpose. The flood is a side effect of tools that were built to help.
This is not malice. It is negligence. The cement gun makers never thought about the stonemasons. They were thinking about the developers buying the guns. And the developers share responsibility too. Submitting code you have not reviewed or tested is not contributing to the cathedral. It is adding to the load on the people who maintain it.
Who pays for the stonemasons?
This is not an argument against AI. AI-assisted development is genuinely transformative and it is not going away. The tools are extraordinary. The speed is real. The accessibility they provide to people who could never have built software before is a legitimate breakthrough.
The argument is that the companies profiting most from open source, and whose AI tools are generating the load that is breaking it, have a direct commercial interest in keeping the ecosystem alive. And so far, they are not acting like it.
Think of it as supply chain maintenance. If a car manufacturer discovered that 97% of its vehicles contained parts from a single supplier, and that supplier’s factory was staffed by volunteers who were being overwhelmed to the point of walking away, the manufacturer would not shrug. It would fund the factory. Not out of generosity, but because the alternative is a supply chain that fails.
That is the position the AI industry is in. OpenAI, Anthropic, Google, Meta, and Microsoft built their products on open source. Their coding tools are the primary source of the new stress on maintainers. They have the resources to fund maintainership at scale. And they have the most to lose if the ecosystem degrades, because their own models will increasingly be trained on code that has passed through weakened review processes, compounding errors into the next generation of AI-generated output. They are not alone in this responsibility. Every company shipping products built on open source dependencies is consuming infrastructure it did not pay for. But the model companies are the sharpest case: they created the tools, they profit from the ecosystem, and they are generating the load. The obligation starts with them.
Some efforts exist. Google Summer of Code is in its twenty-second year. The Linux Foundation and NumFOCUS fund selected projects. Individual companies sponsor specific dependencies. But these are drops in an ocean. Jazzband’s projects are downloaded 150 million times a month. Its founder was one person. The gap between the value extracted from open source and the resources invested in maintaining it has never been wider, and AI is accelerating that gap from both directions: increasing the load while the workforce shrinks.
The solutions are not mysterious. Direct salaries for maintainers of critical projects. Funding proportional to how heavily a dependency is used. A percentage of Copilot revenue directed back to the projects it was trained on. These are tractable mechanisms, not abstract aspirations. The money exists. The question is whether anyone will spend it before the maintenance crisis becomes a security crisis that forces their hand.
There is also regulatory pressure building. The EU Cyber Resilience Act, which takes effect in September 2026, will impose new security and maintenance obligations on software products that contain open source components. Companies that ship products built on unmaintained open source code will face compliance risks they have never had to consider. The regulation does not solve the problem. But it makes the cost of ignoring it harder to defer.
The stonemasons are the point
There is a deeper lesson in all of this, and it cuts against the loudest narrative in the industry right now.
The prevailing story is that AI is making developers obsolete. Companies are cutting engineering headcount. Founders boast about shipping entire products without writing a line of code. The assumption is that since AI can write code, the people who used to write it are no longer needed.
The open source crisis reveals the opposite. The bottleneck in software was never writing code. It was always judgment: knowing whether the code is correct, secure, maintainable, and aligned with the project’s long-term architecture. AI has made the writing trivially cheap. That makes the judgment exponentially more valuable, not less.
The maintainers walking away from open source projects are not people who wrote code. They are people who reviewed it, understood it, and decided whether it belonged. They carried the context that no model has: why a particular design decision was made three years ago, which dependency has a history of breaking under load, what the regulatory implications of a seemingly minor change might be. That is the skill the age of limitless code demands more of, not less.
Every project that shut its doors in the past three months did so because it lacked reviewers, not contributors. Every security flaw that slipped through did so because the people who would have caught it were overwhelmed or gone. The cement guns produced volume. The stonemasons provided judgment. The industry is celebrating the volume and starving the judgment.
Companies cutting developers are optimizing for the part of software engineering that just got automated while hollowing out the part that just became critical. They are firing the stonemasons and buying more cement guns. The cathedral will tell them, eventually, what that costs.
The stonemasons are not asking for the cement guns to be taken away. They are asking for the companies that made the guns, and the companies whose products depend on the cathedral still standing, to help maintain it. Not by spraying more cement. By funding the craft that keeps the structure sound.
If they don’t, the cathedral will not collapse overnight. It will degrade slowly, stone by stone, maintainer by maintainer, until the day something critical fails and everyone asks why nobody was looking after the foundations.
Everybody depended on it. Nobody thought it was their job.
W3Techs, “Usage Statistics of Operating Systems for Websites,” March 2026. Linux powers approximately 80% of web servers globally. Virtually all major cloud platforms (AWS, Azure, GCP) run on Linux.
Black Duck, “2026 Open Source Security and Risk Analysis (OSSRA) Report,” February 2026. Based on audits of 947 codebases across 17 industries.
InfoWorld, “Is AI killing open source?” February 2026. Citing maintainer estimates of review asymmetry. The New Stack reports a 12:1 ratio between review time and generation time.
Daniel Stenberg, curl PR #20312, January 2026. The bug bounty program had operated since 2019 and paid out $86,000 for 78 valid vulnerabilities before being shut down.
Mitchell Hashimoto, Ghostty contribution policy, January 2026. “This is not an anti-AI stance. This is an anti-idiot stance. Ghostty is written with plenty of AI assistance and many of our maintainers use AI daily.”
Steve Ruiz, tldraw contribution policy, January 2026. “If writing the code is the easy part, why would I want someone else to write it?”
Rémi Verschelde, Godot Engine, February 2026. “With every request we now have to repeatedly ask ourselves questions like: was the code even partially written by a human, did the author understand it, did they test it?”
Jannis Leidel, “Sunsetting Jazzband,” March 14, 2026. Jazzband maintained 84 projects with 3,135 members across 56 countries over its ten-year history.
Camilla Moraes, GitHub community discussion, February 2026. Proposed measures include disabling PRs, restricting to collaborators, and deploying AI triage tools.
Infosecurity Magazine, “Researchers Sound the Alarm on Vulnerabilities in AI-Generated Code,” March 2026. Citing the Vibe Security Radar project at Georgia Tech’s Systems Software & Security Lab.
Veracode, “GenAI Code Security Report 2025.” Found that 45% of AI-generated code introduces security vulnerabilities, with LLMs choosing insecure methods nearly half the time.
Reya Vir, “The Reality of Vibe Coding: AI Agents and the Security Debt Crisis,” Towards Data Science, February 2026. Based on research at Columbia University evaluating top coding agents.
Wiz Security, “Exposed Moltbook Database Reveals Millions of API Keys,” February 2026. The platform was built entirely through AI-assisted coding without manual code review.
Black Duck, “2026 OSSRA Report.” 92% of codebases contained components four or more years out of date; 93% contained components with no development activity for at least two years.
Tidelift, “State of the Open Source Maintainer Report 2024.” The Brookings Institution has separately documented that most critical open source infrastructure is maintained by very small teams, often one or two people.
Stefan Prodan, cited in InfoQ, “AI ‘Vibe Coding’ Threatens Open Source as Maintainers Face Crisis,” February 2026.



