The rise of AI-generated contributions is forcing the open-source ecosystem – including big players like GitHub – to rapidly reevaluate its rules of engagement.
AI slop, which was Macquarie Dictionary’s 2025 word of the year, is defined as “low-quality content created by generative AI, often containing errors, and not requested by the user,” and it’s making an appearance across open-source repos in pull requests (PRs) and vulnerability reports.
“AI slop is generally folks who either don’t understand the core problem, the solution crafted, or both,” said Jeffrey Paul, VP of open-source solutions at digital transformation agency Fueled.
Open-source project WordPress, which Fueled contributes to, recently rolled out AI guidelines. These include asking developers to disclose the use of AI and clarifying what is considered “AI slop,” such as large code dumps, in an effort to help maintainers.
In most cases, developers using AI tools want to help, but without fully understanding the problem or the solution, they don’t realize the burden they place on maintainers, who must then wade through hundreds, or thousands, of lines of generated code.
The impact of AI slop on the open-source community has already driven the world’s largest source code repository, GitHub, to announce several features to tackle this problem, which will put maintainers in the driving seat and could shape how open-source projects operate going forward.
Exacerbating age-old problems
Take cURL, the popular command-line tool and library for transferring data over network protocols, whose bug bounty program has been overrun with AI slop. In an interview with LeadDev, its founder, Daniel Stenberg, described how AI slop submissions gradually got worse throughout 2025, until he decided to close the program at the start of this year. By mid-2025, he found that only about 5% of submissions were genuine vulnerabilities. “It hasn’t stopped it; we still get them,” said Stenberg, who now receives security reports over email or on GitHub.
Analyzing security reports is one of the toughest tasks for maintainers: it demands a deep understanding of the codebase, and the process is closed because of the reports’ sensitivity. In the case of cURL, only seven maintainers review these reports.
RubyGems, the package manager for the Ruby programming language, is mulling a similar move because it hasn’t seen a valid report in months, said Marty Haught, director of open source at Ruby Central.
“It takes time. If you get 20 or 50 of these in a week, that could be a full day just reviewing that, where it was maybe 15 minutes previously. And so, now, what do we do?” said Haught, who has seen report volumes grow tenfold, with many looking valid at first glance.
Reviewer time has always been a precious commodity, and AI is only exacerbating that problem, said Jason Brooks, senior manager within Red Hat’s open source program office. In Intel’s latest open source community survey, 36% of respondents listed documentation and onboarding as their top challenge, while 26% cited burnout.
“I certainly think AI slop is making [burnout] a lot worse,” Stenberg said. “My experience of it is that it feels more stupid because it feels more like fighting a machine rather than humans, and that’s just even more tiring and exhausting.”
Becoming more closed?
AI slop isn’t just impacting projects where there are financial incentives. Tldraw, a software development kit for the creation of whiteboards and infinite canvases, became overwhelmed with AI slop in late 2025 and decided to close external contributions for “the good of the project.” While the project is not open-source, it does keep its codebase on GitHub and accepts contributions under a contributor license agreement (CLA).
“We would get PRs from people who seemed like they put a lot of work into this PR, but they would never reply to sign the contributor license agreement, and it was like, ‘Man, you put all this time into this code, and then, you just never came back to the thread, like, what?’ That’s very unusual,” said Steve Ruiz, founder of tldraw, who said that on the surface the code looked good, but it was clear AI tools had been prompted to fix the problem.
With no way to restrict who could create pull requests and a growing backlog of issues, he wrote a script to close all external requests. Now GitHub is introducing changes that will give maintainers more control, akin to what Ruiz implemented manually, including the option to limit who can submit pull requests, disable pull requests entirely, or remove them from the UI, with further changes to come following a consultation process.
“The volume of contributions is only going to grow – and that’s a good thing,” said Ashley Wolf, director of open source programs at GitHub, in the blog post. “But just as the early internet evolved its norms and tools to sustain community at scale, open source needs to do the same.”
Maintainers don’t see this as a closing off of the open-source ecosystem, but rather a necessary evil to tackle a problem that’s out of control.
“I don’t see it becoming more closed, and I think that the reality is that many open source projects are not and have never been in a great position to take in all the contributions that may come in,” Brooks said.
cURL’s Stenberg has been asked about requiring a monetary deposit instead of closing the bug bounty program, but he pushed back, as this goes against the open-source philosophy of being available, accessible, and transparent.
Another solution could be some form of trust network, where a contributor’s experience or humanness is validated before their contributions are accepted. However, both deposits and trust networks could be damaging to junior developers, who are already struggling to get a start in software development.
A new operating model
Ruiz questions whether external code contributions still make sense with the arrival of AI coding tools. It might be more useful for contributors to start identifying and defining problems in the code, he said, and then for maintainers, who have better knowledge of the codebase, to use and guide AI tools in fixing the problem.
However, this could create challenges in the long term, leaving fewer developers with intimate knowledge of the codebase or hands-on implementation skills. Even before the arrival of AI tools, open-source projects, including the largest, were struggling to get developers to move up the contributor pyramid into senior maintainer and reviewer roles, Brooks said.
This is something GitHub identifies as an area of opportunity in its blog post, noting that the definition of a “contribution” still leans heavily on code authorship, while a project like WordPress gives “props” for writing, reproduction steps, user testing, and community support.
Ruiz believes the majority of AI slop pull requests came from misguided students, while he suspects a minority could be predatory accounts looking to build credibility to undertake a supply chain attack – he refers to this as his “tin foil hat” theory.
“Universities haven’t really caught up,” Ruiz said. “The bodies that evaluate what a good potential software engineer profile looks like have absolutely not caught up to the fact that it is totally possible to automate open-source contributions and to make those contributions using AI.”
Just as CLAs helped Ruiz identify the arrival of AI contributions, Fueled’s Paul suggests that projects could start to implement more onerous CLAs to try to prevent AI slop.
AI tools might even help combat slop. While cURL received significant amounts of AI slop through its bug bounty program, it has been less of a problem in pull requests, which Stenberg speculates is because of the project’s extensive test coverage: if a pull request fails the test suite, it isn’t given any attention. AI tooling could help bolster CI/CD processes, such as making it quicker to write and optimize test cases, or monitoring the codebase to flag risks earlier in the integration process. Since August 2025, cURL has used two AI-powered code analyzers that have helped fix over 100 bugs, Stenberg said.
Stenberg even suggests there might be a simpler solution to the problem.
“Part of the explanation for this tsunami is that it’s too cheap,” he said. “The companies are not actually making money on this, so we’re all using this so cheaply. Ideally, at some point, when we start to pay for it, that might also help to actually slow down, because then there’ll be less of an incentive to just hunt for things if you actually had to pay for it as well.”