Where does the community stand?
Most organizations with a formal policy permit AI-assisted contributions — but the conditions attached to that permission vary enormously. Complete bans tend to cluster around lower-level system projects where reviewer time is scarcest.
Policy adoption has accelerated sharply
The pace of policy adoption has grown substantially each year. The debate has shifted from "should we address this?" to "what specifically should we require?" — with disclosure labels, DCO compatibility, and quality standards emerging as central battlegrounds.
Quality vs. Copyright vs. Ethics
The Quality, Copyright, and Ethics ratings in this analysis are qualitative assessments based on close reading of each policy's language, framing, and stated justifications. Different researchers might reasonably categorize these policies differently based on their own interpretations.
High indicates the concern is explicitly named as a primary motivation for the policy, appears multiple times in the document, or drives the policy's core requirements. For example, a policy rated "High" on Quality might explicitly discuss reviewer burden, require contributors to explain their code, or cite "AI slop" as a reason for restrictions.
Medium indicates the concern is mentioned but not as the central driver, or is implied through requirements without being explicitly emphasized. A "Medium" Copyright rating might apply to a policy that requires DCO sign-off but frames this as standard practice rather than an AI-specific concern.
Low indicates the concern receives minimal or no explicit attention in the policy text. A "Low" Ethics rating typically means the policy focuses on practical or legal matters without addressing broader questions of transparency, consent, environmental impact, or community values.
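The three-level rubric above is easy to encode mechanically, which is how we tabulated the survey. The sketch below is illustrative only: the enum values paraphrase the definitions given above, and the sample record uses an invented organization name, not a real survey row.

```python
from enum import Enum

class Rating(Enum):
    """Qualitative levels used for the Quality, Copyright, and Ethics axes."""
    HIGH = "explicitly named as a primary motivation; drives core requirements"
    MEDIUM = "mentioned or implied, but not the central driver"
    LOW = "minimal or no explicit attention in the policy text"

# Hypothetical record for one surveyed policy (illustrative values only).
policy = {
    "organization": "Example Project",
    "quality": Rating.HIGH,      # e.g. the policy cites reviewer burden
    "copyright": Rating.MEDIUM,  # e.g. DCO sign-off framed as standard practice
    "ethics": Rating.LOW,        # e.g. no discussion of transparency or consent
}
```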
The labeling fragmentation problem
Even among projects that require disclosure, there's no standard format. The commit-tag ecosystem is fragmenting into incompatible conventions, making cross-project analysis difficult.
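To make the cost of that fragmentation concrete, consider what a cross-project scanner has to do: recognize every convention at once. The sketch below is a minimal illustration in Python; the trailer names are invented for the example and do not represent any specific project's actual convention.

```python
import re

# Hypothetical, mutually incompatible disclosure trailers a cross-project
# scanner would need to recognize (names are illustrative, not real standards).
TRAILER_PATTERNS = [
    re.compile(r"^Assisted-by:\s*(?P<tool>.+)$", re.MULTILINE),
    re.compile(r"^AI-Generated:\s*(?P<tool>.+)$", re.MULTILINE),
    re.compile(r"^Co-authored-by:\s*(?P<tool>.+\[bot\].*)$", re.MULTILINE),
]

def find_ai_disclosures(commit_message: str) -> list[str]:
    """Return every disclosed tool, regardless of which convention was used."""
    tools = []
    for pattern in TRAILER_PATTERNS:
        tools += [m.group("tool").strip() for m in pattern.finditer(commit_message)]
    return tools
```

Each new convention a project invents adds another pattern to every downstream tool, which is precisely why cross-project analysis is currently so difficult.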
Key findings
DCO as the legal fulcrum
The Developer Certificate of Origin (DCO) is emerging as the primary legal mechanism. Organizations like the Linux Foundation and QEMU frame acceptable AI use in terms of whether the contributor can truthfully sign off that they have the right to submit the code.
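The appeal of the DCO framing is that the mechanism itself is trivial to check: a commit either carries a well-formed `Signed-off-by:` trailer (as produced by `git commit -s`) or it does not. A minimal sketch of such a check, with a function name of our own invention rather than any project's real API:

```python
import re

# Matches a "Signed-off-by: Name <email>" trailer as emitted by `git commit -s`.
SIGNOFF_RE = re.compile(r"^Signed-off-by:\s*.+\s<.+@.+>$", re.MULTILINE)

def has_dco_signoff(commit_message: str) -> bool:
    """True if the message contains at least one well-formed sign-off trailer.

    The sign-off certifies the contributor has the right to submit the change;
    what that certification implies for AI-generated code is exactly the
    question these policies contest.
    """
    return bool(SIGNOFF_RE.search(commit_message))
```

Note that the hard part is not parsing the trailer but the human judgment behind it: the check above verifies form, while the policy debates are about whether the certification can be made truthfully at all.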
Few policies address agentic AI
Most policies assume a human is driving the AI tool. Only a handful — like Matplotlib, which explicitly bans automated bots and agents from posting AI-generated content — address fully autonomous agents that submit code without direct human prompting.
Quality concerns trump legal ones
While copyright contamination gets the headlines, most policies prioritize reviewer burden and code quality. Maintainers report drowning in low-quality AI-generated submissions that waste scarce volunteer time.
Bans cluster around system-level code
Projects maintaining kernels, drivers, hypervisors, and security-critical infrastructure (Gentoo, NetBSD, QEMU, Cloud Hypervisor) are more likely to ban AI entirely. The risk profile differs when bugs can brick hardware or enable exploits.
Documentation is the exception zone
Even projects with strict code policies often carve out exceptions for documentation, translations, and non-executable contributions — acknowledging different risk profiles for different contribution types.
Policies are evolving rapidly
Many organizations explicitly note their policies are "living documents" subject to revision. The conversation is far from settled, and what's permissive today may tighten tomorrow.
All surveyed organizations
Click any organization name to view their policy (where available).
| Organization | Type | Stance | Quality | Copyright | Ethics | Year |
|---|---|---|---|---|---|---|