
Software Harm Reduction

There's a repository on Codeberg called open-slopware keeping a running list of Free and Open Source Software projects that are permitting, encouraging, or quietly integrating generative AI into their codebases, along with listing AI-free alternatives wherever possible. I came across it a few weeks ago and have been thinking about it because of the specific projects on the list.

Ladybird, the from-scratch browser engine I had been excited about, an attempt to build without leaning on Chromium or Gecko, is listed there.

Python is listed. systemd is listed. curl is listed. rsync is listed. Terminal emulators, editors, password managers, search engines, messaging apps, game engines, audio frameworks, VPN tools. The well has been poisoned.

How Do We Even Know?

"AI-generated code" isn't always visible, it can be stealthy or ambiguous. The open-slopware maintainers use several kinds of evidence to flag a project.

The most direct is an instruction file for AI coding tools, such as a CLAUDE.md, AGENTS.md, or .cursor config file, checked into the repository. If a project has one, someone on the team is actively using AI to write or review code in that codebase. Ghostty has an AI_POLICY.md declaring "AI is Welcome Here". Hugo has both AGENTS.md and CLAUDE.md. ohmyzsh explicitly allows LLM contributions.
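A minimal sketch of that check, using the filenames named above (.cursorrules is one common form of the Cursor config; the demo directory and its contents are illustrative):

```shell
# Look for AI instruction files in a project checkout. Filenames come from
# the examples in the post; .cursorrules is one common Cursor config form.
check_ai_files() {
  for f in CLAUDE.md AGENTS.md AI_POLICY.md .cursorrules; do
    [ -e "$1/$f" ] && echo "AI instruction file present: $f"
  done
  return 0
}

# Demo against a throwaway directory standing in for a cloned repo.
tmp=$(mktemp -d)
touch "$tmp/CLAUDE.md"
check_ai_files "$tmp"
```

On a real project you would point `check_ai_files` at the root of the clone; no hit proves nothing, but a hit is about as direct as the evidence gets.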

There can also be co-author attributions in a commit. If a dev uses Claude or Copilot to write code and that tool is credited, the commit history records it. You can search a repository's commits for Co-authored-by: Claude or Co-authored-by: GitHub Copilot. systemd has them. rsync's two most recent commits at the time of writing were co-authored by Claude. Kitty has multiple. ohmyzsh has both Claude commits and Copilot commits.
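That search is one git command. The sketch below builds a throwaway repo with a fabricated commit so the search has something to find; on a real project you would just run the final `git log` in its checkout:

```shell
# Demonstrate searching commit history for AI co-author trailers.
# The repo, author, and commit message here are fabricated for the demo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m 'Fix buffer handling

Co-authored-by: Claude <noreply@anthropic.com>'
# --grep matches against the full commit message, trailers included;
# -i makes it case-insensitive, --all covers every branch.
git log --all -i --grep='Co-authored-by: Claude' --oneline
```

Swap in 'Co-authored-by: GitHub Copilot' to catch the other common trailer.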

Finally, a dev or maintainer may publicly discuss their use of AI in blog posts, podcasts, or social media, or a project may use AI tools for automated PR review (as oh-my-bash does with Qodo, which reviews every pull request).

None of this tells you how much of the codebase is AI-generated, or whether the generated code is any good. It just shows the door is open.

An Event Horizon

I don't know if this is pessimistic of me, but I think we are approaching the point where it will be near-impossible to use software that doesn't, in some manner, contain generative AI code. We may already be there. The thing is, I actually think we've seen this movie before.

There have been Free/Libre and Open Source Software absolutists for decades. Richard Stallman, the FSF's founder, runs a Libreboot-flashed ThinkPad X200 with Trisquel GNU/Linux. He refuses to connect to any portal that would require non-trivial, non-free JavaScript. He changes his MAC address at every location to prevent identity databases. He has been trying for years to find machines with fully free firmware.

There are those who run Devuan or Artix specifically to avoid systemd, the init system now ubiquitous in GNU/Linux. There are over twenty active Linux distributions maintained specifically for people who want to run a systemd-free stack. Hurray!

This isn't because systemd is evil; rather, it's because some people believe deeply in the Unix philosophy of small, composable tools, systemd violates that philosophy, and so they refuse on principle.

This is the precedent. There have always been people who took their software ethics seriously enough to accept the inconvenience. The anti-genAI absolutist is, in this light, not a new creature but a familiar one.

Two Paths Forward

The way I see it, there are two coherent responses to this situation.

The first is absolutism. Be consistent with your values, regardless of the work and challenge of replacing numerous critical pieces of infrastructure with alternatives, forgoing infrastructure when no alternative exists, or building it yourself. This means replacing, or abandoning, everything on the open-slopware list that you depend on.

It is doable. The open-slopware list provides alternatives. But it compounds every time a dependency you use touches a project that touched AI code. The Chardet Python library had a core developer use Claude to effectively launder LGPL-licensed code into a more permissive MIT license, which is both an ethical problem and a legal one. Python itself now contains Claude-authored code, which means every Python project downstream carries that inheritance. If you're a Python developer committed to absolutism, the math starts to hurt.

The second path, arguably the more reasonable one, is harm reduction. Use as little genAI-contaminated software as possible. Know what you're using and why. Be critical and vocal about it. Use LibreWolf, but accept that the Firefox source it's built on was written by an organization that is, at this point, functionally an AI company. Use Python, but know what it is now. Use the alternatives where the swap is painless. Refuse where refusal is possible. And keep the receipt.

I'm somewhere in the harm reduction camp, reluctantly. I've made peace with my own hypocrisy, and that's more useful than pretending I can build a clean room.

The Forever-Rulers

GenAI contamination is not the only ethical fault line in open source software. There is a related, older, and arguably more entrenched problem: the Benevolent Dictator for Life.

The BDFL is a term for the original creator of an open source project who retains final decision-making authority indefinitely. The list of BDFLs includes the creators of Ruby, Python, Perl, Linux, Clojure, Elm, Zig, Scala, Laravel, Vue.js, and dozens of other projects that form the load-bearing walls of modern software development. Universally men. Typically white. Often running projects that thousands of people depend on professionally and personally. In a lot of cases, this arrangement functions fine. In some cases, it becomes catastrophically personal and a single point of failure.

David Heinemeier Hansson, creator of Ruby on Rails and co-founder of Basecamp, has spent years becoming one of tech's most vocal figures in what one critic called a pattern of "othering and stigmatising." The incident that brought this to wider attention was a September 2025 blog post about London's demographics, in which DHH used "native Brit" as a proxy for white British, equating whiteness with nativeness in a way that non-white Londoners found directly harmful. It arrived alongside a history of other hateful comments, including a post describing an ad featuring a plus-sized Black woman as "grotesque," and public sympathy for far-right demographic panic. In response, a group of Rails contributors drafted Plan vert, an open letter calling for a fork of Rails and a new Code of Conduct. Sponsors like Sidekiq pulled funding from Ruby Central. DHH dismissed the backlash. Rails continues.

Matt Mullenweg, co-founder of WordPress and CEO of Automattic, controls a CMS that powers 40% of websites on the internet. In 2024 he launched a campaign against WP Engine, a major WordPress hosting provider, that included private messages threatening to "go nuclear" and "scorched earth" on the company unless it agreed to pay 8% of gross revenues to Automattic, then delivered on that threat publicly during his WordCamp US keynote while WP Engine was a sponsor. WP Engine filed a lawsuit accusing him of extortion and abuse of power. The court granted a preliminary injunction. Mullenweg also banned trans Tumblr user predstrogen while she was already being subjected to transmisogynistic harassment the platform wasn't addressing, and then followed her to other platforms to publicly expose her private account names. This was condemned by Tumblr's own trans employees, who posted on the staff blog that he did not speak for them.

Automattic is, notably, also one of the entities that has been selling WordPress.com and Tumblr user content to AI training companies. The genAI question and the BDFL question are, at times, one and the same.

Is this a case where we can separate the...er, art from the artist? Both Rails and WordPress are maintained by hundreds of contributors who did not sign up to endorse their BDFL's political positions or personal conduct. The code in Rails is not xenophobic. The code in WordPress did not follow predstrogen to Twitter. But these men hold final authority, and they've demonstrated what they'll do with it.

Whether you can ethically separate the infrastructure from the person who controls it doesn't have an easy resolution.

A Silver Lining?

If I'm going to look for a silver lining to this situation, I think the rise of genAI has cultivated strong values and ethical steadfastness through the repulsion it provokes. People who previously never really considered their software stack are now thinking carefully about it. I can only hope it's a gateway to getting people more mindful about the politics, exploitation, and harm in many other dimensions of their lives.

I wrote about this recently, comparing the harm of genAI with the harm of the meat industry; the cognitive dissonance of genAI bros reminds me of the cognitive dissonance of uncritical meat eaters.

That was easily my most controversial piece of writing. I received a lot of pushback for it, and even had insults hurled at me. I can concede that my passion about the issue made me far too zealous, and I ended up moralizing when I shouldn't have (that's no way to change anybody's mind, after all).

My point here is that being mindful and critical of what we use every day is a good thing. In truth, there is no way to separate the political from anything. Anybody who attempts to do so has privilege most don't, and is burying their head in the sand at a time when it is more important than ever to be aware and intentional.

Coda

Look, know what you're running. Use lists like open-slopware as a reference, not a verdict. Make the swaps that are easy. Resist where resistance is low-friction. Accept where you're making a trade-off, and name it to yourself.

Advocate within the projects you care about. The GNOME project's rules against AI-generated contributions were created by a community that wanted them and spoke up. LibreWolf made public statements and actively worked to remove Mozilla's AI integrations. Servo, the new browser engine run as a cooperative, has strong AI contribution protections built into its contributing guidelines.

If you're a developer, consider:

  • Your own contribution practices.
  • What you're asking other maintainers to review.
  • Whether your own usage of AI coding tools is something you're disclosing or obscuring.

The alternatives we're supposed to advocate for, the alternative browsers we use to browse alternative hosting, the alternative text editors we use to write about these issues, the alternative search engines we use to find alternatives? They're becoming genAI-contaminated. But that isn't a reason for despair. It's a reason for the ongoing, imperfect, honest attention that harm reduction always requires.
